The holy grail of mechanistic interpretability, at least for safety purposes, would be to figure out how models might implement some of these most concerning behaviors, and then be able to detect the formation of those subgraphs during the training process. Personally, I think it's just fascinating work that I'm very curious about independent of its consequences. It passes the "it's interesting on its own merits" test, but as I mentioned at the top, to me it also feels like a pretty promising path to safety.
