I find it's kind of an audience-driven thing, where your goal is to create a tool that will be adopted by interpretability researchers and to convince them that it is actually meaningful. I think these two approaches complement each other: on one hand, if the architectures we were studying were more comprehensible, that would make mechanistic interpretability much easier. At the same time, work that builds some hopeful notion of interpretability into the loss function does need to be validated as actually interpretable down the line. Some models can learn strange solutions which appear interpretable but in reality are not.
