
19 - Mechanistic Interpretability with Neel Nanda
AXRP - the AI X-risk Research Podcast
Introduction
In this episode, we talk with Neel Nanda about his research into mechanistic interpretability. He's pursuing independent research and producing resources to help build the field of mechanistic interpretability. Around when this episode will likely be released, he'll be joining the language model interpretability team at DeepMind. We'll also discuss the papers "A Mathematical Framework for Transformer Circuits" and "In-context Learning and Induction Heads".
Transcript