19 - Mechanistic Interpretability with Neel Nanda

AXRP - the AI X-risk Research Podcast

What's Happening in the Final Neural Network?

To me, the core goal of the field is to be able to look at a final network and be really good at understanding what it does and why. My grokking work, which I'll hopefully chat about, is a pretty good example of this, where I tried really hard on the network at the end of training, and then what happened during training just fell out.

Do you actually see that in practice?

I'm not aware of a concrete example of seeing that. And I would be very surprised if it doesn't happen.

