19 - Mechanistic Interpretability with Neel Nanda

AXRP - the AI X-risk Research Podcast

What's Happening in the Final Neural Network?

To me, the core goal of the field is to be able to look at a trained network and be really good at understanding what it does and why. My grokking work, which I'll hopefully chat about, is a pretty good example of this, where I tried really hard to understand the network at the end of training, and then what happened during training just fell out. Do you actually see that in practice? I'm not aware of a concrete example of seeing that, and I would be very surprised if it doesn't happen.

