
19 - Mechanistic Interpretability with Neel Nanda

AXRP - the AI X-risk Research Podcast

CHAPTER

The Modular Addition Algorithm

It took me a week and a half of staring at the weights until I figured out what it was doing. The memorization component just adds so much noise and other garbage to the test performance that even though there's a pretty good generalizing circuit, it's not sufficient to overcome the noise. Then, as it gets closer to the generalizing solution, things somewhat accelerate. It eventually decides to clean up the memorizing solution and then suddenly figures out how to generalize.
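As a rough illustration of the dynamics described here, the sketch below trains a small network on modular addition and logs train versus test accuracy, so you can watch train accuracy saturate early (memorization) while test accuracy only jumps much later (the delayed generalization, or "grokking"). This is not the setup from the paper discussed in the episode (that used a one-layer transformer); the architecture, modulus, and hyperparameters here are illustrative assumptions.

```python
# Minimal grokking-style sketch (assumed setup, not the paper's exact one):
# train a small MLP on (a + b) mod P with heavy weight decay and watch
# train accuracy saturate long before test accuracy jumps.
import torch
import torch.nn as nn

P = 97                      # modulus for a + b (mod P)
FRAC_TRAIN = 0.3            # fraction of all (a, b) pairs used for training
torch.manual_seed(0)

# Build the full dataset of (a, b) -> (a + b) mod P and split it.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
n_train = int(FRAC_TRAIN * len(pairs))
train_idx, test_idx = perm[:n_train], perm[n_train:]

def one_hot(batch):
    # Concatenate one-hot encodings of a and b into a single input vector.
    return torch.cat([nn.functional.one_hot(batch[:, 0], P),
                      nn.functional.one_hot(batch[:, 1], P)], dim=-1).float()

model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
# Strong weight decay is what, in this sketch, eventually penalizes the
# memorizing solution and lets the generalizing one win out.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

x_train, y_train = one_hot(pairs[train_idx]), labels[train_idx]
x_test, y_test = one_hot(pairs[test_idx]), labels[test_idx]

for step in range(30_000):   # full-batch training; generalization can take many steps
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            train_acc = (model(x_train).argmax(-1) == y_train).float().mean()
            test_acc = (model(x_test).argmax(-1) == y_test).float().mean()
        print(f"step {step:6d}  train_acc {train_acc:.3f}  test_acc {test_acc:.3f}")
```

If the run exhibits the phenomenon, the printed log shows train accuracy near 1.0 early on while test accuracy lingers near chance, then climbs sharply many thousands of steps later; the exact timing depends on the weight decay and train fraction chosen above.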

