19 - Mechanistic Interpretability with Neel Nanda

AXRP - the AI X-risk Research Podcast

The Modular Addition Algorithm

It took me a week and a half of staring at the weights until I figured out what it was doing. The memorization component just adds so much noise and other garbage to the test performance that, even though there's a pretty good generalizing circuit, it's not sufficient to overcome the noise. Then, as it gets closer to the generalizing solution, things somewhat accelerate. It eventually decides to clean up the memorizing solution, and then suddenly figures out how to generalize.
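
The dynamics described here, where a memorizing circuit and a generalizing circuit coexist until weight decay cleans up the memorization, can be reproduced in a few dozen lines. Below is a minimal sketch of the modular addition grokking setup; it uses a small MLP and made-up hyperparameters (modulus, train fraction, learning rate, weight decay, step count), not the one-layer transformer from the actual paper, so treat it as illustrative only.

```python
# A minimal sketch of modular addition grokking. This is NOT the exact
# architecture or hyperparameters from the paper discussed in the episode
# (which used a one-layer transformer); all settings here are assumptions.
import torch
import torch.nn as nn

P = 113            # modulus for (a + b) % P (illustrative choice)
TRAIN_FRAC = 0.3   # fraction of all P*P pairs used for training (assumption)

# Build the full dataset of input pairs and their sums mod P.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P

# Random train/test split over all pairs.
perm = torch.randperm(len(pairs))
n_train = int(TRAIN_FRAC * len(pairs))
train_idx, test_idx = perm[:n_train], perm[n_train:]

def one_hot(x):
    # Concatenate one-hot encodings of a and b into a single input vector.
    return torch.cat([nn.functional.one_hot(x[:, 0], P),
                      nn.functional.one_hot(x[:, 1], P)], dim=-1).float()

model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
# Weight decay is the key ingredient: it slowly shrinks the memorizing
# weights, letting the generalizing circuit win out (the "cleanup" phase).
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

X_train, y_train = one_hot(pairs[train_idx]), labels[train_idx]
X_test, y_test = one_hot(pairs[test_idx]), labels[test_idx]

for step in range(50_000):  # full-batch training (assumption)
    opt.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            train_acc = (model(X_train).argmax(-1) == y_train).float().mean()
            test_acc = (model(X_test).argmax(-1) == y_test).float().mean()
        # Expect train accuracy to hit ~1.0 early, with test accuracy
        # jumping much later, once the memorizing weights have decayed.
        print(f"step {step}: train {train_acc:.3f}, test {test_acc:.3f}")
```

Run long enough with sufficient weight decay, train accuracy saturates early while test accuracy stays near chance, then jumps abruptly, matching the cleanup-then-generalize story in the quote above.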
