
19 - Mechanistic Interpretability with Neel Nanda

AXRP - the AI X-risk Research Podcast


Using a Mod-113 Arithmetic Model: Learning the Sine Function on Mod-113 Data

The embedding, it's 113 because you want the thing you're taking the mod of to be prime, for various reasons, it just makes things a bit cleaner. And so there are these nine possible terms, and every neuron in the cluster can be well approximated as a linear combination of these. If you delete everything that's not covered by these terms, the model does basically just as well. Each neuron cluster is rank nine, even though there are often a hundred or more neurons in a cluster, which is kind of wild.
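To make the rank-nine idea concrete, here is a minimal sketch, not from the episode, of what that check could look like in numpy. It assumes the nine terms are the constant, the sine and cosine of each input at one key frequency, and their four products; the frequency `k` and the stand-in activations are placeholders, and you would substitute a real cluster's activations to see the effect described above.

```python
# Hypothetical sketch: for one key frequency k (mod p = 113), build the nine
# trig terms and regress a cluster of neuron activations onto them, then look
# at the cluster's effective rank.
import numpy as np

p = 113
k = 14  # placeholder key frequency; the real frequencies come from the trained model

# All (a, b) input pairs for addition mod p
a, b = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
wa = 2 * np.pi * k * a.ravel() / p
wb = 2 * np.pi * k * b.ravel() / p

# The nine terms: constant, cos/sin of each input, and the four products
terms = np.stack([
    np.ones_like(wa),
    np.cos(wa), np.sin(wa),
    np.cos(wb), np.sin(wb),
    np.cos(wa) * np.cos(wb), np.cos(wa) * np.sin(wb),
    np.sin(wa) * np.cos(wb), np.sin(wa) * np.sin(wb),
], axis=1)  # shape (p*p, 9)

# neuron_acts: activations of one neuron cluster on all p*p inputs,
# shape (p*p, n_neurons). Random stand-in here; use real model activations.
neuron_acts = np.random.randn(p * p, 100)

# Least-squares fit of each neuron as a linear combination of the nine terms
coefs, *_ = np.linalg.lstsq(terms, neuron_acts, rcond=None)
approx = terms @ coefs

# Fraction of variance explained by the rank-9 approximation
fve = 1 - np.sum((neuron_acts - approx) ** 2) / np.sum(neuron_acts ** 2)
print(f"Fraction of variance explained by 9 terms: {fve:.3f}")

# Rank check: for real cluster activations the singular values should drop
# sharply after the ninth, even with ~100 neurons in the cluster.
svals = np.linalg.svd(neuron_acts, compute_uv=False)
print("Top singular values:", np.round(svals[:12], 2))
```

With random stand-in activations the variance explained will be low; the point of the sketch is the shape of the check, i.e. fitting every neuron in a cluster against the same nine terms and confirming the cluster's activation matrix is approximately rank nine.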

