
19 - Mechanistic Interpretability with Neel Nanda

AXRP - the AI X-risk Research Podcast


The Principal Component Analysis of Induction Heads

Daniel Filan: I have a methodological question about one particular part of the paper that I found kind of interesting.

Neel Nanda: Sure. I should add the caveat that I did not do the methods in this paper, but I'll do my best.

Daniel Filan: All right. There's this part where you take models and represent them by the loss they get on various tokens in various parts of text, right? And then there's this step where you do principal component analysis, where you say, here's one axis along which models can vary in loss space.

Neel Nanda: Yep. Such a cute result.

Daniel Filan: Yeah. Okay, I had a bunch of questions about this. I guess my first question was, there's this
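As a rough sketch of the kind of analysis being discussed (this is illustrative, not the paper's actual code or data): each model is represented as a vector of per-token losses, those vectors are stacked into a matrix, and PCA on that matrix reveals the main axis along which models vary in "loss space". All array shapes and the synthetic data below are assumptions for the sake of the example.

```python
import numpy as np

# Hypothetical setup: each row is one model, each column is the loss that
# model gets on one token position in some evaluation text.
rng = np.random.default_rng(0)
n_models, n_tokens = 8, 100

# Synthetic losses that vary mostly along a single hidden direction plus
# noise, mimicking the finding that one axis explains most variation.
hidden = rng.normal(size=n_models)        # one scalar per model
direction = rng.normal(size=n_tokens)     # how that scalar shows up per token
losses = np.outer(hidden, direction) + 0.01 * rng.normal(size=(n_models, n_tokens))

# PCA via SVD on the mean-centered loss matrix.
centered = losses - losses.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)

print(f"variance explained by first component: {explained[0]:.3f}")

# Each model's coordinate along that first axis in loss space:
pc1_scores = centered @ Vt[0]
```

Because the synthetic losses are nearly rank one, the first principal component captures almost all of the variance, and `pc1_scores` gives each model's position along that single axis.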
