
19 - Mechanistic Interpretability with Neel Nanda
AXRP - the AI X-risk Research Podcast
The Principal Component Analysis of Induction Heads
Daniel Filan: I have a methodological question about one particular part of the paper that I found kind of interesting.

Neel Nanda: Sure. I should add the caveat that I did not do the methods in this paper, but I'll do my best.

Daniel Filan: All right. There's this part where you take models and represent them by the loss they get on various tokens, in various parts of text, right? And then there's this step where you do principal component analysis, where you say: here's one axis along which models can vary in loss space.

Neel Nanda: Yep. Such a cute result.

Daniel Filan: Yeah, okay, I had a bunch of questions about this. So I guess my first question was, there's this
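The step being discussed — representing each model as a vector of its per-token losses and running PCA over those vectors — can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's actual pipeline: the loss matrix, the one-dimensional "capability" factor, and all variable names here are assumptions made up for the example; real per-token losses would come from evaluating each model checkpoint on the same corpus.

```python
import numpy as np

rng = np.random.default_rng(0)

n_models, n_tokens = 8, 500
# Hypothetical generative assumption: models differ mainly along one
# latent "capability" axis, plus small noise, so the first principal
# component should capture most of the variance in loss space.
capability = np.linspace(0.0, 1.0, n_models)[:, None]       # (8, 1)
token_difficulty = rng.random((1, n_tokens))                # (1, 500)
losses = 3.0 - 2.0 * capability * token_difficulty          # per-token losses
losses += 0.01 * rng.standard_normal((n_models, n_tokens))  # noise

# PCA via SVD of the mean-centered (models x tokens) loss matrix.
centered = losses - losses.mean(axis=0, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)   # fraction of variance per component

# Each model's coordinate along the first principal axis of loss space:
pc1_scores = centered @ Vt[0]

print(f"variance explained by PC1: {explained[0]:.2%}")
```

Under the rank-one assumption baked into the synthetic data, PC1 ends up explaining nearly all of the variance, which is the kind of structure the PCA step in the paper is probing for across real models.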