
19 - Mechanistic Interpretability with Neel Nanda

AXRP - the AI X-risk Research Podcast

CHAPTER

The Key Takeaway From This Paper Is That Attention Is a Parameterized Matrix

The key takeaway from this is that the matrix W_K transpose W_Q is the main thing that matters. Every destination position gets the exact same information from a source position; it can only choose how much to weight it. And finally, this just kind of reinforces the idea that attention is about routing information, because we're multiplying by the attention pattern on the source position axis. The value steps are interesting. So what's going on is: you start with a tensor with a source position dimension and a d_model dimension, which is just the actual content of the residual stream. Then W_V acts on the content of the residual stream dimension, and W_O acts on the residual stream content.
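To make this concrete, here is a minimal single-head attention sketch in PyTorch. The dimensions and random weights are toy assumptions, not from the episode, and it uses the row-vector convention, so the combined query-key matrix appears here as W_Q @ W_K.T rather than W_K^T W_Q:

```python
import torch

# Toy dimensions, assumed for illustration only.
seq, d_model, d_head = 4, 8, 2
torch.manual_seed(0)

x = torch.randn(seq, d_model)    # residual stream: [position, d_model]
W_Q = torch.randn(d_model, d_head)
W_K = torch.randn(d_model, d_head)
W_V = torch.randn(d_model, d_head)
W_O = torch.randn(d_head, d_model)

# The attention scores depend only on the combined low-rank matrix
# W_Q @ W_K.T (the transcript's W_K^T W_Q, in its column-vector convention):
scores = x @ (W_Q @ W_K.T) @ x.T            # [dest pos, src pos]
pattern = torch.softmax(scores, dim=-1)     # each destination weights the sources

# Value path: W_V acts on the content (d_model) axis of each source position...
v = x @ W_V                                 # [src pos, d_head]
# ...the pattern multiplies along the source axis, routing information
# from sources to destinations...
z = pattern @ v                             # [dest pos, d_head]
# ...and W_O maps the result back into residual-stream space.
out = z @ W_O                               # [dest pos, d_model]
```

Note how the pattern only decides how much of each source flows to each destination; the content moved is identical regardless of the weight, which is the routing picture described above.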

