3min chapter

Neel Nanda - Mechanistic Interpretability

Machine Learning Street Talk (MLST)

CHAPTER

The Importance of Induction Heads in Context Learning

In toy two-layer attention-only language models, we found this circuit called an induction head. It's a real algorithm that works on, say, repeated random tokens. Text often contains repeated subsequences: after "Tim", "Scarfe" may come next, but if "Tim Scarfe" has appeared, like, five times, then it's much more likely to come next. When the induction head decides to look at "Scarfe" (which is determined purely by the QK matrix), it then just copies that to the output (which is done purely by the OV matrix). And I think induction heads are a really interesting circuit case study.
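The two-step algorithm described here (a QK match step that finds where the current token appeared before, and an OV copy step that forwards the token that followed it) can be sketched behaviorally in a few lines of Python. This is a minimal illustration of the algorithm an induction head implements, not an actual attention computation, and the function name `induction_predict` is made up for this example.

```python
import random

def induction_predict(tokens):
    """Mimic the induction-head algorithm on a token sequence.

    For each position, search backwards for the most recent earlier
    occurrence of the current token (the role played by the QK matrix),
    then predict the token that followed that occurrence (the copy
    performed by the OV matrix).
    """
    preds = []
    for i, tok in enumerate(tokens):
        pred = None
        for j in range(i - 1, -1, -1):
            if tokens[j] == tok:
                pred = tokens[j + 1]  # copy the token that came next
                break
        preds.append(pred)
    return preds

# Repeated random tokens: the second half repeats the first, so the
# algorithm should predict the second half almost perfectly.
random.seed(0)
first_half = [random.randint(0, 9) for _ in range(8)]
seq = first_half + first_half
for tok, pred in zip(seq, induction_predict(seq)):
    print(tok, "->", pred)
```

On the repeated half of the sequence the predictions line up with the true next tokens, which is exactly the behavior on "repeated random tokens" described above.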
