
"Deep learning as program synthesis" by Zach Furman

LessWrong (Curated & Popular)


Induction heads and in‑context learning

Zach discusses induction heads as mechanistic attention circuits that enable in‑context learning in transformers.
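As a rough illustration (not from the episode itself): an induction head behaves like a prefix-matching-and-copy rule, roughly "if the pattern [A][B] appeared earlier and the current token is [A], attend to that earlier [B] and predict it." Below is a minimal behavioral sketch in Python of that rule; the function name and example tokens are hypothetical, and this is not the actual attention circuit.

```python
def induction_head_prediction(tokens):
    """Behavioral sketch of an induction head: find the most recent earlier
    occurrence of the current (last) token and predict the token that
    followed it there ([A][B] ... [A] -> [B])."""
    current = tokens[-1]
    # Scan backwards over earlier positions for a match with the current token.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            return tokens[i + 1]  # copy forward the token that followed the match
    return None  # no earlier occurrence, so nothing to copy


# Example: having seen "Mr Dursley ... Mr", the rule predicts "Dursley".
print(induction_head_prediction(["Mr", "Dursley", "was", "proud", "Mr"]))  # -> "Dursley"
```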

Segment begins at 21:17.
