19 - Mechanistic Interpretability with Neel Nanda

AXRP - the AI X-risk Research Podcast

The Second Main Thing to Bear in Mind When Using Transformers

Transformers are fundamentally sequence modeling networks: their input is a sequence of tokens, which you can basically think of as words or subwords, and at each step they're doing the same processing in parallel for every element of the sequence. A decent chunk of a transformer's computation comes down to routing information between different positions, figuring out what information to route. Obviously we need to move information between positions, because we don't just want the output to be a function of only the current token.
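To make the routing idea concrete, here is a minimal sketch of single-head causal self-attention in NumPy, the mechanism transformers use to move information between positions. None of this code appears in the episode; the function name, weight matrices, and shapes are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, W_q, W_k, W_v):
    """tokens: (seq_len, d_model) array, one vector per position."""
    q = tokens @ W_q  # what each position is looking for
    k = tokens @ W_k  # what each position offers
    v = tokens @ W_v  # the information each position would send
    # Attention scores decide where to route information from; a causal
    # mask keeps each position from reading future tokens.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores[mask] = -np.inf
    weights = softmax(scores, axis=-1)  # (seq_len, seq_len) routing table
    # Each output is a weighted mix of other positions' values, so it is
    # no longer a function of the current token alone.
    return weights @ v

seq_len, d_model, d_head = 5, 8, 4
rng = np.random.default_rng(0)
tokens = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
out = self_attention(tokens, W_q, W_k, W_v)
print(out.shape)  # (5, 4): one routed-information vector per position
```

The attention weights are exactly the "figuring out what information to route" step: row i of the weight matrix says how much position i reads from each earlier position.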
