19 - Mechanistic Interpretability with Neel Nanda

AXRP - the AI X-risk Research Podcast

Anthropic Contribution Statements

Anthropic: I found these author contribution statements to be really good reading. Chris has an amazing blog post about credit and how he thinks about the importance of sharing credit generously and fairly with academic work. He puts a lot of thought into contribution statements, so I respect him a lot for that. Yeah, so it's kind of taking these versions of transformers that don't have the multilayer perceptron parts, that are just attention heads, up to two layers, and building a mathematical framework for them.
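
To make the idea concrete, here is a minimal illustrative sketch (not Neel Nanda's or Anthropic's actual code, and all names and sizes are assumptions) of the kind of model being described: a transformer whose MLP sub-layers are removed, leaving only embeddings, one or two attention layers with residual connections, and an unembedding.

```python
# Illustrative sketch of an attention-only transformer (no MLP blocks),
# up to two layers, as discussed in the episode. Hyperparameters are arbitrary.
import torch
import torch.nn as nn

class AttnOnlyTransformer(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_heads=4, n_layers=2, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        # Each "block" is just multi-head self-attention plus a residual connection;
        # the usual MLP sub-layer is deliberately absent.
        self.attn_layers = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        self.unembed = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):  # tokens: (batch, seq)
        seq_len = tokens.shape[1]
        positions = torch.arange(seq_len, device=tokens.device)
        x = self.embed(tokens) + self.pos_embed(positions)
        # Causal mask so each position only attends to earlier positions.
        causal_mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=tokens.device),
            diagonal=1,
        )
        for attn in self.attn_layers:
            attn_out, _ = attn(x, x, x, attn_mask=causal_mask)
            x = x + attn_out  # residual stream: embeddings plus each layer's output
        return self.unembed(x)  # logits over the vocabulary

# Usage example with dummy tokens.
logits = AttnOnlyTransformer()(torch.randint(0, 1000, (1, 16)))
```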
