Neel Nanda on mechanistic interpretability, superposition and grokking

The Inside View

The Importance of Interpretability in AI Systems

This chapter highlights the significance of interpretability in AI systems and the challenges posed by black-box models. It explores the potential of auditing systems for deception and the need to distinguish genuinely aligned models from those that have merely learned to produce aligned-looking outputs.

