Neel Nanda on mechanistic interpretability, superposition and grokking

The Inside View

The Importance of Interpretability in AI Systems

This chapter highlights the significance of interpretability in AI systems and the challenges posed by black-box models. It explores the potential of auditing systems for deception and the need to distinguish genuinely aligned models from those that have merely learned to produce aligned-looking outputs.

