
Neel Nanda - Mechanistic Interpretability

Machine Learning Street Talk (MLST)

CHAPTER

In-Context Learning and Interpretability in AI

This chapter explores in-context learning in AI models, emphasizing how learning can occur at inference time without any weight updates, and the role of 'induction heads' in this process. It also discusses mechanistic interpretability, including techniques such as activation patching and other causal interventions used to understand and analyze model behavior.
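As a concrete illustration of activation patching, here is a minimal sketch using TransformerLens, the interpretability library Neel Nanda maintains. The prompts, the patch site (layer and token position), and the metric are illustrative assumptions rather than details from the episode: we cache activations from a "clean" run, patch one of them into a "corrupted" run, and check how much of the clean behavior is restored.

```python
# Minimal activation-patching sketch with TransformerLens.
# Prompts, patch site, and metric are illustrative assumptions.
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")

clean_prompt = "When John and Mary went to the store, John gave a drink to"
corrupt_prompt = "When John and Mary went to the store, Mary gave a drink to"
clean_tokens = model.to_tokens(clean_prompt)
corrupt_tokens = model.to_tokens(corrupt_prompt)
answer = model.to_single_token(" Mary")  # correct next token on the clean prompt

# Cache every intermediate activation from the clean run.
clean_logits, clean_cache = model.run_with_cache(clean_tokens)
corrupt_logits = model(corrupt_tokens)

# Position where the two prompts diverge (the swapped name).
pos = (clean_tokens != corrupt_tokens).nonzero()[0, 1].item()

layer = 6  # hypothetical patch site: residual stream entering layer 6
hook_name = utils.get_act_name("resid_pre", layer)

def patch_hook(resid, hook):
    # Overwrite one position of the corrupted run's residual stream
    # with its value from the clean run, leaving everything else intact.
    resid[:, pos, :] = clean_cache[hook.name][:, pos, :]
    return resid

patched_logits = model.run_with_hooks(
    corrupt_tokens, fwd_hooks=[(hook_name, patch_hook)]
)

# If this site carries the causally relevant information, the logit for
# the clean answer should move from its corrupted value back toward its
# clean value.
for name, logits in [
    ("clean", clean_logits),
    ("corrupt", corrupt_logits),
    ("patched", patched_logits),
]:
    print(name, logits[0, -1, answer].item())
```

Sweeping this patch over every (layer, position) pair and plotting the recovered logit is the usual way to localize which activations matter causally; the same hook machinery supports the other causal interventions mentioned above, such as ablations.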
