
Mapping the Mind of a Neural Net: Goodfire’s Eric Ho on the Future of Interpretability
Training Data
Intro
This chapter explores why understanding neural networks matters, through the work of a research company dedicated to AI interpretability. It covers the challenges and advances in deciphering AI models, including superposition, insights drawn from biology, and ambitious predictions for 2028.
Transcript