Mapping the Mind of a Neural Net: Goodfire’s Eric Ho on the Future of Interpretability

Training Data

Intro

This chapter explores why understanding neural networks matters, through the lens of a research company dedicated to AI interpretability. It covers the challenges of and recent advances in deciphering AI models, touching on superposition, insights drawn from biology, and ambitious predictions for 2028.

Transcript
