
Tom McGrath

Chief Scientist at Goodfire, with a background in AI safety research at DeepMind, focusing on mechanistic interpretability.

Top 3 podcasts with Tom McGrath

Ranked by the Snipd community
258 snips
May 29, 2025 • 1h 50min

Mechanistic Interpretability: Philosophy, Practice & Progress with Goodfire's Dan Balsam & Tom McGrath

In a thought-provoking discussion, Dan Balsam, CTO of Goodfire, and Tom McGrath, Chief Scientist, dive into the exciting world of mechanistic interpretability in AI. They analyze how understanding neural networks can spark breakthroughs in scientific discovery and creative domains. The pair tackle challenges in natural language processing and model debugging, drawing fascinating parallels with biology. Additionally, they underscore the importance of funding and innovative approaches in advancing AI explainability, paving the way for a more transparent future.
117 snips
Aug 17, 2024 • 1h 52min

Popular Mechanistic Interpretability: Goodfire Lights the Way to AI Safety

Dan Balsam, CTO of Goodfire with extensive startup engineering experience, and Tom McGrath, Chief Scientist and former AI safety researcher at DeepMind, dive into mechanistic interpretability. They explore the complexities of AI training, discussing advances like sparse autoencoders and the trade-off between model complexity and interpretability. The conversation also reveals how hierarchical structures in AI relate to human cognition, underscoring the need for collaboration in the evolving landscape of AI research and safety.
Oct 2, 2025 • 1h 2min

Inside the Black Box: The Urgency of AI Interpretability

Jack Lindsey, a researcher at Anthropic with a background in theoretical neuroscience, teams up with Tom McGrath, co-founder and Chief Scientist at Goodfire and a former member of DeepMind's interpretability team. They tackle the urgency of understanding modern AI models for safety and reliability, exploring technical challenges, real-world applications, and how larger models complicate analysis. Insights from neuroscience inform their work, making the case for interpretability as essential to trustworthy AI.
