
Lee Sharkey

Principal Investigator at Goodfire, a mechanistic interpretability startup. His research focuses on parameter decomposition methods for understanding and steering AI systems.

Top 3 podcasts with Lee Sharkey

Ranked by the Snipd community
39 snips
Aug 27, 2025 • 2h 2min

Untangling Neural Network Mechanisms: Goodfire's Lee Sharkey on Parameter-based Interpretability

Lee Sharkey, Principal Investigator at Goodfire, focuses on mechanistic interpretability in AI. He discusses parameter decomposition methods that enhance our understanding of neural networks, and explains the trade-offs between interpretability and reconstruction loss as well as the significance of his team's stochastic parameter decomposition. The conversation also touches on the difficulties of decomposing neural networks and their implications for unlearning in AI. His insights offer a fresh perspective on navigating the intricate mechanisms inside AI systems.
Jun 17, 2025 • 30min

“Mech interp is not pre-paradigmatic” by Lee Sharkey

In this discussion, Lee Sharkey, a specialist in mechanistic interpretability, challenges the notion that the field is pre-paradigmatic. He traces the evolution of mechanistic interpretability through distinct waves, addressing the crises within both the first and second waves. Sharkey emphasizes the role of paradigm shifts in scientific understanding and introduces the concept of parameter decomposition in neural networks. He argues for a potential third wave that could resolve ongoing challenges, inviting collaboration in this emerging field.
Jun 3, 2025 • 2h 16min

41 - Lee Sharkey on Attribution-based Parameter Decomposition

Lee Sharkey, an interpretability researcher at Goodfire and co-founder of Apollo Research, shares his insights into Attribution-based Parameter Decomposition (APD). He explains how APD can simplify neural networks while maintaining fidelity, discusses the trade-offs between model complexity and performance, and delves into hyperparameter selection. Sharkey also draws analogies between neural network components and car parts, highlighting the importance of understanding feature geometry. The conversation explores future applications of APD and its potential for optimizing neural network efficiency.
