40 - Jason Gross on Compact Proofs and Interpretability

AXRP - the AI X-risk Research Podcast

Understanding Mechanistic Interpretability with Crosscoders

This chapter explores mechanistic interpretability through the framework of crosscoders, which help extract meaningful features from complex models. It discusses the efficiency of compact proofs and feature interactions within neural networks, emphasizing the importance of model analysis for understanding errors and behaviors. The speakers draw parallels to concepts from physics, highlighting the trade-off between complexity and analytical clarity in deep learning research.
