40 - Jason Gross on Compact Proofs and Interpretability

AXRP - the AI X-risk Research Podcast

CHAPTER

Understanding Mechanistic Interpretability with Cross Coders

This chapter explores mechanistic interpretability through the framework of cross coders, which help extract meaningful features from complex models. It discusses the efficiency of compact proofs and feature interactions within neural networks, emphasizing the importance of model analysis for understanding errors and behaviors. The speakers draw parallels to concepts from physics, highlighting the trade-off between complexity and analytical clarity in deep learning research.

