Neel Nanda - Mechanistic Interpretability (Sparse Autoencoders)

Machine Learning Street Talk (MLST)

Exploring Interpretability and Applications of Sparse Autoencoders

This chapter explores interpretability challenges in machine learning models, focusing on sparse autoencoders. It emphasizes producing genuinely useful features rather than maximizing feature count, and highlights tooling for analyzing model behavior.
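
The summary doesn't spell out what a sparse autoencoder is mechanically, so here is a minimal sketch of the standard setup used in interpretability work: model activations are encoded into a wider feature space with a ReLU, and an L1 penalty pushes most features to zero so that each active feature is easier to interpret. The dimension names and the `l1_coeff` value are illustrative assumptions, not details taken from the episode.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder for decomposing model activations
    into (ideally) interpretable features."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        # d_hidden is typically much larger than d_model (overcomplete basis).
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        # ReLU keeps feature activations non-negative; with the L1
        # penalty below, most of them end up exactly zero.
        feats = torch.relu(self.encoder(x))
        x_hat = self.decoder(feats)
        return x_hat, feats

def sae_loss(x, x_hat, feats, l1_coeff: float = 1e-3):
    # Reconstruction error plus a sparsity penalty on feature
    # activations; l1_coeff trades reconstruction against sparsity.
    recon = ((x_hat - x) ** 2).mean()
    sparsity = feats.abs().mean()
    return recon + l1_coeff * sparsity
```

The sparsity penalty is what connects to the chapter's point about useful features over sheer quantity: a wider hidden layer alone just adds capacity, while the L1 term forces each input to be explained by a small number of active features.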

