Ep 14 - Interp, latent robustness, RLHF limitations w/ Stephen Casper (PhD AI researcher, MIT)

Artificial General Intelligence (AGI) Show with Soroush Pour

Interpretability in AI with Sparse Autoencoders

This chapter explores a research paper's use of sparse autoencoders to improve interpretability in AI models, applying feature examination and ablation studies to mitigate gender bias. It also discusses the importance of forming effective hypotheses to guide model edits, and critiques the methodologies of past interpretability research.
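
Below is a minimal sketch of the sparse-autoencoder workflow the chapter describes: train an overcomplete autoencoder on a model's hidden activations with an L1 sparsity penalty, then zero out ("ablate") individual learned features to test hypotheses about what they encode. All names, dimensions, and the ablation helper are illustrative assumptions, not code from the paper discussed.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder trained on a model's hidden activations.

    An L1 penalty on the latent code encourages each unit ("feature")
    to fire sparsely, which makes individual features easier to interpret.
    """
    def __init__(self, d_model: int, d_hidden: int, l1_coeff: float = 1e-3):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)
        self.l1_coeff = l1_coeff

    def forward(self, acts: torch.Tensor):
        features = torch.relu(self.encoder(acts))  # sparse feature activations
        recon = self.decoder(features)
        # Reconstruction loss plus L1 sparsity penalty on the features
        loss = ((recon - acts) ** 2).mean() \
            + self.l1_coeff * features.abs().sum(-1).mean()
        return recon, features, loss

def ablate_feature(sae: SparseAutoencoder, acts: torch.Tensor, idx: int) -> torch.Tensor:
    """Zero out one learned feature and decode, to test a hypothesis such as
    'feature idx encodes gender' by measuring the downstream behavior change."""
    features = torch.relu(sae.encoder(acts))
    features[..., idx] = 0.0
    return sae.decoder(features)

# Illustrative usage: `acts` would be hidden activations from one layer
# of the model under study, shape [batch, d_model].
# sae = SparseAutoencoder(d_model=768, d_hidden=4096)
# edited_acts = ablate_feature(sae, acts, idx=123)
```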
