
EP 327 Nate Soares on Why Superhuman AI Would Kill Us All

The Jim Rutt Show


Limits of interpretability: glimpses vs deep understanding

Jim and Nate discuss progress in model interpretability, Golden Gate activation vectors, and why current insights may still miss key organizing principles.
