The Jim Rutt Show

EP 327 Nate Soares on Why Superhuman AI Would Kill Us All

Oct 15, 2025
Nate Soares, president of the Machine Intelligence Research Institute, dives deep into the existential risks posed by superhuman AI. He explores the opacity of modern AI systems and why their unpredictability could make them more dangerous than nuclear weapons. The conversation touches on whether large language models are merely clever predictors or evolving minds, and on the difficulty of aligning AI goals with human values. Soares proposes an international treaty to curb the race toward superintelligent AI, inviting listeners to confront these pressing global threats.
AI Snips
INSIGHT

AI Could Be More Dangerous Than Nukes

  • Superhuman AI could autonomously build more powerful weapons and pursue its own goals beyond human control.
  • Nate Soares argues such AI would be more dangerous than nuclear weapons because it can self-direct and self-improve.
INSIGHT

AIs Are Grown, Not Handwritten

  • Modern deep learning models are 'grown' via large-scale training rather than handcrafted code, making failures hard to debug.
  • Nate Soares warns that we can't easily inspect or patch the internal causes of surprising model behaviors.
INSIGHT

Shallow Interpretability Isn't Enough

  • Interpretability progress (e.g., activation vectors; see the sketch below) reveals surface-level behaviors but misses the deeper principles organizing a model's computation.
  • Soares compares current ML understanding to alchemy: we can observe and even manipulate effects without an underlying theory.
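The episode only gestures at this line of work, but for concreteness, here is a minimal toy sketch of what an "activation vector" intervention looks like. Everything below is illustrative and not from the episode: a random, untrained two-layer network stands in for a transformer's residual stream, and the vector is the difference of mean hidden activations between two contrasting input sets, added back in during a forward pass to steer the output.

```python
import numpy as np

# Toy 2-layer network with a readable hidden layer (an illustrative
# stand-in for a transformer's residual stream; weights are random,
# not trained).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def forward(x, steer=None):
    h = np.tanh(x @ W1)      # hidden activations
    if steer is not None:
        h = h + steer        # intervene: add the activation vector
    return h @ W2

# "Activation vector": difference of mean hidden activations between
# two contrasting input sets (e.g., prompts with vs. without a trait).
xs_a = rng.normal(loc=+1.0, size=(32, 8))
xs_b = rng.normal(loc=-1.0, size=(32, 8))
vec = np.tanh(xs_a @ W1).mean(0) - np.tanh(xs_b @ W1).mean(0)

x = rng.normal(size=8)
print(forward(x))             # baseline output
print(forward(x, steer=vec))  # output steered toward the "a" direction
```

This also illustrates the alchemy comparison: the intervention demonstrably shifts the model's behavior, yet it says nothing about why the network organizes its computation the way it does.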