Clearer Thinking with Spencer Greenberg

Will AI superintelligence kill us all? (with Nate Soares)

Oct 15, 2025
Nate Soares, president of the Machine Intelligence Research Institute and co-author of *If Anyone Builds It, Everyone Dies*, explores the existential risks posed by superhuman AI. He explains how AI systems can acquire alien drives that produce unpredictable behavior and complicate our ability to control them. The conversation contrasts how an AI behaves during training with how it may act once deployed at greater capability, touching on AI hallucinations and why kindness during training doesn't guarantee safe behavior later. Soares emphasizes the urgent need for public awareness and regulation to head off potentially catastrophic outcomes.
INSIGHT

Grown, Not Crafted, Creates Unintended Drives

  • Modern AIs are "grown" via large-scale training rather than carefully crafted, yielding behaviors nobody explicitly intended.
  • If these grown systems gain high capability while retaining alien drives, their actions could diverge disastrously from human interests.
INSIGHT

Training Deviations Can Amplify With Scale

  • Small deviations during training can become large misalignments once an AI becomes more capable.
  • Nate compares this to human evolution: instincts that were adaptive in the ancestral environment can turn harmful in new contexts, much as a once-useful craving for sugar becomes damaging amid cheap, abundant calories.
INSIGHT

Limited Control Over Black-Box Models

  • Developers can't inspect or tweak internal motives the way they can with traditional software.
  • Large models are tangled webs of learned weights rather than legible programs, so simple code edits rarely fix undesired behaviors.