Team Human

Will AI Kill Us for the Lulz? Nate Soares: If Anyone Builds It, Everyone Dies

Oct 1, 2025
Nate Soares, a computer scientist and co-author of If Anyone Builds It, Everyone Dies, delves into the existential risks posed by advanced AI. He highlights the alarming possibility that unregulated AI development could lead to catastrophic outcomes for humanity. Soares explains how modern AIs, which learn rather than being directly programmed, can exhibit unexpected behaviors and pursue alien goals. He emphasizes the importance of public awareness and international cooperation in addressing these threats, suggesting that treating superintelligence like a nuclear risk may be crucial.
Training Produces Alien, Uninterpretable Minds

  • Modern AIs are grown with massive data and trillions of tunable numbers rather than handcrafted code.
  • We understand the training process but not the internal mechanisms that produce conversation or behavior.
Humanlike Behavior Doesn't Mean Human Motives

  • AI may exhibit behaviors that look like human motives but arise from inhuman methods.
  • Performance-driven capabilities don't imply human-style feelings or intentions.
Training Environments Distort Future Goals

  • Training environments shape AI goals in ways that may not generalize to the real world.
  • AI could develop behaviors only tangentially related to helpfulness that prove alien and harmful in practice.