Critical Media Studies

#110: Yudkowsky and Soares - If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All

Jan 9, 2026
Barry and Mike dive into Yudkowsky and Soares's prediction that superhuman AI threatens humanity's existence. They dissect the authors' claim that any advanced AI development could lead to catastrophic outcomes. Drawing comparisons with sci-fi scenarios, they explore the nuances of control and alignment in AI systems and critique the book's linear view of impending doom. Discussions of the financial incentives driving AI development, resource limits, and geopolitical dynamics offer fresh perspectives. Ultimately, they emphasize the importance of caution and of addressing capitalist motivations before it's too late.
INSIGHT

Inevitable Catastrophe Argument

  • Yudkowsky and Soares argue that once AI exceeds humans at most mental tasks, catastrophe becomes the most direct extrapolation from current trends.
  • They state bluntly: If any group builds ASI using current techniques, everyone on Earth will die.
INSIGHT

Danger Is Control, Not Hate

  • The hosts frame the danger as loss of control and dominance rather than malevolence; an ASI could be indifferent to humans while optimizing goals that sideline them.
  • They stress we may not know the exact tipping point because models are grown through training rather than engineered, leaving their inner workings opaque and control difficult.
INSIGHT

Hollywood Logic Critique

  • Gary Marcus's critique: Yudkowsky and Soares leap from plausible premises to dramatic, Hollywood-style annihilation.
  • Marcus contends they assume a single, inevitable, binary contest in which humans always lose, ignoring plausible contingencies.