Doom Debates

Robert Wright Interrogates the Eliezer Yudkowsky AI Doom Position

Oct 23, 2025
Liron Shapira, AI risk activist and host of Doom Debates, joins Robert Wright to examine Eliezer Yudkowsky's unsettling AI doom arguments. They discuss why AI misalignment is a critical concern, including the concept of 'intellidynamics', the study of goal-directed cognition. Liron warns of the 'first try' problem in developing superintelligent AI: alignment must succeed on the first attempt, because failure would mean an irreversible loss of control. They also explore the grassroots PauseAI movement, contrasting it with the lobbying power of tech companies.
ANECDOTE

Longtime Disciple Of Yudkowsky

  • Liron describes himself as a longtime disciple of Eliezer Yudkowsky who has read his work repeatedly.
  • He credits Yudkowsky with shaping his thinking on rationality and AI doom since his college years, starting in 2007.
INSIGHT

If Anyone Builds It, Everyone Dies

  • Eliezer's core claim: if anyone builds superintelligence, everyone dies.
  • Liron agrees, and frames the claim as conditional and urgent: it concerns future superintelligent systems, not present ones.
INSIGHT

Alignment Illusion From Benchmarks

  • Current models look aligned while they are weak because benchmarks reward the desired behavior.
  • As feedback loops optimize measurable proxies, systems can diverge from intended behavior once deployed out of distribution.