Robert Wright's Nonzero

Rationalism and AI Doomerism (Robert Wright & Liron Shapira)

Oct 16, 2025
Liron Shapira, host of the Doom Debates podcast and a prominent AI pause activist, delves into Eliezer Yudkowsky's alarming theories of AI risk. He describes his journey to becoming a Yudkowskian and discusses the claim that superintelligence may lead to human extinction. The conversation also covers the complexities of AI alignment, the potential for misalignment to emerge as AI systems grow more capable, and the societal challenges posed by rapid AI advancements. Liron highlights the widespread but largely unspoken anxiety about AI and critiques funding narratives in the doom discourse.
AI Snips
ANECDOTE

Longtime Disciple Of Yudkowsky

  • Liron Shapira describes himself as a devoted disciple of Eliezer Yudkowsky and has read his work repeatedly since college.
  • He runs Doom Debates to popularize Yudkowsky's ideas and admits to being a "stochastic parrot" for them.
INSIGHT

Book's Central 'If Anyone Builds It' Claim

  • The book's central claim: if anyone builds superintelligent AI using anything like current techniques, the result will be human extinction.
  • Liron endorses this strong conditional and frames the warning as preventative rather than prophetic doom.
INSIGHT

We Never Achieved True Alignment

  • Liron argues we've never actually achieved true alignment; current systems only appear cooperative because they are still too weak to do otherwise.
  • He emphasizes that alignment must hold from the start, rather than being something a system's behavior later 'stops diverging' from.