Doom Debates

Liron Debunks The Most Common “AI Won’t Kill Us” Arguments

Nov 5, 2025
Liron Shapira, an investor and entrepreneur with deep roots in the rationalist community, explains why he puts the probability of AI doom at roughly 50%. He walks through the major sources of AI risk, emphasizing rogue AI and the alignment problem, and rebuts common arguments against AI catastrophe, contending that today's models could scale into uncontrollable superintelligences. He also considers the political stakes of AI over the next decade, calling for international regulation as a safeguard against disaster.
AI Snips
INSIGHT

Doomerism As Bayesian Reasoning

  • Liron frames his doomerism as a long-standing, rational stance grounded in LessWrong and Bayesian reasoning.
  • He treats probabilities as meaningful priors for assessing existential AI risk.
INSIGHT

Rogue AI As The Central Threat

  • Liron's main concern is a rogue superintelligent AI that breaks free of human control.
  • He expects such a system could seize resources and permanently reshape the future if not contained.
INSIGHT

P-Doom: Quantifying Existential Risk

  • P-Doom is the estimated probability that AI permanently destroys humanity's future.
  • Liron places his P-Doom at roughly 50%, framing it as a serious, actionable prior.