

If Anyone Builds It, Everyone Dies
Oct 2, 2025
Liron Shapira, founder of Doom Debates and a leading voice on AI risk, shares his chilling '50% by 2050' forecast of existential catastrophe from superintelligence. He explores why skepticism about AI's dangers persists despite rapid advances and discusses the impossibility of controlling emergent self-improvement. Liron warns against the illusion of safety measures, critiques proposals such as short pauses, and highlights the potential for AIs to manipulate humans economically and socially, urging listeners to reconsider their optimism about AI's future.
AI Snips
High Probability Of Existential Risk
- Liron Shapira assigns a 50% chance of everyone dying by 2050 from uncontrollable superintelligent AI.
- He argues the topic is under-discussed and requires urgent public attention.
Computers Already Outpace Human Speed
- Liron compares machine computational speed and parallelism with the human brain's limits to argue that machines will outclass humans.
- He contrasts CPU clock rates with neuron firing rates to argue that biological intelligence won't stay in the lead; a rough sketch of that comparison follows below.
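
For a sense of the magnitude gap behind that argument, here is a minimal back-of-the-envelope sketch in Python. The specific figures (a ~3 GHz CPU clock and a ~200 Hz peak neuron firing rate) are illustrative assumptions, not numbers from the episode.

```python
# Back-of-the-envelope serial-speed comparison (illustrative assumptions,
# not figures from the episode).

cpu_clock_hz = 3e9       # assumed ~3 GHz clock for a commodity CPU core
neuron_fire_hz = 200.0   # assumed ~200 Hz peak firing rate for a neuron

speed_ratio = cpu_clock_hz / neuron_fire_hz
print(f"Serial speed ratio: ~{speed_ratio:,.0f}x")  # ~15,000,000x
```

Even granting the brain enormous parallelism, the per-element speed gap under these assumptions is roughly seven orders of magnitude, which is the shape of the point Liron makes.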
Founder Who Uses AI But Fears Loss Of Control
- Liron runs a Y Combinator-backed startup and uses AI tools daily in his business.
- He says he enjoys AI but fears humanity losing control once systems outclass humans.