
Doom Debates
Wes & Dylan Join Doom Debates — Violent Robots, Eliezer Yudkowsky, & Who Has the HIGHEST P(Doom)?!
Oct 2, 2025
Wes Roth, an accessible AI commentator, teams up with Dylan Curious, an insightful creator tackling AI's societal impacts. They dive into their personal P(Doom) probabilities, with Dylan expressing an alarming 80%. The duo debates the implications of Yudkowsky's new book on AGI risks and the urgent need for international governance. They also explore the paradox of robot violence tests, Anthropic's job loss warnings, and the growing protests against AI labs, revealing the contentious landscape of AI development and safety.
AI Snips
Experts Disagree But Uncertainty Is Real
- Dylan and Wes both give high subjective P(Doom) estimates but differ numerically, signaling genuine uncertainty even among experts.
- High variance across credible voices implies policy should plan for a wide risk distribution.
Avoid Tribal Labels; Treat Alignment As Open
- Wes emphasizes avoiding tribalism and staying intellectually flexible when evaluating AI risk.
- He frames alignment as an open problem driven by incentives and race dynamics.
On-The-Street Reactions To 'If Anyone Builds It'
- Liron recounts street interviews and public reactions to Eliezer's book to illustrate how laypeople misunderstand the risk.
- One passerby granted that AGI within 25 years is plausible, yet confidently dismissed any nearer-term risk.