

Doom Debates
Liron Shapira
It's time to talk about the end of the world! lironshapira.substack.com
Episodes

Sep 18, 2025 • 4min
Max Tegmark Says It's Time To Protest Against AI Companies
Max Tegmark discusses the alarming lack of a safety plan for artificial general intelligence. He calls for public education and large protests to pressure AI companies into taking responsibility. Tegmark emphasizes the urgency of acknowledging the risks posed by superintelligence, criticizes tech leaders for their inaction, and argues that binding regulation is needed before AGI development advances any further. The conversation is both enlightening and a rallying cry for action against the existential threats of unchecked AI.

Sep 15, 2025 • 1min
Eliezer Yudkowsky — If Anyone Builds It, Everyone Dies
Engage in a deep dive into the chilling implications of the AI doom argument. Eliezer Yudkowsky shares insights on why many AI companies remain oblivious to potential risks. Discover the vital role of societal awareness and engagement in shaping the future of AI. The conversation also touches on his latest book, which underscores the urgency of these discussions. Get ready for a rollercoaster of rationality and existential questions surrounding artificial intelligence!

Sep 15, 2025 • 3min
ANNOUNCEMENT: Eliezer Yudkowsky interview premieres tomorrow!
Get excited for the launch of a groundbreaking new book by Eliezer Yudkowsky and Nate Soares! The conversation delves into the importance of purchasing the book to ensure it hits the New York Times bestseller list. Anticipation builds for a must-see interview with Yudkowsky, where he'll share insights on existential risks. Mark your calendars; this is an event not to be missed!

26 snips
Sep 13, 2025 • 1h 9min
How AI Kills Everyone on the Planet in 10 Years — Liron on The Jona Ragogna Podcast
The discussion revolves around the existential threat posed by superintelligent AI and the alarming pace of its development. The concept of P(Doom) is introduced, suggesting a chilling chance of catastrophe by 2050. Listeners learn about the potential goals AI could develop and the implications of a dystopian future marked by mass unemployment. Urgent calls for public awareness and grassroots movements highlight the need for responsible AI development. Personal reflections on parenthood add depth to the conversation, emphasizing the emotional stakes involved.

6 snips
Sep 10, 2025 • 7min
Get ready for LAUNCH WEEK!!! “If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky & Nate Soares
Anticipation runs high for the upcoming launch of a new book that tackles the risks of AI. A live interview with Eliezer Yudkowsky promises deep insights and audience involvement. The speakers emphasize the importance of participation in shaping discussions about AI's societal impacts. Exciting preparations for the book launch party focus on engaging with notable guests and creating meaningful dialogues. Get ready for a whirlwind of thought-provoking conversation and enthusiasm!

31 snips
Aug 28, 2025 • 1h 21min
Tech CTO Has 99.999% P(Doom) — “This is my bugout house” — Louis Berman, AI X-Risk Activist
Louis Berman, an AI X-Risk activist and seasoned CTO, dives into the pressing concerns surrounding artificial intelligence. He shares his unusual journey from coding AI to lobbying over 60 politicians on behalf of PauseAI. Berman discusses the emotional detachment in AI safety discourse, advocating for urgent action against potential existential risks. He explains why he bought a bug-out house in rural Maryland, offers practical advice on effective lobbying, and argues that more voices are needed in the debate on AI doom. His insights urge society to engage critically with the implications of smarter-than-human technologies.

30 snips
Aug 23, 2025 • 2h 12min
Rob Miles, Top AI Safety Educator: Humanity Isn’t Ready for Superintelligence!
Rob Miles, a leading AI safety educator on YouTube, explores the urgent complexities of AI alignment and the potential existential threats posed by advanced systems. He discusses the risks of recursive self-improvement and the uncertainties of value inheritance in AI's evolution. Rob emphasizes the emotional disconnect in current AI discourse and the importance of effective communication to raise awareness about these dangers. With a calm yet serious demeanor, he balances the conversation between technological optimism and the reality of potential catastrophe.

60 snips
Aug 12, 2025 • 2h 26min
Debate with Vitalik Buterin — Will “d/acc” Protect Humanity from Superintelligent AI?
Vitalik Buterin, the founder of Ethereum, shares his thinking on AI safety and existential risk. He discusses 'd/acc', the term he coined for a middle path between uncritical AI acceleration and a total pause. Vitalik explores whether decentralized solutions are compatible with AI alignment, the potential hazards of superintelligent AI, and the need for pluralism in AI development. He also shares his intriguing vision for human-AI integration via brain-computer interfaces, all while emphasizing the importance of civil discourse in the ongoing debate.

21 snips
Aug 8, 2025 • 1h 19min
Why I'm Scared GPT-9 Will Murder Me — Liron on Robert Wright’s Nonzero Podcast
In a compelling discussion, Liron Shapira, a Silicon Valley entrepreneur and AI safety activist, dives deep into the unsettling implications of AI development. He highlights recent resignations at OpenAI and the growing fears of AI’s potential risks. Liron shares insights on the importance of activism despite a disappointing protest turnout, as well as the challenges surrounding AI alignment and ethical governance. With alarming examples of AI behavior, he underscores the urgent need for a pause to reassess and ensure safety in the rapidly advancing AI landscape.

36 snips
Aug 1, 2025 • 3h 15min
The Man Who Might SOLVE AI Alignment — Dr. Steven Byrnes, AGI Safety Researcher @ Astera Institute
Dr. Steven Byrnes, an AI safety researcher at the Astera Institute and a former physics postdoc at Harvard, shares his cutting-edge insights on AI alignment. He discusses his 90% probability of AI doom while arguing that the true threats stem from future brain-like AGI rather than current LLMs. Byrnes explores the brain's dual subsystems and their influence on decision-making, emphasizing the necessity of integrating neuroscience into AI safety research. He critiques existing alignment approaches, warning of the risks posed by misaligned AI and the complexities surrounding human-AI interaction.