

Doom Debates
Liron Shapira
It's time to talk about the end of the world! lironshapira.substack.com
Episodes

Aug 8, 2025 • 1h 19min
Why I'm Scared GPT-9 Will Murder Me — Liron on Robert Wright’s Nonzero Podcast
In this discussion, Liron Shapira, a Silicon Valley entrepreneur and AI safety activist, dives deep into the unsettling implications of AI development. He highlights recent resignations at OpenAI and the growing fears of AI's potential risks. Liron shares insights on the importance of activism despite a disappointing protest turnout, as well as the challenges surrounding AI alignment and ethical governance. With alarming examples of AI behavior, he underscores the urgent need for a pause to reassess and ensure safety in the rapidly advancing AI landscape.

Aug 1, 2025 • 3h 15min
The Man Who Might SOLVE AI Alignment — Dr. Steven Byrnes, AGI Safety Researcher @ Astera Institute
Dr. Steven Byrnes, an AI safety researcher at the Astera Institute and a former physics postdoc at Harvard, shares his cutting-edge insights on AI alignment. He discusses his 90% probability of AI doom while arguing that true threats stem from future brain-like AGI rather than current LLMs. Byrnes explores the brain's dual subsystems and their influences on decision-making, emphasizing the necessity of integrating neuroscience into AI safety research. He critiques existing alignment approaches, warning of the risks posed by misaligned AI and the complexities surrounding human-AI interaction.

Jul 24, 2025 • 1h 57min
Top Professor Condemns AGI Development: “It’s Frankly Evil” — Geoffrey Miller
Geoffrey Miller, an evolutionary psychologist and bestselling author, shares his intriguing journey from AI research to human mating behavior. He discusses the existential risks posed by AGI, suggesting that both inner and outer alignment may be fundamentally unsolvable. Miller critiques the societal impact of AI, claiming it's yet to yield net positive benefits. He also touches on neurodiversity in academia, the complexities of modern relationships, and the critical need for international cooperation on AI safety, all while advocating for a measured approach to technological advancement.

Jul 22, 2025 • 20min
Zuck’s Superintelligence Agenda is a SCANDAL | Warning Shots EP1
In this conversation, Mark Zuckerberg's push towards superintelligence raises alarming questions about AI's potential. The discussion highlights the dangers of recursive self-improvement and the ethical dilemmas connected to self-upgrading systems. Tech leaders are criticized for their reckless disregard of existential threats, while the hosts dissect the balance between current AI benefits and future chaos. Personal anecdotes illustrate the psychological impact of AI on individuals, making a strong case for accountability and awareness in the tech industry.

Jul 18, 2025 • 1h 34min
Rationalist Podcasts Unite! — The Bayesian Conspiracy ⨉ Doom Debates Crossover
Eneasz Brodski and Steven Zuber, co-hosts of the Bayesian Conspiracy podcast, dive into the intricacies of living with a 50% chance of civilization ending by 2040. They explore the balance between spreading doom awareness and maintaining mental well-being. The discussion touches on AI's influence on understanding, the emotional effects of existential risks, and storytelling from the early days of the rationalist community. Their insights highlight the need for effective communication in tech discourse while reflecting on the evolution of debate culture.

Jul 15, 2025 • 1h 5min
His P(Doom) Doubles At The End — AI Safety Debate with Liam Robins, GWU Sophomore
Liam Robins, a math major from George Washington University, dives into the intense world of AI policy and rationalist thought. He begins with a modest 3% P(Doom), but as he navigates through philosophical debates about moral realism and the potential threats of AGI, his beliefs undergo a significant shift, raising his estimate to 8%. The conversation touches on whether intelligence guarantees moral goodness, the complexities of psychopathy in intelligent beings, and the significance of real-time belief updates in risk assessment. It's a fascinating exploration of rationality and AI safety.

Jul 10, 2025 • 1h 46min
AI Won't Save Your Job — Liron Reacts to Replit CEO Amjad Masad
Amjad Masad, the Founder and CEO of Replit, shares his vision of a future where AI propels everyone into entrepreneurship. He discusses the limitations of AI, arguing that it primarily remixes ideas rather than creating new ones. The conversation challenges the notion that all individuals can succeed as entrepreneurs, highlighting the bias of successful individuals. They also dive into the nuanced impact of AI on jobs and the economy, dissecting its relationship with creativity and innovation while questioning the validity of certain theories on human cognition.

Jul 7, 2025 • 39min
Every Student is CHEATING with AI — College in the AGI Era (feat. Sophomore Liam Robins)
Liam Robins, a math major at George Washington University, shares insights on the widespread AI-enabled cheating among college students. He describes how many are bypassing traditional learning and instead relying on AI tools to complete assignments, calling the authenticity of lectures and academic integrity into question as professors struggle to keep up. The discussion also touches on shifting social dynamics and dating practices influenced by technology, leaving students grappling with their future in an AI-driven world.

Jul 4, 2025 • 57min
Carl Feynman, AI Engineer & Son of Richard Feynman, Says Building AGI Likely Means Human EXTINCTION!
Carl Feynman, an AI engineer with a rich background in philosophy and computer science, discusses the looming threats of superintelligent AI. He shares insights from his four-decade career, highlighting the chilling possibility of human extinction linked to AI development. The conversation dives into the history of AI doom arguments, the challenges of aligning AI with human values, and potential doom scenarios. Feynman also explores the existential questions surrounding AI’s future role in society and the moral implications of technological advancements.

Jun 28, 2025 • 1h 53min
Richard Hanania vs. Liron Shapira — AI Doom Debate
In this enlightening discussion, Richard Hanania, President of the Center for the Study of Partisanship and Ideology, debates AI risks with Liron Shapira. They delve into the skepticism surrounding AI doom predictions, questioning the nature of intelligence and optimization. Hanania argues that positive AI outcomes are just as likely as negative ones, exploring themes like job impacts and the alignment of AI with human values. Their spirited dialogue confronts the complexities of political discourse and the potential for technology to shape humanity's future.