Doom Debates

Liron Shapira
27 snips
Jan 13, 2026 • 31min

Liron Enters Bannon's War Room to Explain Why AI Could End Humanity

Liron Shapira, host of Doom Debates, discusses the existential risks posed by artificial intelligence. He highlights a shocking ~50% chance of human extinction within 10-20 years due to AGI. Liron outlines potential takeover scenarios, emphasizing nanotechnology and rapid advancements in AI capabilities. Despite identifying as a techno-optimist, he calls for urgent global treaties and grassroots mobilization to address this looming threat. As a father, he reflects on the implications of raising children in a high-risk world and advocates for building a reliable emergency stop for AI systems.
54 snips
Jan 5, 2026 • 1h 55min

Noah Smith vs. Liron Shapira — Will AI spare our lives AND our jobs?

Noah Smith, an economist and popular Substack writer, shares his optimistic vision of AI's future. He believes that, rather than causing extinction, AI will generate plentiful high-paying jobs. The discussion explores the probabilities of AI-induced catastrophe versus transformation. Noah argues AI could enable existing threats rather than create new ones. They also debate the potential for harmful AI persuasion, the risk of a dominant AI, and how to preserve human livelihoods amid evolving technology. Optimism shines through as he advocates for resource policies that benefit humanity.
14 snips
Dec 30, 2025 • 3h 53min

I Debated Beff Jezos and His "e/acc" Army

This discussion features Bayeslord, an advocate for Effective Accelerationism, and Beff Jezos, an engaging debater challenging doomer perspectives. They delve into intricate topics like the transition from AI tools to autonomous agents and the implications of chaotic unpredictability in AI development. Bayeslord emphasizes practical constraints on AI's speed and capabilities while Beff argues against the likelihood of imminent doom. Their lively debate explores the balance between technological advancement and potential risks, making for a thought-provoking exchange.
12 snips
Dec 24, 2025 • 2h 55min

Doom Debates LIVE Call-In Show! Listener Q&A about AGI, evolution vs. engineering, shoggoths & more

In this lively discussion, live caller Jay dives into the intricate differences between automated intelligence and AGI. His Mr. Meeseeks analogy highlights the challenges of always-on agent scenarios and token/context limitations. The conversation ventures into AGI timelines, the balance between AI offense and defense, and superintelligence skepticism. Jay questions whether engineered AI could truly surpass biological evolution, while Liron shares insights on AI risks and alignment failures, painting a vivid picture of our potential future.
28 snips
Dec 17, 2025 • 1h 17min

DOOMER vs. BUILDER — AI Doom Debate with Devin Elliot, Software Engineer & Retired Pro Snowboarder

Join Devin Elliot, a self-taught software engineer and former pro snowboarder, as he discusses the future of AI with a refreshing dose of optimism. He believes fears of an AI takeover are as absurd as a car sprouting wings. Devin argues against centralization in favor of decentralized governance, likening AI risks to nuclear policy debates. He critiques current LLM capabilities, asserting that they rely on external tools rather than demonstrating innate intelligence. The duo also dives into their vastly different timelines for superintelligence, pitting years against millennia.
62 snips
Dec 11, 2025 • 1h 52min

PhD AI Researcher Says P(Doom) is TINY — Debate with Michael Timothy Bennett

Michael Timothy Bennett, a pioneering AI researcher and PhD candidate, presents a framework suggesting that superintelligence has a minimal probability of doom due to resource constraints and a tendency towards cooperation. The debate covers his thesis on intelligence as efficient adaptation, challenging the idea of simple comparisons like Einstein versus a rock. They explore concepts like embodiment and W-maxing, discussing whether AI will align with human goals or pose existential risks, all while engaging in lively arguments about AGI timelines and the nature of intelligence.
42 snips
Dec 5, 2025 • 1h 12min

Nobel Prizewinner SWAYED by My AI Doom Argument — Prof. Michael Levitt, Stanford University

In this engaging discussion, Michael Levitt, a Nobel Prize-winning computational biologist from Stanford, openly revises his thoughts on AI doom arguments. He explores the evolution of AI and its unpredictable timelines influenced by advances in computing. Levitt debates the potential existential risks of powerful AI, comparing them to nuclear threats and pandemics. He also emphasizes the need for effective regulation and outreach to mitigate these risks. Ultimately, he acknowledges the importance of dialogues like this in shaping future safety measures.
16 snips
Nov 29, 2025 • 2h 16min

Facing AI Doom, Lessons from Daniel Ellsberg (Pentagon Papers) — Michael Ellsberg

Michael Ellsberg, author and commentator, is the son of Pentagon Papers leaker Daniel Ellsberg. He draws alarming parallels between the Vietnam War and today's AI arms race. Michael posits a staggering 99% probability of doom and shares his personal experience of being replaced by AI. He emphasizes the moral duty of tech insiders to disclose risks and critiques the economic implications of AI on jobs. The discussion covers everything from nuclear near-misses to the psychological toll of existential risks, wrapping up with a call for responsible action against AI threats.
27 snips
Nov 21, 2025 • 1h 51min

Max Tegmark vs. Dean Ball: Should We BAN Superintelligence?

Join MIT professor Max Tegmark and former White House adviser Dean Ball as they dive into the fiery debate over banning superintelligence. Tegmark argues for a precautionary approach, pushing for strict safety standards and public oversight, while Ball counters that a ban would stifle beneficial innovation and is hard to define. They explore regulatory designs, national security concerns, and the risks of recursive self-improvement. With high stakes at play, this debate challenges listeners to consider the future of AI and its implications.
27 snips
Nov 14, 2025 • 2h 18min

The AI Corrigibility Debate: MIRI Researchers Max Harms vs. Jeremy Gillen

Max Harms, an AI alignment researcher and author of the novel Red Heart, debates with former MIRI research fellow Jeremy Gillen on AI corrigibility. Max argues that aiming for obedient, corrigible AI is essential to prevent existential risks, drawing parallels to human assistant dynamics. Jeremy is skeptical about the feasibility of this approach as a short-term solution. The discussion explores the intricacies of maintaining control over superintelligent AI and whether efforts toward corrigibility might be a hopeful strategy or an over-optimistic dream.
