Doom Debates

Liron Shapira
36 snips
Dec 5, 2025 • 1h 12min

Nobel Prizewinner SWAYED by My AI Doom Argument — Prof. Michael Levitt, Stanford University

In this engaging discussion, Michael Levitt, a Nobel Prize-winning computational biologist from Stanford, openly revises his views on AI doom arguments. He traces the evolution of AI and how advances in computing have made its timelines hard to predict. Levitt weighs the potential existential risks of powerful AI, comparing them to nuclear threats and pandemics, and emphasizes the need for effective regulation and outreach to mitigate those risks. Ultimately, he acknowledges that dialogues like this one help shape future safety measures.
16 snips
Nov 29, 2025 • 2h 16min

Facing AI Doom, Lessons from Daniel Ellsberg (Pentagon Papers) — Michael Ellsberg

Michael Ellsberg, author and commentator, is the son of Pentagon Papers leaker Daniel Ellsberg. He draws alarming parallels between the Vietnam War and today's AI arms race. Michael posits a staggering 99% probability of doom and shares his personal experience of being replaced by AI. He argues that tech insiders have a moral duty to disclose risks and critiques the economic impact of AI on jobs. The discussion ranges from nuclear near-misses to the psychological toll of existential risk, wrapping up with a call for responsible action against AI threats.
27 snips
Nov 21, 2025 • 1h 51min

Max Tegmark vs. Dean Ball: Should We BAN Superintelligence?

Join MIT professor Max Tegmark and former White House adviser Dean Ball as they dive into the fiery debate over banning superintelligence. Tegmark argues for a precautionary approach, pushing for strict safety standards and public oversight, while Ball counters that a ban would stifle beneficial innovation and is hard to define. They explore regulatory designs, national security concerns, and the risks of recursive self-improvement. With high stakes at play, this debate challenges listeners to consider the future of AI and its implications.
27 snips
Nov 14, 2025 • 2h 18min

The AI Corrigibility Debate: MIRI Researchers Max Harms vs. Jeremy Gillen

Max Harms, an AI alignment researcher and author of the novel Red Heart, debates former MIRI research fellow Jeremy Gillen on AI corrigibility. Max argues that aiming for obedient, corrigible AI is essential to prevent existential risks, drawing parallels to human assistant dynamics. Jeremy is skeptical that this approach is feasible even as a short-term solution. The discussion explores the intricacies of maintaining control over superintelligent AI and whether the push for corrigibility is a hopeful strategy or an over-optimistic dream.
9 snips
Nov 11, 2025 • 16min

These Effective Altruists Betrayed Me — Holly Elmore, PauseAI US Executive Director

Holly Elmore, Executive Director of PauseAI US and a passionate activist, discusses the tensions within the AI safety community and her decision to lead protests against frontier AI labs. She shares her experiences of feeling betrayed by former allies and highlights the insular nature of effective altruism, where reputation often takes precedence over genuine safety concerns. Holly emphasizes the importance of public advocacy, explaining how shifting focus can bridge gaps between communities and reduce harmful tribalism in AI discourse.
26 snips
Nov 7, 2025 • 53min

DEBATE: Is AGI Really Decades Away? | Ex-MIRI Researcher Tsvi Benson-Tilsen vs. Liron Shapira

In a thought-provoking debate, Tsvi Benson-Tilsen, an ex-MIRI researcher and co-founder of the Berkeley Genomics Project, argues that AGI is much further away than commonly believed. He emphasizes the limitations of current AI, pointing to tasks it still struggles with, like generating novel scientific ideas. The conversation also covers the need for clear benchmarks when predicting AI progress, and whether a stall in advances could trigger another AI winter. Tsvi proposes germline engineering as a way to enhance human intelligence for the challenges ahead.
77 snips
Nov 5, 2025 • 1h 7min

Liron Debunks The Most Common “AI Won't Kill Us” Arguments

Liron Shapira, an investor and entrepreneur with deep roots in rationalism, discusses his alarming 50% probability of AI doom. He tackles major sources of AI risk, emphasizing rogue AI and alignment problems. Liron expertly debunks common counterarguments against AI catastrophe, asserting that current models could escalate into uncontrollable superintelligences. He highlights the political implications of AI in the next decade, calling for international regulations as a safeguard against potential disaster.
13 snips
Oct 31, 2025 • 41min

Why AI Alignment Is 0% Solved — Ex-MIRI Researcher Tsvi Benson-Tilsen

Tsvi Benson-Tilsen, a former MIRI researcher, spent seven years grappling with AI alignment challenges. He reveals a stark truth: humanity has made virtually no progress on this complex issue. Tsvi delves into critical concepts like reflective decision theory and corrigibility, illuminating why controlling superintelligence is so daunting. He discusses the implications of self-modifying AIs and the risks of ontological crises, prompting important debates about the limitations of current AI models and the urgent need for effective alignment strategies.
18 snips
Oct 29, 2025 • 47min

Eben Pagan (aka David DeAngelo) Interviews Liron — Why 50% Chance AI Kills Everyone by 2050

In this engaging discussion, Eben Pagan, an influential entrepreneur and business trainer known as David DeAngelo, dives into the chilling topic of AI risk. Liron presents a compelling case for a staggering 50% chance of existential doom by 2050. They explore why AI doesn't need to harbor malice to pose a threat, and why a superintelligence might lack an 'off switch.' Stressing the urgency of international coordination, the conversation leaves listeners weighing how much the future of humanity hinges on our relationship with AI.
25 snips
Oct 25, 2025 • 49min

Former MIRI Researcher Solving AI Alignment by Engineering Smarter Human Babies

Tsvi Benson-Tilsen, a former MIRI researcher and co-founder of the Berkeley Genomics Project, advocates engineering smarter humans as a solution to AI alignment challenges. He discusses alarming P(doom) estimates and the urgent need to slow AGI development. Delving into human germline engineering, Tsvi shares insights on chromosome selection and its potential to significantly enhance intelligence. He also addresses the societal stigma around this research and outlines an ambitious timeline for creating genetically enhanced humans to tackle impending AI risks.
