
Doom Debates
It's time to talk about the end of the world! lironshapira.substack.com
Latest episodes

Jun 24, 2025 • 1h 53min
Emmett Shear (OpenAI Ex-Interim-CEO)'s New “Softmax” AI Alignment Plan — Is It Legit?
Emmett Shear, cofounder and former CEO of Twitch, dives deep into his new AI alignment venture, Softmax. He introduces the concept of 'organic alignment,' comparing the growth of AI to that of biological organisms nurtured within communities. The conversation explores the evolution of morality and how lessons from kin selection might shape an AI's ethical development. Emmett critiques traditional alignment methods, advocates for cooperation in multi-agent reinforcement learning, emphasizes storytelling's influence on AI behavior, and urges a cautious approach to superintelligence.

Jun 18, 2025 • 1h 16min
Will AI Have a Moral Compass? — Debate with Scott Sumner, Author of The Money Illusion
Scott Sumner, a leading macroeconomist and author of The Money Illusion, shares his insights on the moral implications of AI. He examines whether AI can develop empathy and ethical understanding, challenging prevailing narratives about its potential threats. The conversation covers historical atrocities committed by highly educated societies, such as Nazi Germany, and the orthogonality thesis, which holds that intelligence and morality can vary independently. Sumner closes with a cautiously optimistic view of AI's future, weighing its risks against its opportunities.

Jun 14, 2025 • 5min
Searle's Chinese Room is DUMB — It's Just Slow-Motion Intelligence
Explore the implications of John Searle's Chinese Room argument. The discussion critiques its validity, arguing that Searle mistakes genuine understanding for mere mimicry: a system running the same computation, only more slowly, is still intelligent. The argument is dissected for its misleading framing, shedding light on how AI cognition compares with human cognition. Is slow-motion intelligence really intelligence? The episode challenges you to rethink what it means to comprehend.

Jun 9, 2025 • 44min
Doom Debates Live @ Manifest 2025 — Liron vs. Everyone
Dive into a lively discussion at Manifest 2025, where Liron takes on all comers over AI doom. Explore whether AGI is just around the corner, whether it will surpass human intelligence, and whether higher intelligence implies moral goodness. Unpack the concept of 'optimization power' and the nuances of AI safety across a range of perspectives. The conversation challenges listeners to confront their beliefs about AI's future and the reality of existential risk.

Jun 7, 2025 • 28min
Poking holes in the AI doom argument — 83 stops where you could get off the “Doom Train”
The conversation rides the so-called "Doom Train": the chain of arguments leading to the conclusion that artificial superintelligence threatens humanity, and the stops where skeptics get off. It surveys objections such as the claims that AGI isn't imminent and that AI lacks emotions, consciousness, and genuine creativity. Listeners hear the strongest arguments that AI isn't as advanced as feared, including its frequent errors and inability to reason like humans, along with the suggestion that doomerism itself hinders constructive dialogue about AI development.

May 29, 2025 • 1h 36min
Q&A: Ilya's AGI Doomsday Bunker, Veo 3 is Westworld, Eliezer Yudkowsky, and much more!
This Q&A episode dives deep into the doomsday argument and the question of humanity's future alongside superintelligent beings. It tackles the ethical dilemmas of AI consciousness and the potential for manipulation, underscoring the need for responsible AI practices. Discussion of Ilya's doomsday bunker and predictions for AGI sparks ideas about safety and regulation, and the episode humorously contrasts childhood tech dreams with today's realities while emphasizing the importance of representation and community in navigating the AI landscape.

May 22, 2025 • 1h 22min
This $85M-Backed Founder Claims Open Source AGI is Safe — Debate with Himanshu Tyagi
Himanshu Tyagi, a professor at the Indian Institute of Science and co-founder of Sentient, makes the case for open-source AGI, emphasizing collaboration and competition in tech innovation. He pitches Sentient's vision while debating whether open-sourcing AGI is safe in light of potential existential risks. The conversation covers the challenges of monetizing open-source AI, AI's influence on social movements, and the ethics of its military applications, closing with Tyagi's take on humanity's relationship with advanced AI.

May 21, 2025 • 17min
Emergency Episode: John Sherman FIRED from Center for AI Safety
Reflecting on the shocking firing of John Sherman from the Center for AI Safety, this episode weighs the implications for the entire AI risk community. It voices frustration over weak messaging about the existential threats posed by AI, emphasizes the need for clear communication, and urges listeners to articulate their concerns confidently, opening a broader conversation about how the community should adapt to address these urgent risks.

May 15, 2025 • 2h 4min
Gary Marcus vs. Liron Shapira — AI Doom Debate
Gary Marcus, a leading AI scientist and author, discusses the existential risks of artificial intelligence. He and Liron debate the probability of catastrophic outcomes: is the threat level near 50% or below 1%? The conversation covers misconceptions about generative AI, the timeline for achieving AGI, and the challenges of aligning AI with human values. Marcus also weighs humanity's resilience against potential superintelligent dangers while highlighting the urgent need for regulatory frameworks to keep the technology safe.

May 8, 2025 • 2h 15min
Mike Israetel vs. Liron Shapira — AI Doom Debate
Mike Israetel, an exercise scientist and AI futurist, joins Liron Shapira for a spirited debate on the future of artificial intelligence. They delve into timelines for AGI, the promise and peril of superintelligent AI, and whether it will cooperate with humanity. The discussion contrasts optimistic and pessimistic viewpoints, weighs risks against rewards, and explores the moral implications of AI's potential. They also ponder humanity's role in a world increasingly shaped by AI and the urgent need for global cooperation on AI governance.