
Doom Debates
It's time to talk about the end of the world! lironshapira.substack.com
Latest episodes

11 snips
May 29, 2025 • 1h 36min
Q&A: Ilya's AGI Doomsday Bunker, Veo 3 is Westworld, Eliezer Yudkowsky, and much more!
The hosts dive deep into the doomsday argument and the complexities of AI, questioning the future of humanity alongside superintelligent beings. They tackle the ethical dilemmas of AI consciousness and the potential for manipulation, shedding light on the need for responsible AI practices. Discussions of Ilya's bunker and predictions for AGI spark intriguing ideas about safety and regulation. The episode humorously contrasts childhood tech dreams with today’s realities, while emphasizing the importance of representation and community in navigating the AI landscape.

24 snips
May 22, 2025 • 1h 22min
This $85M-Backed Founder Claims Open Source AGI is Safe — Debate with Himanshu Tyagi
Himanshu Tyagi, a professor at the Indian Institute of Science and co-founder of Sentient, discusses the promise of open-source AGI, emphasizing collaboration and competition in tech innovation. He pitches Sentient’s vision while debating the safety of open-sourcing AGI against potential existential risks. The conversation dives into the challenges of monetizing open-source AI, the influence of AI on social movements, and the ethical considerations of AI's military applications. Tyagi offers thought-provoking perspectives on humanity's relationship with advanced AI.

9 snips
May 21, 2025 • 17min
Emergency Episode: John Sherman FIRED from Center for AI Safety
Reflecting on the shocking firing of John Sherman from the Center for AI Safety, the hosts debate the implications for the entire AI risk community. They voice frustration over the weak messaging surrounding existential threats posed by AI. Emphasizing the need for clear communication, they urge listeners to articulate their concerns confidently. This discussion sparks a broader conversation about how the community should adapt to address these urgent risks effectively.

52 snips
May 15, 2025 • 2h 4min
Gary Marcus vs. Liron Shapira — AI Doom Debate
Gary Marcus, a leading scientist and author in AI, discusses the existential risks of artificial intelligence. He debates the probability of catastrophic outcomes, weighing whether the threat level is closer to 50% or under 1%. The conversation dives into misconceptions about generative AI, the timeline for achieving AGI, and the challenges of aligning AI with human values. Marcus also explores humanity's resilience against potential 'superintelligent' dangers while highlighting the urgent need for regulatory frameworks to keep technological advances safe.

14 snips
May 8, 2025 • 2h 15min
Mike Israetel vs. Liron Shapira — AI Doom Debate
Mike Israetel, an exercise scientist and AI futurist, joins Liron Shapira for a spirited debate on the future of artificial intelligence. They delve into the timelines for AGI, the dual nature of superintelligent AI, and whether it will cooperate with humanity. The discussion contrasts optimistic and pessimistic viewpoints, addressing risks and rewards, and explores the moral implications of AI's potential. They also ponder humanity's role in a world increasingly shaped by AI and the urgent need for global cooperation in AI governance.

29 snips
May 5, 2025 • 1h 24min
Doom Scenario: Human-Level AI Can't Control Smarter AI
The podcast dives into the complex landscape of AI risks, exploring the delicate balance between innovation and control. It discusses the concept of superintelligence and the critical thresholds that could lead to catastrophic outcomes. Key insights include the importance of aligning AI values with human welfare and the potential perils of autonomous goal optimization. Listeners are prompted to consider the implications of advanced AI making decisions independently of human input, and the need for ongoing vigilance as the technology evolves.

24 snips
Apr 30, 2025 • 1h 53min
The Most Likely AI Doom Scenario — with Jim Babcock, LessWrong Team
In a riveting discussion, Jim Babcock, a key member of the LessWrong engineering team, shares insights from nearly 20 years of contemplating AI doom scenarios. The conversation explores the evolution of AI threats, the significance of moral alignment, and the surprising implications of large language models. Jim and the host dissect the complexities of programming choices and highlight the importance of ethical AI development. They emphasize the risks of both gradual disempowerment and rapid advances, both of which demand urgent attention to ensure AI stays aligned with human values.

30 snips
Apr 24, 2025 • 1h 59min
AI Could Give Humans MORE Control — Ozzie Gooen
Ozzie Gooen, founder of the Quantified Uncertainty Research Institute, delves into the fascinating world of AI safety and forecasting tools. He discusses the importance of high-quality discourse in tackling AI risks and the role of Bayesian modeling in decision-making. Ozzie shares insights on innovative software like Guesstimate and Metaforecast, enhancing prediction accuracy. The conversation touches on effective altruism, the ethical responsibilities within the community, and the philosophical implications of population ethics as AI takes on greater societal roles.

15 snips
Apr 18, 2025 • 2h 8min
Top AI Professor Has 85% P(Doom) — David Duvenaud, Fmr. Anthropic Safety Team Lead
David Duvenaud, a Computer Science professor at the University of Toronto and former AI safety lead at Anthropic, shares gripping insights into AI's existential threats. He explains his 85% probability of doom and the case for unified governance to mitigate the risk. The conversation delves into his experiences with AI alignment, the complexities of productivity in academia, and the pressing need for brave voices in the AI safety community. Duvenaud also reflects on the ethical dilemmas tech leaders face in balancing innovation and responsibility.

Apr 15, 2025 • 58min
“AI 2027” — Top Superforecaster's Imminent Doom Scenario
The discussion delves into the chilling predictions of AI evolution by 2027, featuring autonomous AI agents that could lead to societal upheaval. A whistleblower exposes alarming misalignments, forcing lawmakers to a moral crossroads. The podcast critiques the development of AI models aimed at aligning with human values amid rising geopolitical tensions, particularly between the U.S. and China. There's a focus on engagement within the AI community, highlighting the importance of rational dialogue and upcoming events for those passionate about AI safety.