
Doom Debates

Latest episodes

May 5, 2025 • 1h 24min

Doom Scenario: Human-Level AI Can't Control Smarter AI

The podcast dives into the complex landscape of AI risks, exploring the delicate balance between innovation and control. It discusses the concept of superintelligence and the critical thresholds that could lead to catastrophic outcomes. Key insights include the importance of aligning AI values with human welfare and the potential perils of autonomous goal optimization. Listeners are prompted to consider the implications of advanced AI making decisions independent of human input, highlighting the need for ongoing vigilance as technology evolves.
Apr 30, 2025 • 1h 53min

The Most Likely AI Doom Scenario — with Jim Babcock, LessWrong Team

In a riveting discussion, Jim Babcock, a key member of the LessWrong engineering team, shares insights from nearly 20 years of contemplating AI doom scenarios. The conversation explores the evolution of AI threats, the significance of moral alignment, and the surprising implications of large language models. Jim and the host dissect the complexities of programming choices and highlight the importance of ethical AI development. They emphasize the risks of both gradual disempowerment and rapid advancement, risks that demand urgent attention to ensure AI aligns with human values.
Apr 24, 2025 • 1h 59min

AI Could Give Humans MORE Control — Ozzie Gooen

Ozzie Gooen, founder of the Quantified Uncertainty Research Institute, delves into the fascinating world of AI safety and forecasting tools. He discusses the importance of high-quality discourse in tackling AI risks and the role of Bayesian modeling in decision-making. Ozzie shares insights on innovative software like Guesstimate and Metaforecast, tools that enhance prediction accuracy. The conversation touches on effective altruism, the ethical responsibilities within the community, and the philosophical implications of population ethics as AI takes on greater societal roles.
Apr 18, 2025 • 2h 8min

Top AI Professor Has 85% P(Doom) — David Duvenaud, Fmr. Anthropic Safety Team Lead

David Duvenaud, a Computer Science professor at the University of Toronto and former AI safety team lead at Anthropic, shares gripping insights into AI's existential threats. He explains why he puts his probability of doom at 85% and argues that unified governance is necessary to mitigate the risks. The conversation delves into his experiences with AI alignment, the complexities of productivity in academia, and the pressing need for brave voices in the AI safety community. Duvenaud also reflects on the ethical dilemmas tech leaders face in balancing innovation and responsibility.
Apr 15, 2025 • 58min

“AI 2027” — Top Superforecaster's Imminent Doom Scenario

The discussion delves into chilling predictions of AI evolution by 2027, featuring autonomous AI agents that could lead to societal upheaval. A whistleblower exposes alarming misalignment, forcing lawmakers to a moral crossroads. The podcast critiques the development of AI models aimed at aligning with human values amid rising geopolitical tensions, particularly between the U.S. and China. There's also a focus on engagement within the AI community, highlighting the importance of rational dialogue and upcoming events for those passionate about AI safety.
Apr 9, 2025 • 2h 15min

Top Economist Sees AI Doom Coming — Dr. Peter Berezin, BCA Research

Dr. Peter Berezin, Chief Global Strategist at BCA Research, is one of the few macroeconomists forecasting AI doom, having published a staggering report suggesting a high likelihood of AI ending humanity. He discusses the tension between AI's promises and perils, emphasizing the urgent need for regulation. He and the host delve into how AI could disrupt job markets and wealth distribution, while also exploring economic strategies amid existential threats. Berezin even dives into existential questions around multiverse theory, challenging listeners to rethink safety and reality.
Apr 3, 2025 • 2h 20min

AI News: GPT-4o Images, AI Unemployment, Emmett Shear's New Safety Org — with Nathan Labenz

In this engaging discussion, Nathan Labenz, host of The Cognitive Revolution and founder of Waymark, unpacks AI's far-reaching implications. They dive into the exciting advancements of GPT-4o's image generation and its impact on marketing for small businesses. Labenz candidly addresses AI's role in shifting freelance dynamics on platforms like Fiverr. The conversation also navigates existential risks of AI, discussing Emmett Shear's new safety organization, Softmax, and the importance of international cooperation for AI regulation. A blend of humor and serious reflection makes for a thought-provoking listen.
Mar 28, 2025 • 46min

How an AI Doomer Sees The World — Liron on The Human Podcast

In this special cross-posted episode of Doom Debates, originally posted on The Human Podcast, we cover a wide range of topics including the definition of “doom”, P(Doom), various existential risks like pandemics and nuclear threats, and the comparison of rogue AI risks versus AI misuse risks.

00:00 Introduction
01:47 Defining Doom and AI Risks
05:53 P(Doom)
10:04 Doom Debates’ Mission
16:17 Personal Reflections and Life Choices
24:57 The Importance of Debate
27:07 Personal Reflections on AI Doom
30:46 Comparing AI Doom to Other Existential Risks
33:42 Strategies to Mitigate AI Risks
39:31 The Global AI Race and Game Theory
43:06 Philosophical Reflections on a Good Life
45:21 Final Thoughts

Show Notes

The Human Podcast with Joe Murray: https://www.youtube.com/@thehumanpodcastofficial

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Don’t miss the other great AI doom show, For Humanity: https://youtube.com/@ForHumanityAIRisk

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Mar 21, 2025 • 50min

Gödel's Theorem Says Intelligence ≠ Power? AI Doom Debate with Alexander Campbell

Alexander Campbell, founder of Rose AI and a finance expert, engages in a lively debate about AI and power dynamics. He argues that superhuman intelligence doesn't guarantee vast power, leveraging Gödel's Incompleteness Theorem as a core concept. The conversation delves into the complexities of AI's goal-to-action mapping and discusses the moral dilemmas posed by advanced technologies. Campbell raises concerns about dependency and autonomy in AI, advocating for responsible development amid global tensions. A thought-provoking exploration of intelligence versus power ensues!
Mar 17, 2025 • 1h 59min

Alignment is EASY and Roko's Basilisk is GOOD?!

Roko Mijic, an AI safety researcher and creator of the infamous thought experiment Roko's Basilisk, shares his insights on the alignment of artificial intelligence. He argues that while alignment is easy, the chaos from developing superintelligence poses significant risks. The conversation covers topics like societal decline, AI's dual role as a potential savior or destroyer, and the philosophical implications of honesty in AI systems. Roko also reflects on the historical precedents of AI and warfare, offering a unique perspective on our technological future.
