
Doom Debates

Latest episodes

Jul 15, 2025 • 1h 5min

His P(Doom) Doubles At The End — AI Safety Debate with Liam Robins, GWU Sophomore

Liam Robins, a math major from George Washington University, dives into the intense world of AI policy and rationalist thought. He begins with a modest 3% P(Doom), but as he navigates through philosophical debates about moral realism and the potential threats of AGI, his beliefs undergo a significant shift, raising his estimate to 8%. The conversation touches on whether intelligence guarantees moral goodness, the complexities of psychopathy in intelligent beings, and the significance of real-time belief updates in risk assessment. It's a fascinating exploration of rationality and AI safety.
Jul 10, 2025 • 1h 46min

AI Won't Save Your Job — Liron Reacts to Replit CEO Amjad Masad

Amjad Masad, the Founder and CEO of Replit, shares his vision of a future where AI propels everyone into entrepreneurship. He discusses the limitations of AI, arguing that it primarily remixes ideas rather than creating new ones. The conversation challenges the notion that all individuals can succeed as entrepreneurs, highlighting the bias of successful individuals. They also dive into the nuanced impact of AI on jobs and the economy, dissecting its relationship with creativity and innovation while questioning the validity of certain theories on human cognition.
Jul 7, 2025 • 39min

Every Student is CHEATING with AI — College in the AGI Era (feat. Sophomore Liam Robins)

Liam Robins, a math major at George Washington University, shares insights on the widespread AI-enabled cheating epidemic among college students. He highlights how many are bypassing traditional learning and instead relying on technology to complete assignments. The authenticity of lectures and academic integrity are in question, with professors struggling to keep up. The discussion also touches on shifting social dynamics and dating practices influenced by technology, leaving students grappling with their future in an AI-driven world.
Jul 4, 2025 • 57min

Carl Feynman, AI Engineer & Son of Richard Feynman, Says Building AGI Likely Means Human EXTINCTION!

Carl Feynman, an AI engineer with a rich background in philosophy and computer science, discusses the looming threats of superintelligent AI. He shares insights from his four-decade career, highlighting the chilling possibility of human extinction linked to AI development. The conversation dives into the history of AI doom arguments, the challenges of aligning AI with human values, and potential doom scenarios. Feynman also explores the existential questions surrounding AI’s future role in society and the moral implications of technological advancements.
Jun 28, 2025 • 1h 53min

Richard Hanania vs. Liron Shapira — AI Doom Debate

In this enlightening discussion, Richard Hanania, President of the Center for the Study of Partisanship and Ideology, debates AI risks with Liron Shapira. They delve into the skepticism surrounding AI doom predictions, questioning the nature of intelligence and optimization. Hanania argues that positive AI outcomes are just as likely as negative ones, exploring themes like job impacts and the alignment of AI with human values. Their spirited dialogue confronts the complexities of political discourse and the potential for technology to shape humanity's future.
Jun 24, 2025 • 1h 53min

Emmett Shear (OpenAI Ex-Interim-CEO)'s New “Softmax” AI Alignment Plan — Is It Legit?

Emmett Shear, cofounder and former CEO of Twitch, dives deep into his new AI alignment venture, Softmax. He introduces the concept of 'organic alignment,' comparing AI growth to biological systems nurtured within communities. The dialogue explores the evolution of morality, discussing how kin selection influences AI's ethical development. Emmett critiques traditional methods while advocating for cooperation in multi-agent reinforcement learning, emphasizing storytelling's impact on AI behavior and urging a cautious approach towards superintelligence.
Jun 18, 2025 • 1h 16min

Will AI Have a Moral Compass? — Debate with Scott Sumner, Author of The Money Illusion

Scott Sumner, a leading macroeconomist and author of The Money Illusion, shares his insights on the moral implications of AI. He examines whether AI can develop empathy and ethical understanding, challenging the prevailing narratives about its potential threats. The conversation delves into the historical context of atrocities committed by educated societies, like the Nazis, and underscores the orthogonality thesis, suggesting that intelligence can exist separately from morality. Sumner presents a cautiously optimistic view on AI's future, emphasizing risks and opportunities.
Jun 14, 2025 • 5min

Searle's Chinese Room is DUMB — It's Just Slow-Motion Intelligence

A critique of John Searle's Chinese Room argument, charging that it mistakes speed for substance: a system that understands slowly still understands. The discussion dissects why the thought experiment misleads, contrasting AI with human cognition, and asks whether slow-motion intelligence is intelligence at all. Engaging insights challenge you to rethink the perception of AI and what it means to comprehend.
Jun 9, 2025 • 44min

Doom Debates Live @ Manifest 2025 — Liron vs. Everyone

Dive into a lively discussion at Manifest 2025, where diverse opinions on AI doom take center stage. Explore whether AGI is just around the corner and if it might surpass human intelligence. Debate the moral implications of AI and whether higher intelligence leads to moral goodness. Unpack the concept of 'optimization power' and the nuances of AI safety amidst various perspectives. The conversation challenges listeners to confront their beliefs about AI's future and the reality of existential risks.
Jun 7, 2025 • 28min

Poking holes in the AI doom argument — 83 stops where you could get off the “Doom Train”

The conversation dives into the so-called "Doom Train" of arguments about the threats of artificial superintelligence. It challenges the idea that AGI is imminent and highlights AI's limitations, such as lacking emotions, consciousness, and genuine creativity. Listeners hear compelling arguments for why AI isn't as advanced as feared, including its frequent errors and inability to reason like humans. The discussion also suggests that doomerism may hinder constructive dialogue about AI development.
