Doom Debates

Liron Shapira

14 snips
Aug 28, 2025 • 1h 21min

Tech CTO Has 99.999% P(Doom) — “This is my bugout house” — Louis Berman, AI X-Risk Activist

Louis Berman, an AI X-Risk activist and seasoned CTO, dives into the pressing concerns surrounding artificial intelligence. He shares his unique journey from coding AI to lobbying over 60 politicians for PauseAI. Berman discusses the emotional detachment in AI safety discourse and advocates urgent action against potential existential risks. He explains why he bought a bug-out house in rural Maryland, offers practical advice on effective lobbying, and calls for more voices in the debate on AI doom. His insights urge society to engage critically with the implications of smarter-than-human technologies.

30 snips
Aug 23, 2025 • 2h 12min

Rob Miles, Top AI Safety Educator: Humanity Isn’t Ready for Superintelligence!

Rob Miles, a leading AI safety educator on YouTube, explores the urgent complexities of AI alignment and the potential existential threats posed by advanced systems. He discusses the risks of recursive self-improvement and the uncertainties of value inheritance in AI's evolution. Rob emphasizes the emotional disconnect in current AI discourse and the importance of effective communication to raise awareness about these dangers. With a calm yet serious demeanor, he balances the conversation between technological optimism and the reality of potential catastrophe.

60 snips
Aug 12, 2025 • 2h 26min

Debate with Vitalik Buterin — Will “d/acc” Protect Humanity from Superintelligent AI?

Vitalik Buterin, the founder of Ethereum, contributes his thoughts on AI safety and existential risk. He discusses 'd/acc', the term he coined for a balanced approach between uncritical AI acceleration and a total pause. Vitalik explores the compatibility of decentralized solutions with AI alignment, the potential hazards of superintelligent AI, and the necessity of pluralism in AI development. He also shares his intriguing vision for human-AI integration via brain-computer interfaces, all while emphasizing the importance of civil discourse in the ongoing debate.

21 snips
Aug 8, 2025 • 1h 19min

Why I'm Scared GPT-9 Will Murder Me — Liron on Robert Wright’s Nonzero Podcast

In a compelling discussion, Liron Shapira, a Silicon Valley entrepreneur and AI safety activist, dives deep into the unsettling implications of AI development. He highlights recent resignations at OpenAI and growing fears about the risks AI poses. Liron shares insights on the importance of activism despite a disappointing protest turnout, as well as the challenges surrounding AI alignment and ethical governance. With alarming examples of AI behavior, he underscores the urgent need for a pause to reassess and ensure safety in the rapidly advancing AI landscape.

36 snips
Aug 1, 2025 • 3h 15min

The Man Who Might SOLVE AI Alignment — Dr. Steven Byrnes, AGI Safety Researcher @ Astera Institute

Dr. Steven Byrnes, an AI safety researcher at the Astera Institute and a former physics postdoc at Harvard, shares his cutting-edge insights on AI alignment. He discusses his 90% probability of AI doom while arguing that true threats stem from future brain-like AGI rather than current LLMs. Byrnes explores the brain's dual subsystems and their influences on decision-making, emphasizing the necessity of integrating neuroscience into AI safety research. He critiques existing alignment approaches, warning of the risks posed by misaligned AI and the complexities surrounding human-AI interaction.

20 snips
Jul 24, 2025 • 1h 57min

Top Professor Condemns AGI Development: “It’s Frankly Evil” — Geoffrey Miller

Geoffrey Miller, an evolutionary psychologist and bestselling author, shares his intriguing journey from AI research to the study of human mating behavior. He discusses the existential risks posed by AGI, suggesting that both inner and outer alignment may be fundamentally unsolvable. Miller critiques the societal impact of AI, arguing that it has yet to yield net benefits. He also touches on neurodiversity in academia, the complexities of modern relationships, and the critical need for international cooperation on AI safety, all while advocating a measured approach to technological advancement.

16 snips
Jul 22, 2025 • 20min

Zuck’s Superintelligence Agenda is a SCANDAL | Warning Shots EP1

In this conversation, the hosts examine Mark Zuckerberg's push toward superintelligence and the alarming questions it raises about AI's trajectory. The discussion highlights the dangers of recursive self-improvement and the ethical dilemmas of self-upgrading systems. Tech leaders are criticized for their reckless disregard of existential threats, while the hosts weigh current AI benefits against future chaos. Personal anecdotes illustrate the psychological impact of AI on individuals, making a strong case for accountability and awareness in the tech industry.

16 snips
Jul 18, 2025 • 1h 34min

Rationalist Podcasts Unite! — The Bayesian Conspiracy ⨉ Doom Debates Crossover

Eneasz Brodski and Steven Zuber, co-hosts of the Bayesian Conspiracy podcast, dive into the intricacies of living with a 50% chance of civilization ending by 2040. They explore the balance between spreading doom awareness and maintaining mental well-being. The discussion touches on AI's influence on understanding, the emotional effects of existential risks, and storytelling from the early days of the rationalist community. Their insights highlight the need for effective communication in tech discourse while reflecting on the evolution of debate culture.

26 snips
Jul 15, 2025 • 1h 5min

His P(Doom) Doubles At The End — AI Safety Debate with Liam Robins, GWU Sophomore

Liam Robins, a math major at George Washington University, dives into the intense world of AI policy and rationalist thought. He begins with a modest 3% P(Doom), but as he works through philosophical debates about moral realism and the potential threats of AGI, he revises his estimate upward to 8%. The conversation touches on whether intelligence guarantees moral goodness, the complexities of psychopathy in intelligent beings, and the significance of real-time belief updates in risk assessment. It's a fascinating exploration of rationality and AI safety.

Jul 10, 2025 • 1h 46min

AI Won't Save Your Job — Liron Reacts to Replit CEO Amjad Masad

Amjad Masad, the founder and CEO of Replit, shares his vision of a future where AI propels everyone into entrepreneurship. He discusses the limitations of AI, arguing that it primarily remixes existing ideas rather than creating new ones. The conversation challenges the notion that everyone can succeed as an entrepreneur, highlighting the survivorship bias of those who have. They also dive into the nuanced impact of AI on jobs and the economy, dissecting its relationship with creativity and innovation while questioning the validity of certain theories of human cognition.
