
Doom Debates

Latest episodes

Dec 18, 2024 • 1h 45min

Roon vs. Liron: AI Doom Debate

Roon, a member of the technical staff at OpenAI and a prominent voice on tech Twitter, dives into existential risks associated with AI. He discusses the terms he coined, 'shape rotators' and 'wordcels,' while exploring the nuances of AI creativity versus human originality. The conversation navigates the concept of P(Doom) and the importance of effective AI alignment to avert global threats. Roon also weighs in on the ethics of goal-oriented AI and engages in a lighthearted talk about Dogecoin, all while emphasizing the need for thoughtful debate on these critical issues.
Dec 11, 2024 • 1h 53min

Scott Aaronson Makes Me Think OpenAI's “Safety” Is Fake, Clueless, Reckless and Insane

Today I’m reacting to the recent Scott Aaronson interview on the Win-Win podcast with Liv Boeree and Igor Kurganov.

Prof. Aaronson is the Director of the Quantum Information Center at the University of Texas at Austin. He’s best known for his research advancing the frontier of complexity theory, especially quantum complexity theory, and making complex insights from his field accessible to a wider readership via his blog.

Scott is one of my biggest intellectual influences. His famous “Who Can Name the Bigger Number?” essay and his long-running blog are among my best memories of coming across high-quality intellectual content online as a teen. His posts and lectures taught me much of what I know about complexity theory.

Scott recently completed a two-year stint at OpenAI focusing on the theoretical foundations of AI safety, so I was interested to hear his insider account.

Unfortunately, what I heard in the interview confirms my worst fears about the meaning of “safety” at today’s AI companies: they’re laughably clueless about how to achieve any measure of safety, and instead of doing the adult thing and slowing down their capabilities work, they’re pushing forward recklessly.

00:00 Introducing Scott Aaronson
02:17 Scott's Recruitment by OpenAI
04:18 Scott's Work on AI Safety at OpenAI
08:10 Challenges in AI Alignment
12:05 Watermarking AI Outputs
15:23 The State of AI Safety Research
22:13 The Intractability of AI Alignment
34:20 Policy Implications and the Call to Pause AI
38:18 Out-of-Distribution Generalization
45:30 Moral Worth Criterion for Humans
51:49 Quantum Mechanics and Human Uniqueness
01:00:31 Quantum No-Cloning Theorem
01:12:40 Scott Is Almost An Accelerationist?
01:18:04 Geoffrey Hinton's Proposal for Analog AI
01:36:13 The AI Arms Race and the Need for Regulation
01:39:41 Scott Aaronson's Thoughts on Sam Altman
01:42:58 Scott Rejects the Orthogonality Thesis
01:46:35 Final Thoughts
01:48:48 Lethal Intelligence Clip
01:51:42 Outro

Show Notes

Scott’s interview on Win-Win with Liv Boeree and Igor Kurganov: https://www.youtube.com/watch?v=ANFnUHcYza0
Scott’s blog: https://scottaaronson.blog
PauseAI website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Nov 28, 2024 • 3h 0min

Liron Reacts to Subbarao Kambhampati on Machine Learning Street Talk

Today I’m reacting to a July 2024 interview that Prof. Subbarao Kambhampati did on Machine Learning Street Talk.

Rao is a Professor of Computer Science at Arizona State University, and one of the foremost voices making the claim that while LLMs can generate creative ideas, they can’t truly reason.

The episode covers a range of topics including planning, creativity, the limits of LLMs, and why Rao thinks LLMs are essentially advanced N-gram models.

00:00 Introduction
02:54 Essentially N-Gram Models?
10:31 The Manhole Cover Question
20:54 Reasoning vs. Approximate Retrieval
47:03 Explaining Jokes
53:21 Caesar Cipher Performance
01:10:44 Creativity vs. Reasoning
01:33:37 Reasoning By Analogy
01:48:49 Synthetic Data
01:53:54 The ARC Challenge
02:11:47 Correctness vs. Style
02:17:55 AIs Becoming More Robust
02:20:11 Block Stacking Problems
02:48:12 PlanBench and Future Predictions
02:58:59 Final Thoughts

Show Notes

Rao’s interview on Machine Learning Street Talk: https://www.youtube.com/watch?v=y1WnHpedi2A
Rao’s Twitter: https://x.com/rao2z
PauseAI website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Nov 27, 2024 • 1h 4min

This Yudkowskian Has A 99.999% P(Doom)

In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys.

Nethys shares his journey into AI risk awareness, influenced heavily by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately resulting in his 99.999% P(Doom).

00:00 Nethys Introduction
04:47 The Vulnerable World Hypothesis
10:01 What’s Your P(Doom)™
14:04 Nethys’s Banger YouTube Comment
26:53 Living with High P(Doom)
31:06 Losing Access to Distant Stars
36:51 Defining AGI
39:09 The Convergence of AI Models
47:32 The Role of “Unlicensed” Thinkers
52:07 The PauseAI Movement
58:20 Lethal Intelligence Video Clip

Show Notes

Eliezer Yudkowsky’s post on “Death with Dignity”: https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
PauseAI website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Nov 21, 2024 • 1h 58min

Cosmology, AI Doom, and the Future of Humanity with Fraser Cain

Fraser Cain is the publisher of Universe Today, co-host of Astronomy Cast, a popular YouTuber about all things space, and guess what… he has a high P(doom)! That’s why he’s joining me on Doom Debates for a very special AI + space crossover episode.

00:00 Fraser Cain’s Background and Interests
05:03 What’s Your P(Doom)™
07:05 Our Vulnerable World
15:11 Don’t Look Up
22:18 Cosmology and the Search for Alien Life
31:33 Stars = Terrorists
39:03 The Great Filter and the Fermi Paradox
55:12 Grabby Aliens Hypothesis
01:19:40 Life Around Red Dwarf Stars?
01:22:23 Epistemology of Grabby Aliens
01:29:04 Multiverses
01:33:51 Quantum Many Worlds vs. Copenhagen Interpretation
01:47:25 Simulation Hypothesis
01:51:25 Final Thoughts

SHOW NOTES

Fraser’s YouTube channel: https://www.youtube.com/@frasercain
Universe Today (space and astronomy news): https://www.universetoday.com/
Max Tegmark’s book that explains 4 levels of multiverses: https://www.amazon.com/Our-Mathematical-Universe-Ultimate-Reality/dp/0307744256

Robin Hanson’s ideas:
* Grabby Aliens: https://grabbyaliens.com
* The Great Filter: https://en.wikipedia.org/wiki/Great_Filter
* Life in a high-dimensional space: https://www.overcomingbias.com/p/life-in-1kdhtml

---

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Nov 19, 2024 • 2h 21min

AI Doom Debate: Vaden Masrani & Ben Chugg vs. Liron Shapira

Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are back for Part II! This time we’re going straight to debating my favorite topic, AI doom.

00:00 Introduction
02:23 High-Level AI Doom Argument
17:06 How Powerful Could Intelligence Be?
22:34 “Knowledge Creation”
48:33 “Creativity”
54:57 Stand-Up Comedy as a Test for AI
01:12:53 Vaden & Ben’s Goalposts
01:15:00 How to Change Liron’s Mind
01:20:02 LLMs are Stochastic Parrots?
01:34:06 Tools vs. Agents
01:39:51 Instrumental Convergence and AI Goals
01:45:51 Intelligence vs. Morality
01:53:57 Mainline Futures
02:16:50 Lethal Intelligence Video

Show Notes

Vaden & Ben’s podcast: https://www.youtube.com/@incrementspod

Recommended playlists from their podcast:
* The Bayesian vs Popperian Epistemology Series
* The Conjectures and Refutations Series

Vaden’s Twitter: https://x.com/vadenmasrani
Ben’s Twitter: https://x.com/BennyChugg

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Nov 16, 2024 • 2h 37min

Andrew Critch vs. Liron Shapira: Will AI Extinction Be Fast Or Slow?

Dr. Andrew Critch is the co-founder of the Center for Applied Rationality, a former Research Fellow at the Machine Intelligence Research Institute (MIRI), a Research Scientist at the UC Berkeley Center for Human-Compatible AI, and the co-founder of a new startup called Healthcare Agents.

Dr. Critch’s P(Doom) is a whopping 85%! But his most likely doom scenario isn’t what you might expect. He thinks humanity will successfully avoid a self-improving superintelligent doom scenario, only to still go extinct via the slower process of “industrial dehumanization”.

00:00 Introduction
01:43 Dr. Critch’s Perspective on LessWrong Sequences
06:45 Bayesian Epistemology
15:34 Dr. Critch's Time at MIRI
18:33 What’s Your P(Doom)™
26:35 Doom Scenarios
40:38 AI Timelines
43:09 Defining “AGI”
48:27 Superintelligence
53:04 The Speed Limit of Intelligence
01:12:03 The Obedience Problem in AI
01:21:22 Artificial Superintelligence and Human Extinction
01:24:36 Global AI Race and Geopolitics
01:34:28 Future Scenarios and Human Relevance
01:48:13 Extinction by Industrial Dehumanization
01:58:50 Automated Factories and Human Control
02:02:35 Global Coordination Challenges
02:27:00 Healthcare Agents
02:35:30 Final Thoughts

---

Show Notes

Dr. Critch’s LessWrong post explaining his P(Doom) and most likely doom scenarios: https://www.lesswrong.com/posts/Kobbt3nQgv3yn29pr/my-motivation-and-theory-of-change-for-working-in-ai
Dr. Critch’s website: https://acritch.com/
Dr. Critch’s Twitter: https://twitter.com/AndrewCritchPhD

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Nov 13, 2024 • 1h 7min

AI Twitter Beefs #2: Yann LeCun, David Deutsch, Tyler Cowen, Jack Clark, Beff Jezos, Samuel Hammond vs. Eliezer Yudkowsky, Geoffrey Hinton, Carl Feynman

It’s time for AI Twitter Beefs #2:

00:42 Jack Clark (Anthropic) vs. Holly Elmore (PauseAI US)
11:02 Beff Jezos vs. Eliezer Yudkowsky, Carl Feynman
18:10 Geoffrey Hinton vs. OpenAI & Meta
25:14 Samuel Hammond vs. Liron
30:26 Yann LeCun vs. Eliezer Yudkowsky
37:13 Roon vs. Eliezer Yudkowsky
41:37 Tyler Cowen vs. AI Doomers
52:54 David Deutsch vs. Liron

Twitter people referenced:
* Jack Clark: https://x.com/jackclarkSF
* Holly Elmore: https://x.com/ilex_ulmus
* PauseAI US: https://x.com/PauseAIUS
* Geoffrey Hinton: https://x.com/GeoffreyHinton
* Samuel Hammond: https://x.com/hamandcheese
* Yann LeCun: https://x.com/ylecun
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Roon: https://x.com/tszzl
* Beff Jezos: https://x.com/basedbeffjezos
* Carl Feynman: https://x.com/carl_feynman
* Tyler Cowen: https://x.com/tylercowen
* David Deutsch: https://x.com/DavidDeutschOxf

Show Notes

Holly Elmore’s EA forum post about scouts vs. soldiers
Manifund info & donation page for PauseAI US: https://manifund.org/projects/pauseai-us-2025-through-q2
PauseAI.info - join the Discord and find me in the #doom-debates channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Nov 8, 2024 • 2h 51min

Is P(Doom) Meaningful? Epistemology Debate with Vaden Masrani and Ben Chugg

Vaden Masrani and Ben Chugg, co-hosts of the Increments Podcast, hold PhDs in statistics and machine learning. They explore the fiery debate between Bayesian and Popperian epistemology and discuss the challenges of quantifying catastrophic risks like nuclear war and asteroid impacts. The conversation dives into the implications of probability in decision-making and its role in reasoning about existential threats. They also critique traditional methods, emphasizing the significance of epistemology in shaping our understanding of knowledge, prediction markets, and artificial intelligence.
Nov 4, 2024 • 16min

15-Minute Intro to AI Doom

Our top researchers and industry leaders have been warning us that superintelligent AI may cause human extinction in the next decade.

If you haven't been following all the urgent warnings, I'm here to bring you up to speed.

* Human-level AI is coming soon
* It’s an existential threat to humanity
* The situation calls for urgent action

Listen to this 15-minute intro to get the lay of the land.

Then follow these links to learn more and see how you can help:

* The Compendium
A longer written introduction to AI doom by Connor Leahy et al
* AGI Ruin — A List of Lethalities
A comprehensive list by Eliezer Yudkowsky of reasons why developing superintelligent AI is unlikely to go well for humanity
* AISafety.info
A catalogue of AI doom arguments and responses to objections
* PauseAI.info
The largest volunteer org focused on lobbying world governments to pause development of superintelligent AI
* PauseAI Discord
Chat with PauseAI members, see a list of projects and get involved

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
