
Doom Debates

Latest episodes

Nov 4, 2024 • 16min

15-Minute Intro to AI Doom

Our top researchers and industry leaders have been warning us that superintelligent AI may cause human extinction in the next decade. If you haven't been following all the urgent warnings, I'm here to bring you up to speed.

* Human-level AI is coming soon
* It’s an existential threat to humanity
* The situation calls for urgent action

Listen to this 15-minute intro to get the lay of the land. Then follow these links to learn more and see how you can help:

* The Compendium: a longer written introduction to AI doom by Connor Leahy et al
* AGI Ruin — A List of Lethalities: a comprehensive list by Eliezer Yudkowsky of reasons why developing superintelligent AI is unlikely to go well for humanity
* AISafety.info: a catalogue of AI doom arguments and responses to objections
* PauseAI.info: the largest volunteer org focused on lobbying world governments to pause development of superintelligent AI
* PauseAI Discord: chat with PauseAI members, see a list of projects and get involved

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Oct 30, 2024 • 1h 32min

Lee Cronin vs. Liron Shapira: AI Doom Debate

Prof. Lee Cronin is the Regius Chair of Chemistry at the University of Glasgow. His research aims to understand how life might arise from non-living matter. In 2017, he invented “Assembly Theory” as a way to measure the complexity of molecules and gain insight into the earliest evolution of life.

Today we’re debating Lee's claims about the limits of AI capabilities, and my claims about the risk of extinction from superintelligent AGI.

00:00 Introduction
04:20 Assembly Theory
05:10 Causation and Complexity
10:07 Assembly Theory in Practice
12:23 The Concept of Assembly Index
16:54 Assembly Theory Beyond Molecules
30:13 P(Doom)
32:39 The Statement on AI Risk
42:18 Agency and Intent
47:10 RescueBot’s Intent vs. a Clock’s
53:42 The Future of AI and Human Jobs
57:34 The Limits of AI Creativity
01:04:33 The Complexity of the Human Brain
01:19:31 Superintelligence: Fact or Fiction?
01:29:35 Final Thoughts

Lee’s Wikipedia: https://en.wikipedia.org/wiki/Leroy_Cronin
Lee’s Twitter: https://x.com/leecronin
Lee’s paper on Assembly Theory: https://arxiv.org/abs/2206.02279

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Oct 25, 2024 • 29min

Ben Horowitz says nuclear proliferation is GOOD? I disagree.

Ben Horowitz, cofounder and General Partner at Andreessen Horowitz (a16z), says nuclear proliferation is good. I was shocked because I thought we all agreed nuclear proliferation is VERY BAD.

If Ben and a16z can’t appreciate the existential risks of nuclear weapons proliferation, why would anyone ever take them seriously on the topic of AI regulation?

00:00 Introduction
00:49 Ben Horowitz on Nuclear Proliferation
02:12 Ben Horowitz on Open Source AI
05:31 Nuclear Non-Proliferation Treaties
10:25 Escalation Spirals
15:20 Rogue Actors
16:33 Nuclear Accidents
17:19 Safety Mechanism Failures
20:34 The Role of Human Judgment in Nuclear Safety
21:39 The 1983 Soviet Nuclear False Alarm
22:50 a16z’s Disingenuousness
23:46 Martin Casado and Marc Andreessen
24:31 Nuclear Equilibrium
26:52 Why I Care
28:09 Wrap Up

Sources of this episode’s video clips:

Ben Horowitz’s interview on Upstream with Erik Torenberg: https://www.youtube.com/watch?v=oojc96r3Kuo
Martin Casado and Marc Andreessen talking about AI on the a16z Podcast: https://www.youtube.com/watch?v=0wIUK0nsyUg
Roger Skaer’s TikTok: https://www.tiktok.com/@rogerskaer
George W. Bush and John Kerry Presidential Debate (September 30, 2004): https://www.youtube.com/watch?v=WYpP-T0IcyA
Barack Obama’s Prague Remarks on Nuclear Disarmament: https://www.youtube.com/watch?v=QKSn1SXjj2s
John Kerry’s Remarks at the 2015 Nuclear Nonproliferation Treaty Review Conference: https://www.youtube.com/watch?v=LsY1AZc1K7w

Show notes:

Nuclear War, A Scenario by Annie Jacobsen: https://www.amazon.com/Nuclear-War-Scenario-Annie-Jacobsen/dp/0593476093
Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb: https://en.wikipedia.org/wiki/Dr._Strangelove
1961 Goldsboro B-52 Crash: https://en.wikipedia.org/wiki/1961_Goldsboro_B-52_crash
1983 Soviet Nuclear False Alarm Incident: https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident
List of military nuclear accidents: https://en.wikipedia.org/wiki/List_of_military_nuclear_accidents

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Oct 13, 2024 • 1h 13min

“AI Snake Oil” Prof. Arvind Narayanan Can't See AGI Coming | Liron Reacts

Arvind Narayanan, a Professor at Princeton and author of "AI Snake Oil," joins fellow podcaster Robert Wright to dissect the nature of AI. They debate whether AI should be viewed as just another technological advance rather than a groundbreaking shift. The conversation dives into the limitations of AI compared to human intelligence, the serious implications for cybersecurity, and the urgency of addressing advanced AI threats. They also grapple with the potential of AGI and its unpredictability, showcasing a range of perspectives on the future of technology.
Oct 8, 2024 • 2h 12min

Dr. Keith Duggar has a high P(doom)?! Debate with MLST Co-host

Dr. Keith Duggar, Co-host of Machine Learning Street Talk, brings his expertise in AI and computation theory to the table. He engages in a compelling debate on the distinction between Turing Machines and LLMs, exploring their implications for AI limitations. The conversation dives into the P(doom) concept, human misuse of superintelligence, and urgent calls for policy action. Additionally, they unravel the boundaries of AI problem-solving capabilities and navigate the complexities of aligning AI with human values, making for a thought-provoking discussion.
Oct 4, 2024 • 46min

Getting Arrested for Barricading OpenAI's Office to Stop AI

Sam Kirchner and Remmelt Ellen, leaders of the Stop AI movement, think the only way to effectively protest superintelligent AI development is with civil disobedience. Not only are they staging regular protests in front of AI labs, they’re barricading the entrances and blocking traffic, then allowing themselves to be repeatedly arrested.

Is civil disobedience the right strategy to pause or stop AI?

00:00 Introducing Stop AI
00:38 Arrested at OpenAI Headquarters
01:14 Stop AI’s Funding
01:26 Blocking Entrances Strategy
03:12 Protest Logistics and Arrest
08:13 Blocking Traffic
12:52 Arrest and Legal Consequences
18:31 Commitment to Nonviolence
21:17 A Day in the Life of a Protestor
21:38 Civil Disobedience
25:29 Planning the Next Protest
28:09 Stop AI Goals and Strategies
34:27 The Ethics and Impact of AI Protests
42:20 Call to Action

Show Notes

StopAI's next protest is on October 21, 2024 at OpenAI, 575 Florida St, San Francisco, CA 94110.

StopAI Website: https://StopAI.info
StopAI Discord: https://discord.gg/gbqGUt7ZN4

Disclaimer: I (Liron) am not part of StopAI, but I am a member of PauseAI, which also has a website and Discord you can join.

PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA

There's also a special #doom-debates channel in the PauseAI Discord just for us :)

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Oct 2, 2024 • 1h 10min

Q&A #1 Part 2: Stock Picking, Creativity, Types of Doomers, Favorite Books

This episode is a continuation of Q&A #1 Part 1 where I answer YOUR questions!

00:00 Introduction
01:20 Planning for a good outcome?
03:10 Stock Picking Advice
08:42 Dumbing It Down for Dr. Phil
11:52 Will AI Shorten Attention Spans?
12:55 Historical Nerd Life
14:41 YouTube vs. Podcast Metrics
16:30 Video Games
26:04 Creativity
30:29 Does AI Doom Explain the Fermi Paradox?
36:37 Grabby Aliens
37:29 Types of AI Doomers
44:44 Early Warning Signs of AI Doom
48:34 Do Current AIs Have General Intelligence?
51:07 How Liron Uses AI
53:41 Is “Doomer” a Good Term?
57:11 Liron’s Favorite Books
01:05:21 Effective Altruism
01:06:36 The Doom Debates Community

---

Show Notes

PauseAI Discord: https://discord.gg/2XXWXvErfA
Robin Hanson’s Grabby Aliens theory: https://grabbyaliens.com
Prof. David Kipping’s response to Robin Hanson’s Grabby Aliens: https://www.youtube.com/watch?v=tR1HTNtcYw0
My explanation of “AI completeness”, but actually I made a mistake because the term I previously coined is “goal completeness”: https://www.lesswrong.com/posts/iFdnb8FGRF4fquWnc/goal-completeness-is-like-turing-completeness-for-agi
^ Goal-Completeness (and the corresponding Shapira-Yudkowsky Thesis) might be my best/only original contribution to AI safety research, albeit a small one. Max Tegmark even retweeted it.
a16z's Ben Horowitz claiming nuclear proliferation is good, actually: https://x.com/liron/status/1690087501548126209

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Oct 1, 2024 • 1h 2min

Q&A #1 Part 1: College, Asperger's, Elon Musk, Double Crux, Liron's IQ

Thanks for being one of the first Doom Debates subscribers and sending in your questions! This episode is Part 1; stay tuned for Part 2 coming soon.

00:00 Introduction
01:17 Is OpenAI a sinking ship?
07:25 College Education
13:20 Asperger's
16:50 Elon Musk: Genius or Clown?
22:43 Double Crux
32:04 Why Call Doomers a Cult?
36:45 How I Prepare Episodes
40:29 Dealing with AI Unemployment
44:00 AI Safety Research Areas
46:09 Fighting a Losing Battle
53:03 Liron’s IQ
01:00:24 Final Thoughts

Explanation of Double Crux: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding

Best Doomer Arguments

The LessWrong sequences by Eliezer Yudkowsky: https://ReadTheSequences.com
LethalIntelligence.ai — Directory of people who are good at explaining doom
Rob Miles’ Explainer Videos: https://www.youtube.com/c/robertmilesai
For Humanity Podcast with John Sherman: https://www.youtube.com/@ForHumanityPodcast
PauseAI community — https://PauseAI.info — join the Discord!
AISafety.info — Great reference for various arguments

Best Non-Doomer Arguments

Carl Shulman: https://www.dwarkeshpatel.com/p/carl-shulman
Quintin Pope and Nora Belrose: https://optimists.ai
Robin Hanson: https://www.youtube.com/watch?v=dTQb6N3_zu8

How I prepared to debate Robin Hanson

Ideological Turing Test (me taking Robin’s side): https://www.youtube.com/watch?v=iNnoJnuOXFA
Walkthrough of my outline of prepared topics: https://www.youtube.com/watch?v=darVPzEhh-I

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Sep 25, 2024 • 1h 15min

Doom Tiffs #1: Amjad Masad, Eliezer Yudkowsky, Helen Toner, Roon, Lee Cronin, Naval Ravikant, Martin Casado, Yoshua Bengio

In today’s episode, instead of reacting to a long-form presentation of someone’s position, I’m reporting on the various AI x-risk-related tiffs happening in my part of the world. And by “my part of the world” I mean my Twitter feed.

00:00 Introduction
01:55 Followup to my MLST reaction episode
03:48 Double Crux
04:53 LLMs: Finite State Automata or Turing Machines?
16:11 Amjad Masad vs. Helen Toner and Eliezer Yudkowsky
17:29 How Will AGI Literally Kill Us?
33:53 Roon
37:38 Prof. Lee Cronin
40:48 Defining AI Creativity
43:44 Naval Ravikant
46:57 Pascal's Scam
54:10 Martin Casado and SB 1047
01:12:26 Final Thoughts

Links referenced in the episode:

* Eliezer Yudkowsky’s interview on the Logan Bartlett Show. Highly recommended: https://www.youtube.com/watch?v=_8q9bjNHeSo
* Double Crux, the core rationalist technique I use when I’m “debating”: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding
* The problem with arguing “by definition”, a classic LessWrong post: https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition

Twitter people referenced:

* Amjad Masad: https://x.com/amasad
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Helen Toner: https://x.com/hlntnr
* Roon: https://x.com/tszzl
* Lee Cronin: https://x.com/leecronin
* Naval Ravikant: https://x.com/naval
* Geoffrey Miller: https://x.com/primalpoly
* Martin Casado: https://x.com/martin_casado
* Yoshua Bengio: https://x.com/yoshua_bengio
* Your boy: https://x.com/liron

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Sep 18, 2024 • 2h 7min

Can GPT o1 Reason? | Liron Reacts to Tim Scarfe & Keith Duggar

In this engaging discussion, Tim Scarfe and Keith Duggar, hosts of Machine Learning Street Talk, dive into the capabilities of OpenAI's new model, o1. They explore the true meaning of "reasoning," contrasting it with human thought processes. The duo analyses computability and complexity theories, revealing significant limitations in AI reasoning. They also tackle the philosophical implications of AI's optimization abilities versus genuine reasoning. With witty banter, they raise intriguing questions about the future of AI and its potential pitfalls.
