Doom Debates

Liron Shapira
Jul 8, 2024 • 2h 9min

Robin Hanson vs. Liron Shapira: Is Near-Term Extinction From AGI Plausible?

Robin Hanson is a legend in the rationality community and one of my biggest intellectual influences. In 2008, he famously debated Eliezer Yudkowsky about AI doom via a sequence of dueling blog posts known as the great Hanson-Yudkowsky Foom Debate. This debate picks up where Hanson-Yudkowsky left off, revisiting key arguments in the light of recent AI advances.

My position is similar to Eliezer's: P(doom) is on the order of 50%. Robin's position is shockingly different: P(doom) is below 1%.

00:00 Announcements
03:18 Debate Begins
05:41 Discussing AI Timelines and Predictions
19:54 Economic Growth and AI Impact
31:40 Outside Views vs. Inside Views on AI
46:22 Predicting Future Economic Growth
51:10 Historical Doubling Times and Future Projections
54:11 Human Brain Size and Economic Metrics
57:20 The Next Era of Innovation
01:07:41 AI and Future Predictions
01:14:24 The Vulnerable World Hypothesis
01:16:27 AI Foom
01:28:15 Genetics and Human Brain Evolution
01:29:24 The Role of Culture in Human Intelligence
01:31:36 Brain Size and Intelligence Debate
01:33:44 AI and Goal-Completeness
01:35:10 AI Optimization and Economic Impact
01:41:50 Feasibility of AI Alignment
01:55:21 AI Liability and Regulation
02:05:26 Final Thoughts and Wrap-Up

Robin's links:
Twitter: x.com/RobinHanson
Home Page: hanson.gmu.edu

Robin’s top related essays:
* What Are Reasonable AI Fears?
* AIs Will Be Our Mind Children

PauseAI links:
https://pauseai.info/
https://discord.gg/2XXWXvErfA

Check out https://youtube.com/@ForHumanityPodcast, the other podcast raising the alarm about AI extinction!

For the full Doom Debates experience:
* Subscribe to https://youtube.com/@DoomDebates
* Subscribe to the Substack: https://DoomDebates.com
* Search "Doom Debates" to subscribe in your podcast player
* Follow me at https://x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jul 5, 2024 • 49min

Preparing for my AI Doom Debate with Robin Hanson

This episode is a comprehensive preparation session for my upcoming debate on AI doom with the legendary Robin Hanson.

Robin’s P(doom) is <1% while mine is 50%. How do we reconcile this?

I’ve researched past debates, blogs, tweets, and scholarly discussions related to AI doom, and plan to focus our debate on the cruxes of disagreement between Robin’s position and my own Eliezer Yudkowsky-like position.

Key topics include the probability of humanity’s extinction due to uncontrollable AGI, alignment strategies, AI capabilities and timelines, the impact of AI advancements, and various predictions made by Hanson.

00:00 Introduction
03:37 Opening Statement
04:29 Value-Extinction Spectrum
05:34 Future AI Capabilities
08:23 AI Timelines
13:23 What can't current AIs do
15:48 Architecture/Algorithms vs. Content
17:40 Cyc
18:55 Is intelligence many different things, or one thing?
19:31 Goal-Completeness
20:44 AIXI
22:10 Convergence in AI systems
23:02 Foom
26:00 Outside view: Extrapolating robust trends
26:18 Salient Events Timeline
30:56 Eliezer's claim about meta-levels affecting capability growth rates
33:53 My claim - the optimization power model trumps these outside-view trends
35:19 Aren't there many other possible outside views?
37:03 Is alignment feasible?
40:14 What's the warning shot that would make you concerned?
41:07 Future Foom evidence?
44:59 How else have Robin's views changed in the last decade?

Doom Debates catalogues all the different stops where people get off the "doom train", all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.

If you'd like the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack — DoomDebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos — youtube.com/@doomdebates
4. Follow me on Twitter — x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jun 30, 2024 • 1h 33min

Robin Hanson debate prep: Liron argues *against* AI doom!

I’ve been studying Robin Hanson’s catalog of writings and interviews in preparation for our upcoming AI doom debate. Now I’m doing an exercise where I step into Robin’s shoes and make the strongest possible case for his non-doom position!

This exercise is called the Ideological Turing Test, and it’s based on the idea that it’s only productive to argue against someone if you understand what you’re arguing against. Being able to argue *for* a position proves that you understand it.

My guest David Xu is a fellow AI doomer and deep thinker who volunteered to argue the doomer position against my version of non-doomer “Robin”.

00:00 Upcoming Debate with Dr. Robin Hanson
01:15 David Xu's Background and Perspective
02:23 The Ideological Turing Test
02:39 David's AI Doom Claim
03:44 AI Takeover vs. Non-AI Descendants
05:21 Paperclip Maximizer
15:53 Economic Trends and AI Predictions
27:18 Recursive Self-Improvement and Foom
29:14 Comparing Models of Intelligence
34:53 The Foom Scenario
36:04 Coordination and Lawlessness in AI
37:49 AI's Goal-Directed Behavior and Economic Models
40:02 Multipolar Outcomes and AI Coordination
40:58 The Orthogonality Thesis and AI Firms
43:18 AI's Potential to Exceed Human Control
45:03 The Argument for AI Misalignment
48:22 Economic Trends vs. AI Catastrophes
59:13 The Race for AI Dominance
01:04:09 AI Escaping Control
01:04:45 AI Liability and Insurance
01:06:14 Economic Dynamics and AI Threats
01:07:18 The Balance of Offense and Defense in AI
01:08:38 AI's Potential to Disrupt National Infrastructure
01:10:17 The Multipolar Outcome of AI Development
01:11:00 Human Role in AI-Driven Future
01:12:19 Debating the Discontinuity in AI Progress
01:25:26 Closing Statements and Final Thoughts
01:30:34 Reflecting on the Debate and Future Discussions

Follow David: https://x.com/davidxu90

The Ideological Turing Test (ITT) was coined by Bryan Caplan in this classic post: https://www.econlib.org/archives/2011/06/the_ideological.html

I also did a Twitter version of the ITT here: https://x.com/liron/status/1789688119773872273

Doom Debates catalogues all the different stops where people get off the "doom train", all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.

If you'd like the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@doomdebates
4. Follow me on Twitter - https://x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jun 26, 2024 • 1h 5min

AI Doom Q&A

Today I'm answering questions from listener Tony Warren.

1:16 Biological imperatives in machine learning
2:22 Evolutionary pressure vs. AI training
4:15 Instrumental convergence and AI goals
6:46 Human vs. AI problem domains
9:20 AI vs. human actuators
18:04 Evolution and intelligence
33:23 Maximum intelligence
54:55 Computational limits and the future

Follow Tony: https://x.com/Pove_iOS

---

Doom Debates catalogues all the different stops where people get off the "doom train", all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.

If you'd like the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@doomdebates
4. Follow me on Twitter - https://x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jun 22, 2024 • 57min

AI Doom Debate: Will AGI’s analysis paralysis save humanity?

My guest Rob thinks superintelligent AI will suffer from analysis paralysis from trying to achieve a 100% probability of killing humanity. Since the AI won’t be satisfied with a 99.9% chance of defeating us, it won’t dare to try, and we’ll live!

Doom Debates catalogues all the different stops where people get off the “doom train”, all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.

Follow Rob: https://x.com/LoB_Blacksage

If you want to get the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@DoomDebates
4. Follow me on Twitter - https://x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jun 21, 2024 • 28min

Steven Pinker's Flimsy Arguments for AI Optimism

Today I’m debating the one & only Professor Steven Pinker!!! Well, I kind of am, in my head. Let me know if you like this format…

Dr. Pinker is optimistic that AI doom worries are overblown. But I find his arguments shallow, and I’m disappointed with his overall approach to the AI doom discourse.

Here’s the full video of Steven Pinker talking to Michael C. Moynihan on this week’s episode of “Honestly with Bari Weiss”: https://youtube.com/watch?v=mTuH1Ucbif4

If you want to get the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@DoomDebates
4. Follow me on Twitter - https://x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jun 20, 2024 • 26min

AI Doom Debate: What's a plausible alignment scenario?

RJ, a pseudonymous listener, volunteered to debate me.

Follow RJ: https://x.com/impershblknight

If you want to get the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@doomdebates
4. Follow me on Twitter - https://x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jun 18, 2024 • 12min

Q&A: How scary is a superintelligent football coach?

Danny asks:

> You've said that an intelligent AI would lead to doom because it would be an excellent goal-to-action mapper. A great football coach like Andy Reid is a great goal-to-action mapper. He's on the sidelines, but he knows exactly what actions his team needs to execute to achieve the goal and win the game.

> But if he had a team of chimpanzees or elementary schoolers, or just players who did not want to cooperate, then his team would not execute his plans and they would lose. And even his very talented team of highly motivated players who also want to win the game sometimes execute his plans badly. Now an intelligent AI that does not control a robot army has very limited ability to perform precise acts in the physical world. From within the virtual world, an AI would not be able to get animals or plants to carry out specific actions that it wants performed. I don't see how the AI could get monkeys or dolphins to maintain power plants or build chips.

> The AI needs humans to carry out its plans, but in the real physical world, when dealing with humans, knowing what you want people to do is a small part of the equation. Won't the AI in practice struggle to get humans to execute its plans in the precise way that it needs?

Follow Danny: https://x.com/Danno28_
Follow Liron: https://x.com/liron

Please join my email list: DoomDebates.com

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jun 17, 2024 • 1h 17min

AI Doom Debate: George Hotz vs. Liron Shapira

Today I’m going to play you my debate with the brilliant hacker and entrepreneur George Hotz. This took place on an X Space last August.

Prior to our debate, George had debated Eliezer Yudkowsky on the Dwarkesh Podcast.

Follow George: https://x.com/realGeorgeHotz
Follow Liron: https://x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jun 16, 2024 • 56min

Should we gamble on AGI before all 8 billion of us die?

Chase Mann claims accelerating AGI timelines is the best thing we can do for the survival of the 8 billion people alive today. I claim pausing AI is still the highest-expected-utility decision for everyone.

Who do you agree with? Comment on my Substack/X/YouTube and let me know!

Follow Chase: https://x.com/ChaseMann
Follow Liron: https://x.com/liron

LessWrong has some great posts about cryonics: https://www.lesswrong.com/tag/cryonics

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
