
Doom Debates

Latest episodes

Jun 30, 2024 • 1h 33min

Robin Hanson debate prep: Liron argues *against* AI doom!

I’ve been studying Robin Hanson’s catalog of writings and interviews in preparation for our upcoming AI doom debate. Now I’m doing an exercise where I step into Robin’s shoes and make the strongest possible case for his non-doom position!

This exercise is called the Ideological Turing Test, and it’s based on the idea that it’s only productive to argue against someone if you understand what you’re arguing against. Being able to argue *for* a position proves that you understand it.

My guest David Xu is a fellow AI doomer and deep thinker who volunteered to argue the doomer position against my version of non-doomer “Robin”.

00:00 Upcoming Debate with Dr. Robin Hanson
01:15 David Xu's Background and Perspective
02:23 The Ideological Turing Test
02:39 David's AI Doom Claim
03:44 AI Takeover vs. Non-AI Descendants
05:21 Paperclip Maximizer
15:53 Economic Trends and AI Predictions
27:18 Recursive Self-Improvement and Foom
29:14 Comparing Models of Intelligence
34:53 The Foom Scenario
36:04 Coordination and Lawlessness in AI
37:49 AI's Goal-Directed Behavior and Economic Models
40:02 Multipolar Outcomes and AI Coordination
40:58 The Orthogonality Thesis and AI Firms
43:18 AI's Potential to Exceed Human Control
45:03 The Argument for AI Misalignment
48:22 Economic Trends vs. AI Catastrophes
59:13 The Race for AI Dominance
01:04:09 AI Escaping Control
01:04:45 AI Liability and Insurance
01:06:14 Economic Dynamics and AI Threats
01:07:18 The Balance of Offense and Defense in AI
01:08:38 AI's Potential to Disrupt National Infrastructure
01:10:17 The Multipolar Outcome of AI Development
01:11:00 Human Role in AI-Driven Future
01:12:19 Debating the Discontinuity in AI Progress
01:25:26 Closing Statements and Final Thoughts
01:30:34 Reflecting on the Debate and Future Discussions

Follow David: https://x.com/davidxu90

The Ideological Turing Test (ITT) was coined by Bryan Caplan in this classic post: https://www.econlib.org/archives/2011/06/the_ideological.html

I also did a Twitter version of the ITT here: https://x.com/liron/status/1789688119773872273

Doom Debates catalogues all the different stops where people get off the "doom train", all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.

If you'd like the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@doomdebates
4. Follow me on Twitter - https://x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jun 26, 2024 • 1h 5min

AI Doom Q&A

Today I'm answering questions from listener Tony Warren.

1:16 Biological imperatives in machine learning
2:22 Evolutionary pressure vs. AI training
4:15 Instrumental convergence and AI goals
6:46 Human vs. AI problem domains
9:20 AI vs. human actuators
18:04 Evolution and intelligence
33:23 Maximum intelligence
54:55 Computational limits and the future

Follow Tony: https://x.com/Pove_iOS
Jun 22, 2024 • 57min

AI Doom Debate: Will AGI’s analysis paralysis save humanity?

My guest Rob thinks superintelligent AI will suffer from analysis paralysis from trying to achieve a 100% probability of killing humanity. Since AI won’t be satisfied with a 99.9% chance of defeating us, it won’t dare to try, and we’ll live!

Follow Rob: https://x.com/LoB_Blacksage
Jun 21, 2024 • 28min

AI Doom Debate: Steven Pinker vs. Liron Shapira

Today I’m debating the one & only Professor Steven Pinker!!! Well, I kind of am, in my head. Let me know if you like this format…

Dr. Pinker thinks AI doom worries are overblown. But I find his arguments shallow, and I’m disappointed with his overall approach to the AI doom discourse.

Here’s the full video of Steven Pinker talking to Michael C. Moynihan on this week’s episode of “Honestly with Bari Weiss”: https://youtube.com/watch?v=mTuH1Ucbif4
Jun 20, 2024 • 26min

AI Doom Debate: What's a plausible alignment scenario?

RJ, a pseudonymous listener, volunteered to debate me.

Follow RJ: https://x.com/impershblknight
Jun 18, 2024 • 12min

Q&A: How scary is a superintelligent football coach?

Danny asks:

> You've said that an intelligent AI would lead to doom because it would be an excellent goal-to-action mapper. A great football coach like Andy Reid is a great goal-to-action mapper. He's on the sidelines, but he knows exactly what actions his team needs to execute to achieve the goal and win the game.
>
> But if he had a team of chimpanzees or elementary schoolers, or just players who did not want to cooperate, then his team would not execute his plans and they would lose. And even his very talented team of highly motivated players, who also want to win the game, sometimes execute his actions badly.
>
> Now an intelligent AI that does not control a robot army has a very limited ability to perform precise acts in the physical world. From within the virtual world, an AI would not be able to get animals or plants to carry out specific actions that it wants performed. I don't see how the AI could get monkeys or dolphins to maintain power plants or build chips.
>
> The AI needs humans to carry out its plans, but in the real physical world, when dealing with humans, knowing what you want people to do is a small part of the equation. Won't the AI in practice struggle to get humans to execute its plans in the precise way that it needs?

Follow Danny: https://x.com/Danno28_
Follow Liron: https://x.com/liron

Please join my email list: DoomDebates.com
Jun 17, 2024 • 1h 17min

AI Doom Debate: George Hotz vs. Liron Shapira

Today I’m going to play you my debate with the brilliant hacker and entrepreneur George Hotz. This took place on an X Space last August.

Prior to our debate, George had done a debate with Eliezer Yudkowsky on the Dwarkesh Podcast.

Follow George: https://x.com/realGeorgeHotz
Follow Liron: https://x.com/liron
Jun 16, 2024 • 56min

Should we gamble on AGI before all 8 billion of us die?

Chase Mann claims accelerating AGI timelines is the best thing we can do for the survival of the 8 billion people alive today. I claim pausing AI is still the highest-expected-utility decision for everyone.

Who do you agree with? Comment on my Substack/X/YouTube and let me know!

Follow Chase: https://x.com/ChaseMann
Follow Liron: https://x.com/liron

LessWrong has some great posts about cryonics: https://www.lesswrong.com/tag/cryonics
Jun 14, 2024 • 33min

Can humans judge AI's arguments?

It’s a monologue episode!

Robin Hanson’s blog: https://OvercomingBias.com

Robin Hanson’s famous concept, the Great Filter: https://en.wikipedia.org/wiki/Great_Filter

Robin Hanson’s groundbreaking 2021 solution to the Fermi Paradox: https://GrabbyAliens.com

Robin Hanson’s conversation with Ronny Fernandez about AI doom from May 2023:

My tweet about whether we can hope to control superintelligent AI by judging its explanations and arguments: https://x.com/liron/status/1798135026166698239

Zvi Mowshowitz’s blog where he posts EXCELLENT weekly AI roundups: https://thezvi.wordpress.com

A takedown of Chris Dixon (Andreessen Horowitz)’s book about the nonsensical “Web3” pitch, which despite being terribly argued is able to trick a significant number of readers into thinking they just read a good argument: https://www.citationneeded.news/review-read-write-own-by-chris-dixon/

(Or maybe you think Chris’s book makes total sense, in which case you can observe that a significant number of smart people somehow don’t get how much sense it makes.)

Eliezer Yudkowsky’s famous post about Newcomb’s Problem: https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality
Jun 12, 2024 • 40min

What this "Doom Debates" podcast is about

Welcome and thanks for listening!

* Why is Liron finally starting a podcast?
* Who does Liron want to debate?
* What’s the debate format?
* What are Liron’s credentials?
* Is someone “rational” like Liron actually just a religious cult member?

Follow Ori on Twitter: https://x.com/ygrowthco

Make sure to subscribe for more episodes!
