
Doom Debates

Latest episodes

Jun 22, 2024 • 57min

AI Doom Debate: Will AGI’s analysis paralysis save humanity?

My guest Rob thinks superintelligent AI will suffer from analysis paralysis from trying to achieve a 100% probability of killing humanity. Since AI won’t be satisfied with a mere 99.9% chance of defeating us, it won’t dare to try, and we’ll live!

Doom Debates catalogues all the different stops where people get off the “doom train”: all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.

Follow Rob: https://x.com/LoB_Blacksage

If you want to get the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@DoomDebates
4. Follow me on Twitter - https://x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jun 21, 2024 • 28min

AI Doom Debate: Steven Pinker vs. Liron Shapira

Today I’m debating the one & only Professor Steven Pinker!!! Well, I kind of am, in my head. Let me know if you like this format…

Dr. Pinker is optimistic that AI doom worries are overblown. But I find his arguments shallow, and I’m disappointed with his overall approach to the AI doom discourse.

Here’s the full video of Steven Pinker talking to Michael C. Moynihan on this week’s episode of “Honestly with Bari Weiss”: https://youtube.com/watch?v=mTuH1Ucbif4
Jun 20, 2024 • 26min

AI Doom Debate: What's a plausible alignment scenario?

RJ, a pseudonymous listener, volunteered to debate me.

Follow RJ: https://x.com/impershblknight
Jun 18, 2024 • 12min

Q&A: How scary is a superintelligent football coach?

Danny asks:

> You've said that an intelligent AI would lead to doom because it would be an excellent goal-to-action mapper. A great football coach like Andy Reid is a great goal-to-action mapper. He's on the sidelines, but he knows exactly what actions his team needs to execute to achieve the goal and win the game.

> But if he had a team of chimpanzees or elementary schoolers, or just players who did not want to cooperate, then his team would not execute his plans and they would lose. And even his very talented team of highly motivated players, who also want to win the game, sometimes execute his plans badly.

> Now, an intelligent AI that does not control a robot army has very limited ability to perform precise acts in the physical world. From within the virtual world, an AI would not be able to get animals or plants to carry out specific actions that it wants performed. I don't see how the AI could get monkeys or dolphins to maintain power plants or build chips.

> The AI needs humans to carry out its plans, but in the real physical world, when dealing with humans, knowing what you want people to do is a small part of the equation. Won't the AI in practice struggle to get humans to execute its plans in the precise way that it needs?

Follow Danny: https://x.com/Danno28_
Follow Liron: https://x.com/liron

Please join my email list: DoomDebates.com
Jun 17, 2024 • 1h 17min

AI Doom Debate: George Hotz vs. Liron Shapira

Today I’m going to play you my debate with the brilliant hacker and entrepreneur George Hotz. This took place on an X Space last August.

Prior to our debate, George had debated Eliezer Yudkowsky on the Dwarkesh Podcast.

Follow George: https://x.com/realGeorgeHotz
Follow Liron: https://x.com/liron
Jun 16, 2024 • 56min

Should we gamble on AGI before all 8 billion of us die?

Chase Mann claims accelerating AGI timelines is the best thing we can do for the survival of the 8 billion people alive today. I claim pausing AI is still the highest-expected-utility decision for everyone.

Who do you agree with? Comment on my Substack/X/YouTube and let me know!

Follow Chase: https://x.com/ChaseMann
Follow Liron: https://x.com/liron

LessWrong has some great posts about cryonics: https://www.lesswrong.com/tag/cryonics
Jun 14, 2024 • 33min

Can humans judge AI's arguments?

It’s a monologue episode!

* Robin Hanson’s blog: https://OvercomingBias.com
* Robin Hanson’s famous concept, the Great Filter: https://en.wikipedia.org/wiki/Great_Filter
* Robin Hanson’s groundbreaking 2021 solution to the Fermi Paradox: https://GrabbyAliens.com
* Robin Hanson’s conversation with Ronny Fernandez about AI doom from May 2023
* My tweet about whether we can hope to control superintelligent AI by judging its explanations and arguments: https://x.com/liron/status/1798135026166698239
* Zvi Mowshowitz’s blog, where he posts excellent weekly AI roundups: https://thezvi.wordpress.com
* A takedown of Chris Dixon (Andreessen Horowitz)’s book about the nonsensical “Web3” pitch, which, despite being terribly argued, manages to trick a significant number of readers into thinking they just read a good argument: https://www.citationneeded.news/review-read-write-own-by-chris-dixon/ (Or maybe you think Chris’s book makes total sense, in which case you can observe that a significant number of smart people somehow don’t get how much sense it makes.)
* Eliezer Yudkowsky’s famous post about Newcomb’s Problem: https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality
Jun 12, 2024 • 40min

What this "Doom Debates" podcast is about

Welcome and thanks for listening!

* Why is Liron finally starting a podcast?
* Who does Liron want to debate?
* What’s the debate format?
* What are Liron’s credentials?
* Is someone “rational” like Liron actually just a religious cult member?

Follow Ori on Twitter: https://x.com/ygrowthco

Make sure to subscribe for more episodes!
Jun 10, 2024 • 39min

AI Doom Debate: Liron Shapira vs. Kelvin Santos

Kelvin is optimistic that the forces of economic competition will keep AIs sufficiently aligned with humanity by the time they become superintelligent.

He thinks AIs and humans will plausibly use interoperable money systems (powered by crypto). So even if our values diverge, the AIs will still uphold a system that respects ownership rights, such that humans may hold onto a nontrivial share of capital with which to pursue human values.

I view these kinds of scenarios as wishful thinking, with probability much lower than that of the simple undignified scenario I expect, wherein the first uncontrollable AGI correctly realizes what dodos we are, in both senses of the word.
