
Doom Debates

Latest episodes

Aug 15, 2024 • 49min

David Brooks's Non-Doomer Non-Argument in the NY Times | Liron Reacts

John Sherman and I go through David Brooks’s appallingly bad article in the New York Times titled “Many People Fear AI. They Shouldn’t.”

For Humanity is basically the sister podcast to Doom Debates. We have the same mission: to raise awareness of the urgent AI extinction threat and build grassroots support for pausing new AI capabilities development until it’s safe for humanity.

Subscribe to it on YouTube: https://www.youtube.com/@ForHumanityPodcast

Follow it on X: https://x.com/ForHumanityPod
Aug 13, 2024 • 1h 26min

Richard Sutton Dismisses AI Extinction Fears with Simplistic Arguments | Liron Reacts

Dr. Richard Sutton is a Professor of Computing Science at the University of Alberta, known for his pioneering work on reinforcement learning and his “bitter lesson”: that scaling up an AI’s data and compute gives better results than having programmers try to handcraft or explicitly understand how the AI works.

Dr. Sutton famously claims that AIs are the “next step in human evolution”, a positive force for progress rather than a catastrophic extinction risk comparable to nuclear weapons.

Let’s examine Sutton’s recent interview with Daniel Faggella to understand his crux of disagreement with the AI doom position.

---

00:00 Introduction
03:33 The Worthy vs. Unworthy AI Successor
04:52 “Peaceful AI”
07:54 “Decentralization”
11:57 AI and Human Cooperation
14:54 Micromanagement vs. Decentralization
24:28 Discovering Our Place in the World
33:45 Standard Transhumanism
44:29 AI Traits and Environmental Influence
46:06 The Importance of Cooperation
48:41 The Risk of Superintelligent AI
57:25 The Treacherous Turn and AI Safety
01:04:28 The Debate on AI Control
01:13:50 The Urgency of AI Regulation
01:21:41 Final Thoughts and Call to Action

---

Original interview with Daniel Faggella: youtube.com/watch?v=fRzL5Mt0c8A

Follow Richard Sutton: x.com/richardssutton
Follow Daniel Faggella: x.com/danfaggella
Follow Liron: x.com/liron

Subscribe to my YouTube channel for full episodes and other bonus content: youtube.com/@DoomDebates
Aug 8, 2024 • 1h 41min

AI Doom Debate: “Cards Against Humanity” Co-Creator David Pinsof

David Pinsof is co-creator of the wildly popular Cards Against Humanity and a social science researcher at UCLA Social Minds Lab. He writes a blog called “Everything Is B******t”.

He sees AI doomers as making many different questionable assumptions, and he sees himself as poking holes in those assumptions.

I don’t see it that way at all; I think the doom claim is the “default expectation” we ought to have if we understand basic things about intelligence.

At any rate, I think you’ll agree that his attempt to poke holes in my doom claims on today’s podcast is super good-natured and interesting.

00:00 Introducing David Pinsof
04:12 David’s P(doom)
05:38 Is intelligence one thing?
21:14 Humans vs. other animals
37:01 The Evolution of Human Intelligence
37:25 Instrumental Convergence
39:05 General Intelligence and Physics
40:25 The Blind Watchmaker Analogy
47:41 Instrumental Convergence
01:02:23 Superintelligence and Economic Models
01:12:42 Comparative Advantage and AI
01:19:53 The Fermi Paradox for Animal Intelligence
01:34:57 Closing Statements

Follow David: x.com/DavidPinsof
Follow Liron: x.com/liron

Thanks for watching. You can support Doom Debates by subscribing to the Substack, the YouTube channel (full episodes and bonus content), subscribing in your podcast player, and leaving a review on Apple Podcasts.
Aug 5, 2024 • 52min

P(Doom) Estimates Shouldn't Inform Policy??

Princeton Comp Sci Ph.D. candidate Sayash Kapoor co-authored a blog post last week with his professor Arvind Narayanan called "AI Existential Risk Probabilities Are Too Unreliable To Inform Policy".

While some non-doomers embraced the arguments, I see it as contributing nothing to the discourse besides demonstrating a popular failure mode: a simple misunderstanding of the basics of Bayesian epistemology.

I break down Sayash's recent episode of Machine Learning Street Talk point-by-point to analyze his claims from the perspective of the one true epistemology: Bayesian epistemology.

00:00 Introduction
03:40 Bayesian Reasoning
04:33 Inductive vs. Deductive Probability
05:49 Frequentism vs Bayesianism
16:14 Asteroid Impact and AI Risk Comparison
28:06 Quantification Bias
31:50 The Extinction Prediction Tournament
36:14 Pascal's Wager and AI Risk
40:50 Scaling Laws and AI Progress
45:12 Final Thoughts

My source material is Sayash's episode of Machine Learning Street Talk: https://www.youtube.com/watch?v=BGvQmHd4QPE

I also recommend reading Scott Alexander’s related post: https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist

Sayash's blog post that he was being interviewed about is called "AI existential risk probabilities are too unreliable to inform policy": https://www.aisnakeoil.com/p/ai-existential-risk-probabilities

Follow Sayash: https://x.com/sayashk
Jul 31, 2024 • 2h 37min

Liron Reacts to Martin Casado's AI Claims

Martin Casado is a General Partner at Andreessen Horowitz (a16z) who has strong views about AI.

He claims that AI is basically just a buzzword for statistical models and simulations. As a result of this worldview, he only predicts incremental AI progress that doesn’t pose an existential threat to humanity, and he sees AI regulation as a net negative.

I set out to understand his worldview around AI, and pinpoint the crux of disagreement with my own view.

Spoiler: I conclude that Martin needs to go beyond analyzing AI as just statistical models and simulations, and analyze it using the more predictive concept of “intelligence” in the sense of hitting tiny high-value targets in exponentially-large search spaces.

If Martin appreciated that intelligence is a quantifiable property that algorithms have, and that our existing AIs are getting close to surpassing human-level general intelligence, then hopefully he’d come around to raising his P(doom) and appreciating the urgent extinction risk we face.

00:00 Introducing Martin Casado
01:42 Martin’s AGI Timeline
05:39 Martin’s Analysis of Self-Driving Cars
15:30 Heavy-Tail Distributions
38:03 Understanding General Intelligence
38:29 AI's Progress in Specific Domains
43:20 AI’s Understanding of Meaning
47:16 Compression and Intelligence
48:09 Symbol Grounding
53:24 Human Abstractions and AI
01:18:18 The Frontier of AI Applications
01:23:04 Human vs. AI: Concept Creation and Reasoning
01:25:51 The Complexity of the Universe and AI's Limitations
01:28:16 AI's Potential in Biology and Simulation
01:32:40 The Essence of Intelligence and Creativity in AI
01:41:13 AI's Future Capabilities
02:00:29 Intelligence vs. Simulation
02:14:59 AI Regulation
02:23:05 Concluding Thoughts

Watch the original episode of the Cognitive Revolution podcast with Martin and host Nathan Labenz.

Follow Martin: @martin_casado
Follow Nate: @labenz
Follow Liron: @liron

Subscribe to the Doom Debates YouTube Channel to get full episodes plus other bonus content!

Search “Doom Debates” to subscribe in your podcast player.
Jul 26, 2024 • 1h 45min

AI Doom Debate: Tilek Mamutov vs. Liron Shapira

Tilek Mamutov is a Kyrgyzstani software engineer who worked at Google X for 11 years before founding his own international software engineer recruiting company, Outtalent.

Since first encountering the AI doom argument at a Center for Applied Rationality bootcamp 10 years ago, he has considered it a serious possibility, but he doesn’t currently feel convinced that doom is likely.

Let’s explore Tilek’s worldview and pinpoint where he gets off the doom train and why!

00:12 Tilek’s Background
01:43 Life in Kyrgyzstan
04:32 Tilek’s Non-Doomer Position
07:12 Debating AI Doom Scenarios
13:49 Nuclear Weapons and AI Analogies
39:22 Privacy and Empathy in Human-AI Interaction
39:43 AI's Potential in Understanding Human Emotions
41:14 The Debate on AI's Empathy Capabilities
42:23 Quantum Effects and AI's Predictive Models
45:33 The Complexity of AI Control and Safety
47:10 Optimization Power: AI vs. Human Intelligence
48:39 The Risks of AI Self-Replication and Control
51:52 Historical Analogies and AI Safety Concerns
56:35 The Challenge of Embedding Safety in AI Goals
01:02:42 The Future of AI: Control, Optimization, and Risks
01:15:54 The Fragility of Security Systems
01:16:56 Debating AI Optimization and Catastrophic Risks
01:18:34 The Outcome Pump Thought Experiment
01:19:46 Human Persuasion vs. AI Control
01:21:37 The Crux of Disagreement: Robustness of AI Goals
01:28:57 Slow vs. Fast AI Takeoff Scenarios
01:38:54 The Importance of AI Alignment
01:43:05 Conclusion

Follow Tilek: x.com/tilek

Links

I referenced Paul Christiano’s scenario of gradual AI doom, a slower version that doesn’t require a Yudkowskian “foom”. Worth a read: What Failure Looks Like

I also referenced the concept of “edge instantiation” to explain that if you’re optimizing powerfully for some metric, you don’t get other intuitively nice things as a bonus, you *just* get the exact thing your function is measuring.
Jul 18, 2024 • 1h 33min

Liron Reacts to Mike Israetel's "Solving the AI Alignment Problem"

Dr. Mike Israetel is a well-known bodybuilder and fitness influencer with over 600,000 Instagram followers, and a surprisingly intelligent commentator on other subjects, including a whole recent episode on the AI alignment problem.

Mike brought up many points that were worth responding to, making for an interesting reaction episode. I also appreciate that he’s helping get the urgent topic of AI alignment in front of a mainstream audience.

Unfortunately, Mike doesn’t engage with the possibility that AI alignment is an intractable technical problem on a 5-20 year timeframe, which I think is more likely than not. That’s the crux of why he and I disagree, and why I see most of his episode as talking past most other intelligent positions people take on AI alignment. I hope he’ll keep engaging with the topic and rethink his position.

00:00 Introduction
03:08 AI Risks and Scenarios
06:42 Superintelligence Arms Race
12:39 The Importance of AI Alignment
18:10 Challenges in Defining Human Values
26:11 The Outer and Inner Alignment Problems
44:00 Transhumanism and AI's Potential
45:42 The Next Step In Evolution
47:54 AI Alignment and Potential Catastrophes
50:48 Scenarios of AI Development
54:03 The AI Alignment Problem
01:07:39 AI as a Helper System
01:08:53 Corporations and AI Development
01:10:19 The Risk of Unaligned AI
01:27:18 Building a Superintelligent AI
01:30:57 Conclusion

Follow Mike Israetel:
* instagram.com/drmikeisraetel
* youtube.com/@MikeIsraetelMakingProgress

Get the full Doom Debates experience:
* Subscribe to youtube.com/@DoomDebates
* Subscribe to this Substack: DoomDebates.com
* Search "Doom Debates" to subscribe in your podcast player
* Follow me at x.com/liron
Jul 12, 2024 • 1h 11min

Robin Hanson Highlights and Post-Debate Analysis

What did we learn from my debate with Robin Hanson? Did we successfully isolate the cruxes of disagreement? I actually think we did!

In this post-debate analysis, we’ll review what those key cruxes are, and why I still think I’m right and Robin is wrong about them!

I’ve taken the time to think much harder about everything Robin said during the debate, so I can give you new & better counterarguments than the ones I was able to make in realtime.

Timestamps

00:00 Debate Reactions
06:08 AI Timelines and Key Metrics
08:30 “Optimization Power” vs. “Innovation”
11:49 Economic Growth and Diffusion
17:56 Predicting Future Trends
24:23 Crux of Disagreement with Robin’s Methodology
34:59 Conjunction Argument for Low P(Doom)
37:26 Headroom Above Human Intelligence
41:13 The Role of Culture in Human Intelligence
48:01 Goal-Completeness and AI Optimization
50:48 Misaligned Foom Scenario
59:29 Monitoring AI and the Rule of Law
01:04:51 How Robin Sees Alignment
01:09:08 Reflecting on the Debate

Links

AISafety.info - The fractal of counterarguments to non-doomers’ arguments

For the full Doom Debates experience:
* Subscribe to youtube.com/@DoomDebates
* Subscribe to this Substack: DoomDebates.com
* Search "Doom Debates" to subscribe in your podcast player
* Follow me at x.com/liron
Jul 8, 2024 • 2h 9min

Robin Hanson vs. Liron Shapira: Is Near-Term Extinction From AGI Plausible?

Robin Hanson is a legend in the rationality community and one of my biggest intellectual influences.

In 2008, he famously debated Eliezer Yudkowsky about AI doom via a sequence of dueling blog posts known as the great Hanson-Yudkowsky Foom Debate. This debate picks up where Hanson-Yudkowsky left off, revisiting key arguments in the light of recent AI advances.

My position is similar to Eliezer's: P(doom) is on the order of 50%. Robin's position is shockingly different: P(doom) is below 1%.

00:00 Announcements
03:18 Debate Begins
05:41 Discussing AI Timelines and Predictions
19:54 Economic Growth and AI Impact
31:40 Outside Views vs. Inside Views on AI
46:22 Predicting Future Economic Growth
51:10 Historical Doubling Times and Future Projections
54:11 Human Brain Size and Economic Metrics
57:20 The Next Era of Innovation
01:07:41 AI and Future Predictions
01:14:24 The Vulnerable World Hypothesis
01:16:27 AI Foom
01:28:15 Genetics and Human Brain Evolution
01:29:24 The Role of Culture in Human Intelligence
01:31:36 Brain Size and Intelligence Debate
01:33:44 AI and Goal-Completeness
01:35:10 AI Optimization and Economic Impact
01:41:50 Feasibility of AI Alignment
01:55:21 AI Liability and Regulation
02:05:26 Final Thoughts and Wrap-Up

Robin's links:
Twitter: x.com/RobinHanson
Home Page: hanson.gmu.edu

Robin’s top related essays:
* What Are Reasonable AI Fears?
* AIs Will Be Our Mind Children

PauseAI links:
https://pauseai.info/
https://discord.gg/2XXWXvErfA

Check out https://youtube.com/@ForHumanityPodcast, the other podcast raising the alarm about AI extinction!

For the full Doom Debates experience:
* Subscribe to https://youtube.com/@DoomDebates
* Subscribe to the Substack: https://DoomDebates.com
* Search "Doom Debates" to subscribe in your podcast player
* Follow me at https://x.com/liron
Jul 5, 2024 • 49min

Preparing for my AI Doom Debate with Robin Hanson

This episode is a comprehensive preparation session for my upcoming debate on AI doom with the legendary Robin Hanson.

Robin’s P(doom) is <1% while mine is 50%. How do we reconcile this?

I’ve researched past debates, blogs, tweets, and scholarly discussions related to AI doom, and plan to focus our debate on the cruxes of disagreement between Robin’s position and my own Eliezer Yudkowsky-like position.

Key topics include the probability of humanity’s extinction due to uncontrollable AGI, alignment strategies, AI capabilities and timelines, the impact of AI advancements, and various predictions made by Hanson.

00:00 Introduction
03:37 Opening Statement
04:29 Value-Extinction Spectrum
05:34 Future AI Capabilities
08:23 AI Timelines
13:23 What can't current AIs do
15:48 Architecture/Algorithms vs. Content
17:40 Cyc
18:55 Is intelligence many different things, or one thing?
19:31 Goal-Completeness
20:44 AIXI
22:10 Convergence in AI systems
23:02 Foom
26:00 Outside view: Extrapolating robust trends
26:18 Salient Events Timeline
30:56 Eliezer's claim about meta-levels affecting capability growth rates
33:53 My claim - the optimization power model trumps these outside-view trends
35:19 Aren't there many other possible outside views?
37:03 Is alignment feasible?
40:14 What's the warning shot that would make you concerned?
41:07 Future Foom evidence?
44:59 How else have Robin's views changed in the last decade?

Doom Debates catalogues all the different stops where people get off the "doom train", all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.

If you'd like the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack — DoomDebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos — youtube.com/@doomdebates
4. Follow me on Twitter — x.com/liron
