
Doom Debates

Latest episodes

Sep 11, 2024 • 1h 6min

Yuval Noah Harari's AI Warnings Don't Go Far Enough | Liron Reacts

Yuval Noah Harari is a historian, philosopher, and bestselling author known for his thought-provoking works on human history, the future, and our evolving relationship with technology. His 2011 book, Sapiens: A Brief History of Humankind, took the world by storm, offering a sweeping overview of human history from the emergence of Homo sapiens to the present day. Harari just published a new book, largely about AI, called Nexus: A Brief History of Information Networks from the Stone Age to AI. Let’s go through the latest interview he did as part of his book tour to see where he stands on AI extinction risk.

00:00 Introduction
04:30 Defining AI vs. non-AI
20:43 AI and Language Mastery
29:37 AI's Potential for Manipulation
31:30 Information is Connection?
37:48 AI and Job Displacement
48:22 Consciousness vs. Intelligence
52:02 The Alignment Problem
59:33 Final Thoughts

Source podcast: https://www.youtube.com/watch?v=78YN1e8UXdM
Follow Yuval Noah Harari: x.com/harari_yuval
Follow Steven Bartlett, host of Diary of a CEO: x.com/StevenBartlett

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Sep 10, 2024 • 7min

I talked to Dr. Phil about AI extinction risk!

It's finally here, the Doom Debates / Dr. Phil crossover episode you've all been asking for 😂 The full episode is called “AI: The Future of Education?”

While the main focus was AI in education, I'm glad the show briefly touched on how we're all gonna die. Everything in the show related to AI extinction is clipped here.

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Sep 6, 2024 • 1h 31min

Debate with Roman Yampolskiy: 50% vs. 99.999% P(Doom) from AI

Dr. Roman Yampolskiy is the director of the Cyber Security Lab at the University of Louisville. His new book is called AI: Unexplainable, Unpredictable, Uncontrollable.

Roman’s P(doom) from AGI is a whopping 99.999%, vastly greater than my P(doom) of 50%. It’s a rare debate when I’m LESS doomy than my opponent!

This is a cross-post from the For Humanity podcast hosted by John Sherman. For Humanity is basically a sister show of Doom Debates. Highly recommend subscribing!

00:00 John Sherman’s Intro
05:21 Diverging Views on AI Safety and Control
12:24 The Challenge of Defining Human Values for AI
18:04 Risks of Superintelligent AI and Potential Solutions
33:41 The Case for Narrow AI
45:21 The Concept of Utopia
48:33 AI's Utility Function and Human Values
55:48 Challenges in AI Safety Research
01:05:23 Breeding Program Proposal
01:14:05 The Reality of AI Regulation
01:18:04 Concluding Thoughts
01:23:19 Celebration of Life

This episode on For Humanity’s channel: https://www.youtube.com/watch?v=KcjLCZcBFoQ
For Humanity on YouTube: https://www.youtube.com/@ForHumanityPodcast
For Humanity on X: https://x.com/ForHumanityPod
Buy Roman’s new book: https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Sep 4, 2024 • 1h 28min

Jobst Landgrebe Doesn't Believe In AGI | Liron Reacts

Jobst Landgrebe, co-author of Why Machines Will Never Rule The World: Artificial Intelligence Without Fear, argues that AI is fundamentally limited in achieving human-like intelligence or consciousness due to the complexities of the human brain, which he claims are beyond mathematical modeling.

Contrary to my view, Jobst has a very low opinion of what machines will be able to achieve in the coming years and decades.

He’s also a devout Christian, which makes our clash of perspectives funnier.

00:00 Introduction
03:12 AI Is Just Pattern Recognition?
06:46 Mathematics and the Limits of AI
12:56 Complex Systems and Thermodynamics
33:40 Transhumanism and Genetic Engineering
47:48 Materialism
49:35 Transhumanism as Neo-Paganism
01:02:38 AI in Warfare
01:11:55 Is This Science?
01:25:46 Conclusion

Source podcast: https://www.youtube.com/watch?v=xrlT1LQSyNU

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Aug 29, 2024 • 1h 9min

Arvind Narayanan Makes AI Sound Normal | Liron Reacts

Today I’m reacting to the 20VC podcast with Harry Stebbings and Princeton professor Arvind Narayanan.

Prof. Narayanan is known for his critical perspective on the misuse and over-hype of artificial intelligence, which he often refers to as “AI snake oil”. Narayanan’s critiques aim to highlight the gap between what AI can realistically achieve and the often misleading promises made by companies and researchers. I analyze Arvind’s takes on the comparative dangers of AI and nuclear weapons, the limitations of current AI models, and AI’s trajectory toward being a commodity rather than a superintelligent god.

00:00 Introduction
01:21 Arvind’s Perspective on AI
02:07 Debating AI's Compute and Performance
03:59 Synthetic Data vs. Real Data
05:59 The Role of Compute in AI Advancement
07:30 Challenges in AI Predictions
26:30 AI in Organizations and Tacit Knowledge
33:32 The Future of AI: Exponential Growth or Plateau?
36:26 Relevance of Benchmarks
39:02 AGI
40:59 Historical Predictions
46:28 OpenAI vs. Anthropic
52:13 Regulating AI
56:12 AI as a Weapon
01:02:43 Sci-Fi
01:07:28 Conclusion

Original source: https://www.youtube.com/watch?v=8CvjVAyB4O4
Follow Arvind Narayanan: x.com/random_walker
Follow Harry Stebbings: x.com/HarryStebbings

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Aug 27, 2024 • 1h 2min

Bret Weinstein Bungles It On AI Extinction | Liron Reacts

Today I’m reacting to Bret Weinstein’s recent appearance on the Diary of a CEO podcast with Steven Bartlett. Bret is an evolutionary biologist known for his outspoken views on social and political issues.

Bret gets off to a promising start, saying that AI risk should be “top of mind” and that it poses “five existential threats”. But his analysis is shallow and ad hoc, and it ends with him dismissing the idea of using regulation as a tool to save our species from a recognized existential threat.

I believe we can raise the level of AI doom discourse by calling out these kinds of basic flaws in popular media on the subject.

00:00 Introduction
02:02 Existential Threats from AI
03:32 The Paperclip Problem
04:53 Moral Implications of Ending Suffering
06:31 Inner vs. Outer Alignment
08:41 AI as a Tool for Malicious Actors
10:31 Attack vs. Defense in AI
18:12 The Event Horizon of AI
21:42 Is Language More Prime Than Intelligence?
38:38 AI and the Danger of Echo Chambers
46:59 AI Regulation
51:03 Mechanistic Interpretability
56:52 Final Thoughts

Original source: youtube.com/watch?v=_cFu-b5lTMU
Follow Bret Weinstein: x.com/BretWeinstein
Follow Steven Bartlett: x.com/StevenBartlett

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Aug 26, 2024 • 56min

SB 1047 AI Regulation Debate: Holly Elmore vs. Greg Tanaka

California's SB 1047 bill, authored by CA State Senator Scott Wiener, is the leading attempt by a US state to regulate catastrophic risks from frontier AI in the wake of President Biden's 2023 AI Executive Order.

Today’s debate:
Holly Elmore, Executive Director of Pause AI US, representing the Pro-SB 1047 side
Greg Tanaka, Palo Alto City Councilmember, representing the Anti-SB 1047 side

Key bill supporters: Geoffrey Hinton, Yoshua Bengio, Anthropic, PauseAI, and about a 2/3 majority of California voters surveyed.
Key bill opponents: OpenAI, Google, Meta, Y Combinator, Andreessen Horowitz

Links
Greg mentioned that the "Supporters & Opponents" tab on this page lists organizations that registered their support or opposition; the vast majority of organizations listed there registered opposition to the bill: https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047
Holly mentioned surveys of California voters showing popular support for the bill:
1. Center for AI Safety survey shows 77% support: https://drive.google.com/file/d/1wmvstgKo0kozd3tShPagDr1k0uAuzdDM/view
2. Future of Life Institute survey shows 59% support: https://futureoflife.org/ai-policy/poll-shows-popularity-of-ca-sb1047/

Follow Holly: x.com/ilex_ulmus
Follow Greg: x.com/GregTanaka

Join the conversation on DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Aug 22, 2024 • 1h 8min

David Shapiro Part II: Unaligned Superintelligence Is Totally Fine?

Today I’m reacting to David Shapiro’s response to my previous episode, and also to David’s latest episode with poker champion & effective altruist Igor Kurganov.

I challenge David's optimistic stance on superintelligent AI inherently aligning with human values. We touch on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks. I also respond to David's critiques of AI safety advocates.

00:00 Introduction
01:08 David's Response and Engagement
03:02 The Corrigibility Problem
05:38 Nirvana Fallacy
10:57 Prophecy and Faith-Based Assertions
22:47 AI Coexistence with Humanity
35:17 Does Curiosity Make AI Value Humans?
38:56 Instrumental Convergence and AI's Goals
46:14 The Fermi Paradox and AI's Expansion
51:51 The Future of Human and AI Coexistence
01:04:56 Concluding Thoughts

Join the conversation on DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for listening.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Aug 19, 2024 • 1h 33min

Maciej Ceglowski, Pinboard Founder, Says the Idea of Superintelligence “Eats Smart People” | Liron Reacts

Maciej Ceglowski is an entrepreneur and owner of the bookmarking site Pinboard. I’ve been a long-time fan of his sharp, independent-minded blog posts and tweets.

In this episode, I react to a great 2016 talk he gave at WebCamp Zagreb titled Superintelligence: The Idea That Eats Smart People. This talk was impressively ahead of its time, as the AI doom debate really only heated up in the last few years.

---

00:00 Introduction
02:13 Historical Analogies and AI Risks
05:57 The Premises of AI Doom
08:25 Mind Design Space and AI Optimization
15:58 Recursive Self-Improvement and AI
39:44 Arguments Against Superintelligence
45:20 Mental Complexity and AI Motivations
47:12 The Argument from Just Look Around You
49:27 The Argument from Life Experience
50:56 The Argument from Brain Surgery
53:57 The Argument from Childhood
58:10 The Argument from Robinson Crusoe
01:00:17 Inside vs. Outside Arguments
01:06:45 Transhuman Voodoo and Religion 2.0
01:11:24 Simulation Fever
01:18:00 AI Cosplay and Ethical Concerns
01:28:51 Concluding Thoughts and Call to Action

---

Follow Maciej: x.com/pinboard

Follow Doom Debates:
* youtube.com/@DoomDebates
* DoomDebates.com
* x.com/liron
* Search “Doom Debates” in your podcast player

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Aug 16, 2024 • 57min

David Shapiro Doesn't Get PauseAI | Liron Reacts

Today I’m reacting to David Shapiro’s latest YouTube video: “Pausing AI is a spectacularly bad idea―Here's why”.

In my opinion, every plan that doesn’t involve pausing frontier AGI capabilities development now is reckless, or at least every plan that doesn’t prepare to pause AGI once we see a “warning shot” that enough people agree is terrifying.

We’ll go through David’s argument point by point to see whether he makes any good points about why pausing AI might actually be a bad idea.

00:00 Introduction
01:16 The Pause AI Movement
03:03 Eliezer Yudkowsky’s Epistemology
12:56 Rationalist Arguments and Evidence
24:03 Public Awareness and Legislative Efforts
28:38 The Burden of Proof in AI Safety
31:02 Arguments Against the AI Pause Movement
34:20 Nuclear Proliferation vs. AI
34:48 Game Theory and AI
36:31 Opportunity Costs of an AI Pause
44:18 Axiomatic Alignment
47:34 Regulatory Capture and Corporate Interests
56:24 The Growing Mainstream Concern for AI Safety

Follow David:
* youtube.com/@DaveShap
* x.com/DaveShapi

Follow Doom Debates:
* DoomDebates.com
* youtube.com/@DoomDebates
* x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
