
Doom Debates

David Shapiro Part II: Unaligned Superintelligence Is Totally Fine?

Aug 22, 2024
01:07:56

Today I’m reacting to David Shapiro’s response to my previous episode, and also to David’s latest episode with poker champion & effective altruist Igor Kurganov.

I challenge David's optimistic stance that superintelligent AI will inherently align with human values. We dig into instrumental convergence and resource competition, and David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks. I also respond to David's critiques of AI safety advocates.

00:00 Introduction

01:08 David's Response and Engagement

03:02 The Corrigibility Problem

05:38 Nirvana Fallacy

10:57 Prophecy and Faith-Based Assertions

22:47 AI Coexistence with Humanity

35:17 Does Curiosity Make AI Value Humans?

38:56 Instrumental Convergence and AI's Goals

46:14 The Fermi Paradox and AI's Expansion

51:51 The Future of Human and AI Coexistence

01:04:56 Concluding Thoughts

Join the conversation on DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for listening.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
