In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys.
Nethys shares his journey into AI risk awareness, heavily influenced by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately arriving at his 99.999% P(Doom).
00:00 Nethys Introduction
04:47 The Vulnerable World Hypothesis
10:01 What’s Your P(Doom)™
14:04 Nethys’s Banger YouTube Comment
26:53 Living with High P(Doom)
31:06 Losing Access to Distant Stars
36:51 Defining AGI
39:09 The Convergence of AI Models
47:32 The Role of “Unlicensed” Thinkers
52:07 The PauseAI Movement
58:20 Lethal Intelligence Video Clip
Show Notes
Eliezer Yudkowsky’s post on “Death with Dignity”: https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an awesome new animated intro to AI risk.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.