
Steelmanning the "Doomer" Argument: How Uncontrollable Super Intelligence (USI) could kill everyone - AI Masterclass
Artificial Intelligence Masterclass
Intro
This chapter explores the complexities of AI safety, focusing on the 'doomer' perspective on superintelligence and the speaker's accelerationist stance. It reflects on past concerns and the importance of understanding opposing arguments, and shares insights from an early AI experiment aimed at reducing suffering.