
"An artificially structured argument for expecting AGI ruin" by Rob Bensinger

The Importance of Premises 1 to 3

If no one STEM-level AGI is capable of unilaterally killing humans, I still expect early STEM-level AGIs to be able to coordinate to do so. And if they don't terminally value human empowerment, and coordination is required to disempower humans, then I think they will in fact coordinate to disempower humans. This scenario was noted by Eliezer Yudkowsky at links here and here. If things were going well, success shouldn't look like such a moving target. Bad outcomes look over-determined, which strengthens the case for thinking we're in a dire situation, one that calls for an extraordinary response.
