"An artificially structured argument for expecting AGI ruin" by Rob Bensinger

LessWrong (Curated & Popular)

The Probability of AGI Ruin

In the modern world, significant amounts of infrastructure can be deployed with just an internet connection. I expect at least one early STEM-level AGI to be capable of unilaterally killing humans if it wants to. Call such an AGI X. The existence of other misaligned AGIs doesn't give X any incentive to avoid killing humans. In fact, they'll have an incentive to help X if they can, to reduce the number of potential competitors and threats.
