"An artificially structured argument for expecting AGI ruin" by Rob Bensinger

The Importance of Goal-Oriented Behavior

If you don't try at all to instill the right goals into your STEM-level AGI systems, and don't otherwise try to avert these default instrumental pressures, then your systems will be catastrophically dangerous. Even if we could limit AGI access to people who would never deliberately use an AGI to try to do evil, AGI systems' own default incentives make them extremely dangerous. This issue blocks our ability to safely use AGI for pivotal acts as well.
