"An artificially structured argument for expecting AGI ruin" by Rob Bensinger

AGI Ruin: A List of Lethalities

Anticipating the full space of catastrophic hazards is hard, and it's non-trivial to specify even the individual hazards we do anticipate. Powerful optimizers also tend to hack the repository of value, which makes it more difficult to achieve any desired property in STEM-level AGI. We need to get alignment right on the first critical try at operating at a dangerous level of intelligence.
