"An artificially structured argument for expecting AGI ruin" by Rob Bensinger

LessWrong (Curated & Popular)

The Central AI Alignment Problem

Human operators are fallible, breakable, and manipulable. When you have a wrong belief, reality hits back at your wrong predictions. Capabilities generalise further than alignment once capabilities start to generalise far. "A central AI alignment problem: capabilities generalisation and the sharp left turn" expands on a point from AGI Ruin.
