“What Is The Alignment Problem?” by johnswentworth

LessWrong (Curated & Popular)

Intro

This chapter explores the alignment problem for future AGIs, focusing on how such systems might be aligned with human values. Through illustrative 'toy problems,' it highlights the difficulty of specifying the problem and the nuances of defining alignment in artificial intelligence.
