LessWrong (Curated & Popular)

[HUMAN VOICE] "A case for AI alignment being difficult" by jessicata

Modeling Human Values and the Alignment of AI

Exploring different approaches to modeling human brains as utility maximizers, discussing human values, and the criteria for aligning AI with human values.
