[HUMAN VOICE] "A case for AI alignment being difficult" by jessicata

LessWrong (Curated & Popular)

Exploring Alignment as a Normative Criterion and Human Values

This chapter, beginning at 04:40, explores alignment as a normative criterion for AI value systems, discussing why alignment matters, the nature of human values, and how the alignment of different intelligent beings compares.
