
Dylan Hadfield-Menell, UC Berkeley/MIT: The value alignment problem in AI
Generally Intelligent
00:00
Is This a Principal Agent Problem in Artificial Intelligence?
I'm interested in thinking about it as a type of principal-agent problem combined with, really, a communication problem. There's a question about how much bandwidth you can get about what's right and wrong, and how can you provide that information in a meaningful way? I think a lot about social media systems and content recommendation in that space. And part of it is because there is simply not as much normative data in the system.