
Geoffrey Miller: Evolutionary Psychology, Polyamorous Relationships, and Effective Altruism — #26

Manifold


AI and Conflicts of Interest

A lot of the AI alignment people run around saying we must align AI with human values, but they seem to mean something very peculiar by that: human values in general, as a sort of lowest common denominator of what all humans would reasonably want if they were perfectly rational and farsighted. A lot of those ideas are just incredibly naive about how human behavior works and how human conflicts of interest operate.
