
Geoffrey Miller: Evolutionary Psychology, Polyamorous Relationships, and Effective Altruism — #26
Manifold
AI and Conflicts of Interest
A lot of the AI alignment people run around saying we must align AI with human values, but they seem to mean something very peculiar by that: human values in general, as a sort of lowest common denominator of what all humans would reasonably want if they were perfectly rational and farsighted. A lot of those ideas are just incredibly naive about how human behavior works and how human conflicts of interest operate.