This chapter explores the challenge of building AI that aligns with human desires, asking whether AI should have a mind of its own or simply follow human instructions. The speakers also delve into the complexities of AI decision-making and the role of human intelligence in making ethical choices.
It seems obvious that moral artificial intelligence would be better than the alternative. But psychologist Paul Bloom of the University of Toronto thinks moral AI is not just a meaningless goal but a bad one. Listen as Bloom and EconTalk's Russ Roberts have a wide-ranging conversation about the nature of AI, the nature of morality, and the value of ensuring that we mortals can keep doing stupid or terrible things.