This chapter explores the ethical implications of artificial intelligence and its decision-making capabilities. It takes up questions such as whether morality and autonomy are compatible, what threats moral AI systems might pose, and the tension between morality and freedom in AI development.
It seems obvious that moral artificial intelligence would be better than the alternative. But psychologist Paul Bloom of the University of Toronto argues that moral AI is not just a meaningless goal but a bad one. Listen as Bloom and EconTalk's Russ Roberts have a wide-ranging conversation about the nature of AI, the nature of morality, and the value of ensuring that we mortals can keep doing stupid or terrible things.