The implementation of a social credit score system could erode personal autonomy, combining a nanny state with Big Brother surveillance. While this prospect frightens many in the West, the system could be argued to be morally flawed even under a benign dictator, and such an ideal ruler is hypothetical in any case.
It seems obvious that moral artificial intelligence would be better than the alternative. But psychologist Paul Bloom of the University of Toronto argues that moral AI is not just a meaningless goal but a bad one. Listen as Bloom and EconTalk's Russ Roberts have a wide-ranging conversation about the nature of AI, the nature of morality, and the value of ensuring that we mortals can keep doing stupid or terrible things.