A trolley problem is a way of comparing what different moral theories might say: a utilitarian, for example, will say you should always kill one to save five. Harry Truman's decision to drop an atomic bomb on Japan was very much like a trolley problem, he says. The philosopher Robert Nozick made a big deal of the distinction: it's wrong of me to push you in front of a train, but not wrong of me to allow you to be hit by a train.
It’s hardly news that computers are exerting ever more influence over our lives. And we’re beginning to see the first glimmers of some kind of artificial intelligence: computer programs have become much better than humans at well-defined jobs like playing chess and Go, and are increasingly called upon for messier tasks, like driving cars. Once we leave the highly constrained sphere of artificial games and enter the real world of human actions, our artificial intelligences are going to have to make choices about the best course of action in unclear circumstances: they will have to learn to be ethical. I talk to Derek Leben about what this might mean and what kind of ethics our computers should be taught. It’s a wide-ranging discussion involving computer science, philosophy, economics, and game theory.

Support Mindscape on Patreon or Paypal.

Derek Leben received his Ph.D. in philosophy from Johns Hopkins University in 2012. He is currently an Associate Professor of Philosophy at the University of Pittsburgh at Johnstown. He is the author of Ethics for Robots: How to Design a Moral Algorithm.

PhilPapers profile
University web page
Ethics for Robots
“A Rawlsian Algorithm for Autonomous Vehicles”
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.