Is it a problem if an artificially intelligent system does things that seem ethical to us, but it can't articulate why it's doing them? This is an issue for deep learning systems. There's a Kantian position here, getting back to the very first thing we talked about, which says it's not a real decision unless you can tell me why you did it. Otherwise, you're just sort of an animal or a child acting on instinct.
It’s hardly news that computers are exerting ever more influence over our lives, and we’re beginning to see the first glimmers of some kind of artificial intelligence: computer programs have become much better than humans at well-defined tasks like playing chess and Go, and are increasingly called upon for messier ones, like driving cars. Once we leave the highly constrained sphere of artificial games and enter the real world of human actions, our artificial intelligences are going to have to make choices about the best course of action in unclear circumstances: they will have to learn to be ethical. I talk to Derek Leben about what this might mean and what kind of ethics our computers should be taught. It’s a wide-ranging discussion involving computer science, philosophy, economics, and game theory.

Support Mindscape on Patreon or Paypal.

Derek Leben received his Ph.D. in philosophy from Johns Hopkins University in 2012. He is currently an Associate Professor of Philosophy at the University of Pittsburgh at Johnstown. He is the author of Ethics for Robots: How to Design a Moral Algorithm.

PhilPapers profile
University web page
Ethics for Robots
“A Rawlsian Algorithm for Autonomous Vehicles”