Sally Kohn: Are you taking the more utilitarian approach here? She says she doesn't really have a strong substantive moral theory herself. So I'm almost to the point where I'm willing to accept some kind of deontology rather than some kind of consequentialist way of thinking, but I'm not quite sure what that would be.

Kohn: Do we expect that there's going to be a moral theory that matches all of your intuitions at some point? And in addition, I might point out, I'm now being a utilitarian for some reason, by the way.
It’s hardly news that computers are exerting ever more influence over our lives. And we’re beginning to see the first glimmers of some kind of artificial intelligence: computer programs have become much better than humans at well-defined jobs like playing chess and Go, and are increasingly called upon for messier tasks, like driving cars. Once we leave the highly constrained sphere of artificial games and enter the real world of human actions, our artificial intelligences are going to have to make choices about the best course of action in unclear circumstances: they will have to learn to be ethical. I talk to Derek Leben about what this might mean and what kind of ethics our computers should be taught. It’s a wide-ranging discussion involving computer science, philosophy, economics, and game theory.

Support Mindscape on Patreon or Paypal.

Derek Leben received his Ph.D. in philosophy from Johns Hopkins University in 2012. He is currently an Associate Professor of Philosophy at the University of Pittsburgh at Johnstown. He is the author of Ethics for Robots: How to Design a Moral Algorithm.

PhilPapers profile
University web page
Ethics for Robots
“A Rawlsian Algorithm for Autonomous Vehicles”