I find that, in talking to most well-educated people in my friend circles, they are explicitly moral relativists but implicitly utilitarian. If you push them far enough, maybe they'll admit that explicitly, but then they might fall back on relativism or something like that.

Well, they're relativists when it's convenient.

Yeah. For me, utilitarianism is an example of something that I reasoned myself out of. It sounds superficially like the right thing to do, but I think the objections to it are strong enough that I'm looking for something better.
It’s hardly news that computers are exerting ever more influence over our lives. And we’re beginning to see the first glimmers of some kind of artificial intelligence: computer programs have become much better than humans at well-defined tasks like playing chess and Go, and are increasingly called upon for messier ones, like driving cars. Once we leave the highly constrained sphere of artificial games and enter the real world of human actions, our artificial intelligences will have to make choices about the best course of action in unclear circumstances: they will have to learn to be ethical. I talk to Derek Leben about what this might mean and what kind of ethics our computers should be taught. It’s a wide-ranging discussion involving computer science, philosophy, economics, and game theory.

Support Mindscape on Patreon or Paypal.

Derek Leben received his Ph.D. in philosophy from Johns Hopkins University in 2012. He is currently an Associate Professor of Philosophy at the University of Pittsburgh at Johnstown. He is the author of Ethics for Robots: How to Design a Moral Algorithm.

PhilPapers profile
University web page
Ethics for Robots
“A Rawlsian Algorithm for Autonomous Vehicles”