In the 1950s, John Nash described a method of showing how self-interested agents would come to certain equilibria in their interactions with each other. "Games" here doesn't just mean poker and blackjack; it can mean any situation where two or more people are interacting. In the classic prisoner's dilemma, if both of you confess to the crime, then you both get a long sentence, but if both of you were to stay quiet, you would have both gotten a short sentence. That's what's called a Pareto improvement, because it's an improvement for everyone. And so I'm defining cooperation here as behavior that produces a Pareto improvement over the outcome of pure self-interest, where self-interest is measured as a Nash equilibrium.
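To make that structure concrete, here is a minimal Python sketch of the prisoner's dilemma described above. The sentence lengths are illustrative numbers chosen for this example, not figures from the episode; the code simply checks which strategy profile is a Nash equilibrium and confirms that mutual silence is a Pareto improvement over it.

```python
# Illustrative prisoner's dilemma with hypothetical sentence lengths (years).
# Payoffs are negative sentences, so each player prefers higher values.
# Strategies: 0 = stay quiet, 1 = confess.
payoffs = {
    (0, 0): (-1, -1),    # both stay quiet: short sentences
    (0, 1): (-10, 0),    # you stay quiet, partner confesses
    (1, 0): (0, -10),    # you confess, partner stays quiet
    (1, 1): (-5, -5),    # both confess: long sentences
}

def is_nash(profile):
    """A profile is a Nash equilibrium if neither player can do
    better by unilaterally switching strategies."""
    a, b = profile
    return (payoffs[(a, b)][0] >= payoffs[(1 - a, b)][0] and
            payoffs[(a, b)][1] >= payoffs[(a, 1 - b)][1])

def pareto_improves(p, q):
    """Profile p Pareto-improves on q if everyone does at least as
    well under p and someone does strictly better."""
    pairs = list(zip(payoffs[p], payoffs[q]))
    return all(x >= y for x, y in pairs) and any(x > y for x, y in pairs)

nash = [p for p in payoffs if is_nash(p)]
print("Nash equilibria:", nash)                       # [(1, 1)]: both confess
print("Pareto improvement over (confess, confess)?",
      pareto_improves((0, 0), (1, 1)))                # True: both stay quiet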
It’s hardly news that computers are exerting ever more influence over our lives. And we’re beginning to see the first glimmers of some kind of artificial intelligence: computer programs have become much better than humans at well-defined jobs like playing chess and Go, and are increasingly called upon for messier tasks, like driving cars. Once we leave the highly constrained sphere of artificial games and enter the real world of human actions, our artificial intelligences are going to have to make choices about the best course of action in unclear circumstances: they will have to learn to be ethical. I talk to Derek Leben about what this might mean and what kind of ethics our computers should be taught. It’s a wide-ranging discussion involving computer science, philosophy, economics, and game theory.

Support Mindscape on Patreon or Paypal.

Derek Leben received his Ph.D. in philosophy from Johns Hopkins University in 2012. He is currently an Associate Professor of Philosophy at the University of Pittsburgh at Johnstown. He is the author of Ethics for Robots: How to Design a Moral Algorithm.

PhilPapers profile
University web page
Ethics for Robots
“A Rawlsian Algorithm for Autonomous Vehicles”