Russ Roberts: I don't understand where the whole idea of preferences comes from. Why would this really smart machine have preferences, values, and motivations other than what we've told it to do? You seem to suggest it could develop its own independently of what it's doing.

Nick Bostrom: No, no, no. Well, I agree that it wouldn't necessarily have anything resembling human-like emotions and drives and all of that. Nevertheless, if you have an intelligent system, a very general kind of such system is one that seeks to maximize a utility function.
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist that dwarf human intelligence, they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, the strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.