We have a big, unsolved technical problem. We need to figure out how to engineer the motivation system of an AI so that it would agree with even one human. The best current thinking about how you go about this is to adopt maybe some form of indirect normativity. Rather than specifying an end state, you might specify a process whereby the AI could figure out what it is that you were trying to refer to.
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist which dwarf human intelligence, they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.