In the aftermath of World War II, there were some interesting models, which I had not been aware of, for dealing with nuclear power. The hope was that you could perhaps persuade the Soviet Union and other key nations to put atomic energy under international control. This was a fairly serious proposal that was actually floated, and with quite high-level backing. In the end it didn't work, partly because Stalin didn't really trust the Western powers. But it does remind us that if superintelligence could be developed sooner rather than later, you might care about where it originates.
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist that dwarf human intelligence, they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.