The problem is how to ensure that a superintelligence would be safe and beneficial. There is this big class of capability control methods. It doesn't look possible to me that we will forever manage to keep our superintelligence bottled up while at the same time preventing anybody else from building another superintelligence.
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist that dwarf human intelligence, they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.