Superintelligence might become real before there are competitors. How long will it take to go from something less than human to something radically superintelligent? Another variable is the typical gap between different projects striving to develop the same technology. In a fast takeoff scenario, where you go from below-human to radically superhuman levels of intelligence in a very short period of time, like days or weeks, it is likely that at first only one project will have achieved radical intelligence.
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist which dwarf human intelligence, they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, the strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.