Once you have a machine intelligence that reaches roughly human level, or maybe somewhat above human level, you might get a very rapid feedback loop. So if you have a fast transition from human-level machine intelligence to superintelligence, then it's likely that you will have only one superintelligence at first, before any other system is even roughly comparable. And this first superintelligence might be very powerful. It could develop all kinds of new technologies very quickly and strategize and plan. But for reasons that I go into in some depth in the book, it looks really hard to engineer a seed AI such that it will result in a superintelligence with human-friendly preferences.
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist that dwarf human intelligence, they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.