Science and engineering are really different from most of the problems we face. And it's quite possible that a superintelligent machine would be able to find a way to match its preference function to reality. So an AI could propose to do all of the same things, except perhaps develop them much faster, since it thinks on digital timescales rather than biological timescales. With advanced molecular nanotechnology, the ability to construct self-replicating, molecularly precise robotic machinery, that alone might be sufficient to take over the world. But at least in 20,000 years, if we invest a lot in science and technology, we could develop medical technology limited by the
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist which dwarf human intelligence, they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, the strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.