If there were absolutely no way for this AI in the box to interact with the rest of the world, then maybe it would be completely inert. So at some point, you get a new box, but yeah, a really smart box. And depending on your moral views, you might care about what happens inside the box for its own sake. We give some interesting examples, such as: the superintelligence could hack into the financial system and bribe a real flesh-and-blood human to do things that would help it, maybe even without the person's knowledge.
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines that dwarf human intelligence come into existence, they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.