In many ways, I feel like superintelligence in your story is not really limited by the laws of physics in the full sense. That's not a question that is amenable to superintelligence. Human values are complex things. This is something we don't know how to do. Maybe we could create an AI today that would want to maximize the number of digits of pi that are calculated. Some very simple goal like that would maybe be within our current reach to program. But we couldn't make an AI that would maximize justice or love or artistic beauty, because these are complex human concepts that we don't yet know how to represent.
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist which dwarf human intelligence, they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.