The pie is so enormously large that it would be easy to give each person, like, a whole galaxy to use for their own benefit. And there would still be a lot of galaxies left over for whatever your, like, pet project were. We're not so good at splitting up. It's not our strong suit as human beings. But in general, I think that [matters for] not just resolving the problem of existential risk from AI, but other really big existential risks arising from other possible technologies in this century.
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist which dwarf human intelligence, they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.