The transition to the machine intelligence era will be a unique point in all of human history, maybe comparable to the rise of the human species. It's really a one-of-a-kind thing. I'm specifically concerned with the dangers that arise only when you have a system that reaches human-level intelligence or is superintelligent. So that's the distinctive kind of risk that I focus on in the book.
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist which dwarf human intelligence, they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.