"I can't really expect AI to have norms or ethics," he says. Some argue that if AI develops a kind of sense, it could evolve norms of its own. But why would such norms be good for humans? He adds that even if we got that lucky, even if AIs happened to evolve these norms, we would still have to shape them intentionally and carefully.
The future of AI keeps Zvi Mowshowitz up at night. He also wonders why so many smart people seem to think that AI is more likely to save humanity than destroy it. Listen as Mowshowitz talks with EconTalk's Russ Roberts about the current state of AI, the pace of AI's development, and where--unless we take serious action--the technology is likely to end up (and that end is not pretty). They also discuss Mowshowitz's theory that the shallowness of the AI extinction-risk discourse results from the assumption that you have to be either pro-technological progress or against it.