There are maybe three ways to think about regulating this that might be effective. One is to limit the size of corporations, which is a repugnant thought to me, but if I thought the human race was at stake, maybe I'd consider it. The second is to apply the standard kinds of regulation we use in other areas. And the third way, which he thinks is never going to happen, speaks to a theme that longtime listeners will recognize: you'd think people would want to restrain their urge to find poisons, but that has never been part of the human condition. That's why we have the weapons that we have...
They operate according to rules we can never fully understand. They can be unreliable, uncontrollable, and misaligned with human values. They're fast becoming as intelligent as humans, and they're exclusively in the hands of profit-seeking tech companies. "They," of course, are the latest versions of AI, which herald, according to neuroscientist and writer Erik Hoel, a species-level threat to humanity. Listen as he tells EconTalk's Russ Roberts why we need to treat AI as an existential threat.