We are at the point where we are finally making things that are arguably as intelligent as a human being. And one could easily see that capability growing, and fundamentally, intelligence is the most dangerous thing in the universe. We don't know if it's going to be a problem in five years or fifty years, but I think the concern over this is something we will have to deal with for a long time.
They operate according to rules we can never fully understand. They can be unreliable, uncontrollable, and misaligned with human values. They're fast becoming as intelligent as humans, and they're exclusively in the hands of profit-seeking tech companies. "They," of course, are the latest versions of AI, which herald, according to neuroscientist and writer Erik Hoel, a species-level threat to humanity. Listen as he tells EconTalk's Russ Roberts why we need to treat AI as an existential threat.