The real science fiction part is the idea that--and I mentioned this before the program--you know, Sam Altman apologized on Twitter; he was sorry that ChatGPT was biased and had behaved in a politically inappropriate way. And the real science fiction thing is that they can't stop it. Well, with these models, again, no. It's not so much that they're not intelligent enough to be effective actors; it's that they're just sort of schizophrenic. It's that broadly schizophrenic nature of these AIs that makes them very unthreatening. If they were better at pursuing goals, then they would start to get threatening.
They operate according to rules we can never fully understand. They can be unreliable, uncontrollable, and misaligned with human values. They're fast becoming as intelligent as humans, and they're exclusively in the hands of profit-seeking tech companies. "They," of course, are the latest versions of AI, which herald, according to neuroscientist and writer Erik Hoel, a species-level threat to humanity. Listen as he tells EconTalk's Russ Roberts why we need to treat AI as an existential threat.