This Sydney, New York Times reporter interchange reads, to be blunt about it, like the transcript of a psychotic person. I think that at this point these systems are so good that people who haven't interacted with them often think this just can't be real, or find it very strange. And one of the first things they did with the system to prevent these cases of misalignment was to limit how long the conversations could go on, and also to limit self-reference.
They operate according to rules we can never fully understand. They can be unreliable, uncontrollable, and misaligned with human values. They're fast becoming as intelligent as humans, and they're exclusively in the hands of profit-seeking tech companies. "They," of course, are the latest versions of AI, which herald, according to neuroscientist and writer Erik Hoel, a species-level threat to humanity. Listen as he tells EconTalk's Russ Roberts why we need to treat AI as an existential threat.