There have been discussions on the GPT side about whether there's consciousness inside these language models. People do anthropomorphize robots, and it's difficult not to project some level of sentience onto them. I don't know much about how these large language models really work. It feels to me like there's something about truth or emotions that's just a very different kind of knowledge, something absolute.