How to Overanthropomorphize a Language Model
Language models don't typically produce a reference; there's no settled sense of ground truth behind what they say, which is why people use the word "hallucination." You can test this yourself: ask the model to produce lines from books, for example, "What's the first line of George Orwell's 1984?" I think it's very tempting but very dangerous to anthropomorphize these systems. If we say "ask the language model," it makes it sound like you're talking to a person, and I think that gives a very misleading intuition about what it's actually doing.
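As a rough illustration of the quotation experiment described above, here is a minimal Python sketch that prompts a small open model for the first line of 1984. The choice of model ("gpt2" via the Hugging Face transformers pipeline) and the prompt wording are assumptions for illustration, not anything specified in the episode.

# A minimal sketch of the experiment described above: prompting a small
# open model for the first line of a well-known book. The model name
# ("gpt2") and prompt wording are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The first line of George Orwell's 1984 is:"
result = generator(prompt, max_new_tokens=30, do_sample=False)

print(result[0]["generated_text"])
# A small model will often continue with a plausible-sounding but wrong
# line -- the "hallucination" behaviour discussed above -- because it is
# predicting likely next tokens, not looking the quotation up in a reference.

Running a sketch like this makes the speaker's point concrete: the output reads fluently, which tempts us to treat the model like a person answering a question, even though it is only completing text.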