The Provocation of Large Language Models
There's another item on your list that seems directly contradictory to what I have read, specifically to this idea that all large language models are doing is guessing what the next word in a sequence is likely to be. And that list item is this: LLMs often appear to learn and use representations of the outside world. That sounds quite different from just guessing the next word. Is it, or is it not, different in a way that I just don't understand?

It turns out it's not that different. I want to say it's the big discovery, but it's a big discovery spread out over dozens of experiments over the last few years.

Can you give me a