
106 - Why GPT and other LLMs (probably) aren't sentient
Philosophical Disquisitions
The Problem With Stochastic Parrots
I think we have reasonably good evidence at this stage that they do actually have understanding of some sort, right?

I absolutely agree. There's now very compelling evidence that LLMs are doing something that very much deserves the name of a world model. Their representations of worldly entities seem to reflect the structure of the world. And yeah, it seems like the way they accomplish that is, in some sense, by building world models.