
106 - Why GPT and other LLMs (probably) aren't sentient

Philosophical Disquisitions

CHAPTER

The Problem With Stochastic Parrots

I think we have reasonably good evidence at this stage that they do actually have understanding of some sort, right? I absolutely agree. There's now extremely compelling evidence that LLMs are doing something that very much deserves the name of a world model. Their representations of worldly entities seem to reflect the structure of the world. And yes, it seems that the way they accomplish this is, in some sense, by building world models.

