
106 - Why GPT and other LLMs (probably) aren't sentient

Philosophical Disquisitions


The Problem With Stochastic Parrots

I think we have reasonably good evidence at this stage that they do actually have understanding of some sort, right? I absolutely agree. There's now very compelling evidence that LLMs are doing something that very much deserves the name of a world model. Their representations of worldly entities seem to reflect the structure of the world. And yeah, it seems like the way they accomplish that is, in some sense, by building world models.
