
106 - Why GPT and other LLMs (probably) aren't sentient

Philosophical Disquisitions

Chapter: The Role of Speech in Human Life

In the case of an LLM, it's essentially a next-word predictor. There's no good reason to think that this, by itself, would generate sentience. Are there ways of training large language models so that you could, in some sense, trust what they're saying? That would have two parts. One, trying to get rid of the misleading incentives in the training data, which contains all of this material about consciousness that the model might imitate. And two, giving the models some capacity to actually report their internal states.
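Since the discussion turns on the claim that an LLM is "essentially a next-word predictor," here is a minimal illustrative sketch, in Python, of what that objective looks like in its simplest form: a toy bigram model (my example, not anything from the episode) that learns word-follows-word counts from a tiny corpus and generates text by greedily predicting the likeliest next word. Real LLMs replace the count table with a large neural network over tokens, but the training objective is the same.

```python
# A minimal sketch of next-word prediction: a toy bigram model that counts
# which word follows which in a tiny corpus, then generates text one
# predicted word at a time. This is an illustration, not how any real
# LLM is implemented.
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word "
    "the model imitates the training data "
    "the training data talks about consciousness"
).split()

# Count bigram transitions: counts[w] maps each word to a Counter of followers.
counts: defaultdict = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` seen in the corpus."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else "the"

# Generate a short continuation by greedy next-word prediction.
word, generated = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```

Running this prints a repetitive loop like "the model predicts the model predicts the," which also illustrates the episode's underlying point: the system is optimizing for plausible continuations of its training data, and nothing in that objective requires, or provides evidence of, sentience.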
