
106 - Why GPT and other LLMs (probably) aren't sentient

Philosophical Disquisitions


The Role of Speech in Human Life

In the case of an LLM, it's essentially a next-word predictor. There's no good reason to think that, by itself, would generate sentience. Are there ways of training large language models so that you could, in some sense, trust what they're saying? That would have two parts: one, trying to get rid of the misleading incentives in the training data, which contain all of this material about consciousness that the model might imitate; and two, giving the models some capacity to actually report their internal states.
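To make the "next-word predictor" point concrete, here is a minimal sketch of what an LLM's raw output actually is: a probability distribution over the next token, nothing more. It assumes the Hugging Face transformers library, PyTorch, and the public gpt2 checkpoint; the prompt and the top-5 cutoff are illustrative choices, not anything from the episode.

```python
# A minimal sketch: an LLM's forward pass yields only a distribution
# over the next token (assumes transformers + torch are installed).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The cat sat on the"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits shape: (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The model's entire output for this step is a probability
# distribution over which token comes next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r:>10}  p={prob.item():.3f}")
```

Generation is just this step repeated: sample a token from the distribution, append it to the prompt, and predict again. The point in the episode is that nothing in this loop obviously amounts to sentience, however fluent the resulting text about consciousness may be.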

