
Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision

The Inside View

CHAPTER

What the Paper Is Doing and What It's Not

I find our paper very exciting, but it is still important to recognize its limitations, and I think it's easy to misinterpret what it is doing. It does not show that models have beliefs in any meaningful sense right now; we were literally just finding something like a direction, or a classifier on the hidden states, that achieves good accuracy. We found other properties of this direction that suggest it's more meaningful than just answering "is this true or false?" for this particular type of input. I would say current models probably do not have beliefs in any super meaningful sense yet, so whatever we are finding, it's probably not beliefs.
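(For context: the "direction or classifier on the hidden states" mentioned above is essentially a linear probe. The sketch below is not the paper's actual unsupervised method; it uses synthetic data, a hypothetical hidden-state array, and a supervised logistic-regression probe purely to illustrate what "a direction on the hidden states that achieves good accuracy" means.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hidden_dim = 64
n_statements = 200

# Hypothetical hidden states, e.g. taken from one layer of a language model
# for a batch of true/false statements (synthetic here).
hidden_states = rng.normal(size=(n_statements, hidden_dim))
labels = rng.integers(0, 2, size=n_statements)  # 1 = "true", 0 = "false"

# Plant a weak "truth direction" so the probe has something to find.
truth_direction = rng.normal(size=hidden_dim)
hidden_states += np.outer(labels - 0.5, truth_direction)

# The probe's weight vector is the "direction" being discussed; good accuracy
# here says nothing about whether the model has beliefs in a deeper sense.
probe = LogisticRegression(max_iter=1000).fit(hidden_states, labels)
print("probe accuracy:", probe.score(hidden_states, labels))
```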
