Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision

The Inside View

CHAPTER

Is This Input True or False?

I think if you literally have GPT-n, and you give it some superhuman input, and you want to know whether the model represents, in a simple way in its hidden states, whether that input is actually true or false even when humans can't tell: I'm relatively more optimistic that there is some way of training the model, or prompting the model, so that it actively thinks about whether the input is true or false. It seems much more plausible to me that you can get it to think something like, "maybe there's a 1% chance that this text was generated by an aligned AI system," and then, for the sake of getting lower perplexity, simulate that AI system.
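The idea of reading truth off a model's hidden states without supervision is the subject of Burns's "Discovering Latent Knowledge" work (the CCS method). As a rough illustration only, not his actual implementation, here is a toy sketch: we generate synthetic "hidden states" for statement/negation pairs around a hypothetical `truth_dir` direction (an assumption of this toy setup), then fit a linear probe with the CCS-style losses, a consistency term pushing p(statement) + p(negation) toward 1 and a confidence term discouraging the degenerate p = 0.5 solution. The real method works on actual transformer activations and includes normalization steps omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
d_hidden, n_pairs = 16, 256

# Hypothetical "truth direction" in hidden space (toy assumption, not real activations)
truth_dir = rng.normal(size=d_hidden)
truth_dir /= np.linalg.norm(truth_dir)

labels = rng.integers(0, 2, size=n_pairs)            # 1 = statement is true
signs = np.where(labels == 1, 1.0, -1.0)
x_pos = signs[:, None] * truth_dir + rng.normal(scale=0.3, size=(n_pairs, d_hidden))
x_neg = -signs[:, None] * truth_dir + rng.normal(scale=0.3, size=(n_pairs, d_hidden))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Linear probe trained by plain gradient descent on the CCS-style loss
w = rng.normal(scale=0.1, size=d_hidden)
b = 0.0
for _ in range(500):
    p_pos = sigmoid(x_pos @ w + b)
    p_neg = sigmoid(x_neg @ w + b)
    cons = p_pos + p_neg - 1.0                       # consistency residual
    conf = np.minimum(p_pos, p_neg)                  # confidence term
    # hand-derived gradients of mean((cons)^2 + (conf)^2) w.r.t. each probability
    g_pos = 2 * cons + 2 * conf * (p_pos <= p_neg)
    g_neg = 2 * cons + 2 * conf * (p_neg < p_pos)
    gz_pos = g_pos * p_pos * (1 - p_pos)             # chain rule through sigmoid
    gz_neg = g_neg * p_neg * (1 - p_neg)
    w -= 1.0 * (gz_pos @ x_pos + gz_neg @ x_neg) / n_pairs
    b -= 1.0 * (gz_pos.sum() + gz_neg.sum()) / n_pairs

pred = (sigmoid(x_pos @ w + b) > 0.5).astype(int)
# the unsupervised loss cannot identify which direction means "true", so take the better sign
acc = max((pred == labels).mean(), (pred != labels).mean())
print(f"unsupervised probe accuracy: {acc:.2f}")
```

Note the final sign flip: nothing in the unsupervised objective distinguishes "true" from "false", so the learned probe is only identified up to negation, which is also a known property of the real method.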
