
Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision

The Inside View


Is This Input True or False?

I think if you literally have GPT-n, and you give it some superhuman input, and you want to know: does the model represent, in a simple way in its hidden states, whether this input is actually true or false, even if humans can't tell? I'm relatively more optimistic about there being some way of training the model, or prompting the model, so that it actively thinks about whether this input is true or false. It seems much more plausible to me that you can get it to think about this text: maybe there's, like, a 1% chance that this text was generated by an aligned AI system, which could be useful, for the sake of getting a little lower perplexity, to simulate that AI system.
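The "does the model represent whether this input is true or false in its hidden states" idea Collin describes is what the Contrast-Consistent Search (CCS) method from his paper probes for. A minimal NumPy sketch of the CCS objective, using toy random hidden states in place of real model activations (the probe shape, data, and initialization here are all illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def probe(theta, h):
    # Linear probe mapping a hidden state to P(statement is true).
    return sigmoid(h @ theta)

def ccs_loss(theta, h_pos, h_neg):
    """CCS-style objective on contrast pairs ("X is true" / "X is false"):
    - consistency: p(x+) should equal 1 - p(x-), since exactly one framing is true
    - confidence: penalize the degenerate answer p = 0.5 for both framings
    """
    p_pos = probe(theta, h_pos)
    p_neg = probe(theta, h_neg)
    consistency = (p_pos - (1.0 - p_neg)) ** 2
    confidence = np.minimum(p_pos, p_neg) ** 2
    return float(np.mean(consistency + confidence))

# Toy stand-ins for hidden states of 16 contrast pairs, dimension 8.
d = 8
h_pos = rng.normal(size=(16, d))
h_neg = rng.normal(size=(16, d))
theta = rng.normal(size=d)
loss = ccs_loss(theta, h_pos, h_neg)
```

The point is that the objective is unsupervised: it never uses truth labels, only the logical constraint that a statement and its negation can't both be true, which is why it could in principle still apply to superhuman inputs humans can't label.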

