
Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision

The Inside View


The Hidden State Is More Robust Than the Outputs

The method works like this: you run the few-shot prompt through the transformer, and then on the last example, the one you actually want to classify as true or false, you look at the hidden state. So with the incorrect answers in the prompt, did that affect the hidden state at all? It turns out not by very much: our method only changes by like one or two percent, and in fact for some reason it increases very slightly. The type of application I have in mind, at least in the very near term, would be a ChatGPT-style chatbot where you try to classify whether what it's saying is true or false.
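To make the idea concrete, here is a minimal sketch of the setup described above: extract the hidden state at the final token of a statement, once behind a correctly labeled few-shot prefix and once behind a deliberately flipped one, and check how much that hidden state actually moves. This is not the paper's code; the model ("gpt2"), the prompts, and the use of cosine similarity are placeholder assumptions for illustration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # placeholder; any model that exposes hidden states works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def last_token_hidden_state(prompt: str) -> torch.Tensor:
    """Return the last layer's hidden state at the final token of the prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states[-1] has shape (batch, seq_len, hidden_dim); take the last token
    return outputs.hidden_states[-1][0, -1]

statement = "The Eiffel Tower is in Paris. True or false?"

# Few-shot prefix with correct labels vs. deliberately incorrect labels.
correct_prefix = "2 + 2 = 4. True or false? True\nThe sky is green. True or false? False\n"
flipped_prefix = "2 + 2 = 4. True or false? False\nThe sky is green. True or false? True\n"

h_correct = last_token_hidden_state(correct_prefix + statement)
h_flipped = last_token_hidden_state(flipped_prefix + statement)

# A cosine similarity close to 1 means the hidden state barely changed,
# even though the few-shot labels were misleading.
cos = torch.nn.functional.cosine_similarity(h_correct, h_flipped, dim=0)
print(f"cosine similarity between hidden states: {cos.item():.4f}")
```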
