
Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision

The Inside View


Unsupervised Modeling

The method is completely unsupervised, so you don't require more annotation, and you can do what you might call mind reading: seeing what the model knows. The hope is that we can actually find latent representations of something like truth, to help us identify whether text is true or false even when we can't evaluate it ourselves. I think deep learning representations often have useful structure to them, and features are often represented in a simple way, like a linear direction. It should be easier to access a feature if it's linearly represented in activation space.
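To make the idea concrete, here is a minimal sketch of an unsupervised linear probe in the spirit of Contrast-Consistent Search (the method from Burns et al.'s paper discussed in the episode). It assumes you have already extracted hidden states for contrast pairs (a statement phrased as true and as false); the variable names, hyperparameters, and training loop are illustrative, not the authors' exact implementation.

```python
# Sketch of an unsupervised "truth direction" probe (CCS-style).
# x_pos[i]: activation for "statement i is true"; x_neg[i]: for "statement i is false".
import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, 1)

    def forward(self, x):
        # Probability that the statement encoded by x is true.
        return torch.sigmoid(self.linear(x))

def ccs_loss(p_pos, p_neg):
    # Consistency: p(true) and p(false) for the same statement should sum to 1.
    consistency = (p_pos - (1 - p_neg)) ** 2
    # Confidence: penalize the degenerate solution p_pos = p_neg = 0.5.
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()

def train_probe(x_pos, x_neg, epochs=1000, lr=1e-3):
    # x_pos, x_neg: (n_examples, hidden_dim) activation tensors.
    # Normalizing each set separately removes the superficial
    # "this prompt ends in True/False" signal, so the probe has to
    # rely on other structure in the representations.
    x_pos = (x_pos - x_pos.mean(0)) / (x_pos.std(0) + 1e-8)
    x_neg = (x_neg - x_neg.mean(0)) / (x_neg.std(0) + 1e-8)

    probe = LinearProbe(x_pos.shape[1])
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ccs_loss(probe(x_pos), probe(x_neg))
        loss.backward()
        opt.step()
    return probe
```

Note that no truth labels are used anywhere: the probe is trained only on the logical consistency between a statement and its negation, which is what makes the approach unsupervised, and the probe being linear reflects the point above about features that are linearly represented in activation space being easier to access.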
