
Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision

The Inside View

CHAPTER

Unsupervised Modeling

The method is completely unsupervised, so you don't require more annotation, and you can do what you might call mind reading: seeing what the model knows. The hope is that we can actually find latent representations of something like truth, to help us identify whether text is true or false even when we can't evaluate it ourselves. I think deep learning representations often have useful structure to them, and they're often represented in a simple way, for example linearly. It should be easier to access a feature if it's linearly represented in activation space, he says.
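To make the idea concrete, here is a minimal sketch in the spirit of the paper's contrast-consistent search: fit a linear probe on the model's hidden states for each statement and its negation, using only a consistency loss and a confidence loss, with no truth labels. All names, shapes, and hyperparameters below are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    """Maps a hidden state to a probability that the statement is true."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, 1)

    def forward(self, h):
        return torch.sigmoid(self.linear(h))

def ccs_loss(p_pos, p_neg):
    # Consistency: p(statement) and p(negation) should sum to ~1.
    consistency = (p_pos - (1 - p_neg)) ** 2
    # Confidence: discourage the degenerate solution p = 0.5 everywhere.
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()

def train_probe(h_pos, h_neg, epochs=1000, lr=1e-3):
    """h_pos / h_neg: hidden states for each statement and its negation,
    shape (num_examples, hidden_dim). Note: no truth labels are used."""
    probe = LinearProbe(h_pos.shape[1])
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ccs_loss(probe(h_pos), probe(h_neg))
        loss.backward()
        opt.step()
    return probe
```

At inference time a natural score for a statement is the average of p(statement) and 1 − p(negation); this sketch also omits the per-class normalization of hidden states described in the paper.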
