
Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision

The Inside View


Is There a Difference Between Predicting and Prompting?

Pre-training data isn't literally just what humans say; it's also, for example, what would be said in the news. You could imagine training a model to predict news articles conditioned on dates, and it would then learn how to do that from real-world events. In general, prompting is kind of hacky, but I don't think it will be enough to actually get models to be truthful in general, one way or another.
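A minimal sketch of the "predict news articles conditioned on dates" idea mentioned above, assuming a Hugging Face GPT-2 model and a toy list of (date, article) pairs: the date is simply prepended to each article and the model is trained with ordinary next-token prediction. The model choice, data, and date format are illustrative assumptions, not anything specified in the episode.

```python
# Illustrative sketch (assumptions, not from the episode): condition a causal
# language model on dates by prepending the date to each article, then train
# with standard next-token prediction.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.train()

# Hypothetical (date, article) pairs standing in for a real news corpus.
examples = [
    ("2023-01-15", "Markets opened higher today after the central bank held rates."),
    ("2023-02-03", "Researchers announced a new result in protein folding."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

for date, article in examples:
    # Conditioning: place the date before the article text.
    text = f"[{date}] {article}"
    batch = tokenizer(text, return_tensors="pt")
    # Labels equal inputs, so the loss is next-token prediction over the
    # article tokens that follow the date prefix.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```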

