Ethan Perez–Inverse Scaling, Language Feedback, Red Teaming

The Inside View

KL Penalty: Just Using the KL Divergence?

KL divergence is a measure of how different two distributions are. You might have one distribution over next tokens from your pre-trained language model, and another from the model you're training. And so, for example, maybe you're trying to get it to not generate some offensive text. That kind of pushes the second model to be very different. I think that's a great explanation of KL divergence. So I'd really encourage people to just have a go at it. It's something you could do in an hour with 300 examples.
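To make that concrete, here is a minimal sketch (not from the episode) of the quantity being described: the KL divergence between a fine-tuned model's next-token distribution and the pre-trained model's. The function name and the toy four-token distributions are hypothetical, chosen only to illustrate the formula KL(p || q) = Σᵢ pᵢ log(pᵢ / qᵢ).

```python
# Minimal sketch of the KL penalty idea: measure how far a fine-tuned
# model's next-token distribution has drifted from the pre-trained one.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) = sum_i p_i * log(p_i / q_i), in nats.

    eps guards against log(0) for tokens with zero probability.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Hypothetical next-token distributions over a 4-token vocabulary.
pretrained = [0.40, 0.30, 0.20, 0.10]  # base language model
finetuned  = [0.05, 0.15, 0.30, 0.50]  # model pushed away from some outputs

# A larger value means the fine-tuned model has drifted further from
# the pre-trained model; an RLHF-style KL penalty discourages this.
print(kl_divergence(finetuned, pretrained))
```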

