3min chapter


Ethan Perez–Inverse Scaling, Language Feedback, Red Teaming

The Inside View

CHAPTER

KL Penalty, Just Using the KL Distance?

KL distance is a measure of how different two distributions are. You might have one distribution over next tokens from your pre-trained language model, and another from the model that's training. So, for example, maybe you're trying to get it to not generate some offensive text. That kind of objective pushes the second model to be very different. I think that's a great explanation for KL divergence. So I'd really encourage people to just have a go at it. It's something you could do in an hour with 300 examples.
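The idea described above can be sketched in a few lines: compute the KL divergence between the fine-tuned model's next-token distribution and the frozen pre-trained model's, which is the quantity a KL penalty keeps small. The vocabulary and probability values below are made up for illustration; they are not from the episode.

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for two discrete distributions over the same support.

    Terms where p_i == 0 contribute nothing, by the usual convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token distributions over a tiny 4-token vocabulary:
pretrained = [0.5, 0.3, 0.1, 0.1]  # frozen pre-trained language model
finetuned  = [0.7, 0.1, 0.1, 0.1]  # policy being fine-tuned

# The KL penalty grows as the fine-tuned model drifts from the pre-trained one.
penalty = kl_divergence(finetuned, pretrained)
print(round(penalty, 4))  # → 0.1257
```

If the two distributions are identical the penalty is zero; pushing the fine-tuned model hard away from the pre-trained one (e.g. to avoid offensive text, as in the quote) drives it up, which is exactly the tension the KL term in RLHF-style objectives regulates.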
