
96. Jan Leike - AI alignment at OpenAI

Towards Data Science


Reward Modelling in Reinforcement Learning

Reward modelling is one of our staple techniques for doing alignment today, and I expect it'll be a very important building block for, like, future alignment solutions. It's basically a general-purpose technique for solving problems that we don't have a procedural reward for. Instead of having a reward signal that comes from the environment, we get it from the human. The model essentially understands: what is it that the human wants, like, what does good behaviour look like? An important thing here is, like, the model doesn't even have to understand how to get good behaviour.
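
A minimal sketch of the idea described above: instead of a procedural reward from the environment, a reward model is trained from human preference comparisons between trajectories (a Bradley-Terry style loss), and the learned scalar reward can then stand in for the environment reward in RL. All names and the toy data here are illustrative assumptions, not a description of OpenAI's actual implementation.

```python
# Toy reward modelling from human preference comparisons (Bradley-Terry style).
# Assumes trajectories are summarised as fixed-size feature vectors and a human
# has labelled which trajectory in each pair they prefer.
import torch
import torch.nn as nn


class RewardModel(nn.Module):
    """Maps a trajectory representation to a scalar reward."""

    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


def preference_loss(model, preferred, rejected):
    """The preferred trajectory should receive the higher predicted reward."""
    return -torch.nn.functional.logsigmoid(
        model(preferred) - model(rejected)
    ).mean()


if __name__ == "__main__":
    obs_dim = 8
    model = RewardModel(obs_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Fake "human" preferences: prefer trajectories with larger feature values.
    preferred = torch.randn(256, obs_dim) + 0.5
    rejected = torch.randn(256, obs_dim)

    for step in range(200):
        loss = preference_loss(model, preferred, rejected)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The trained model can now replace the environment reward in an RL loop.
    print("final preference loss:", loss.item())
```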

