
Natasha Jaques

TalkRL: The Reinforcement Learning Podcast


The Reward Model Isn't Perfect, Right?

The accuracy of the reward model on validation data is, like, in the 70s or something. So you really overfit to that reward model. It's not clear that it's going to be comprehensive enough to describe good outputs. But at the end of the day, your loss is still being propagated into your model by increasing or decreasing token-level probabilities.
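The token-level mechanics alluded to here can be sketched as a REINFORCE-style update, where the reward model's scalar score (however imperfect) multiplies the gradient of the sampled token's log-probability. This is a minimal toy sketch, not the exact method discussed; all numbers, the vocabulary size, and the reward value are purely illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy "policy": logits over a 4-token vocabulary.
logits = np.array([0.5, 1.0, -0.3, 0.2])
probs = softmax(logits)
token = 1  # the token that was sampled in the rollout

# Scalar score from the (imperfect) reward model -- hypothetical value.
reward = 0.7

# REINFORCE loss for one token: L = -reward * log p(token).
# Its gradient w.r.t. the logits is reward * (probs - one_hot(token)),
# so a gradient-descent step raises the sampled token's probability
# when the reward is positive and lowers it when negative.
one_hot = np.zeros_like(probs)
one_hot[token] = 1.0
grad = reward * (probs - one_hot)

lr = 0.1
new_logits = logits - lr * grad
new_probs = softmax(new_logits)
```

With a positive reward, the sampled token's probability goes up after the step; flipping the reward's sign would push it down instead. This is the sense in which even a noisy reward signal ultimately acts on the model only through token-level probabilities.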

