The Reward Model Is a Discriminator in a Reward Learning Loop
In a reinforcement learning loop, you have a policy that decides what the agent should do next, and a reward signal that scores how good each action was. In this setup, the reward model is the thing producing that reward: it scores the language model's outputs according to learned human preferences, and the language model itself acts as the policy. The fine-tuning of the policy then runs as an automated reinforcement learning loop, but you still need humans to generate enough preference data to train the reward model in the first place.
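To make the two stages concrete, here is a minimal toy sketch of that loop in PyTorch. It is not anyone's production RLHF code: the model sizes, the synthetic preference pairs, and the REINFORCE-style update are all simplifying assumptions made for illustration. Stage one trains a small reward model on human-labelled preference pairs with a Bradley-Terry loss; stage two freezes it and uses its scores as the automated reward for updating a stand-in "policy".

```python
# Toy sketch of the reward-model / policy loop described above.
# Assumes PyTorch; all names and sizes here are hypothetical.

import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM = 16, 32

# --- Reward model: maps a token sequence to a single scalar score ---------
class RewardModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.score = nn.Linear(DIM, 1)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        pooled = self.embed(tokens).mean(dim=1)
        return self.score(pooled).squeeze(-1)   # one reward per sequence

# --- Stage 1: train the reward model on human preference pairs ------------
reward_model = RewardModel()
rm_opt = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

# Hypothetical human-labelled data: `chosen` was preferred over `rejected`.
chosen = torch.randint(0, VOCAB, (64, 8))
rejected = torch.randint(0, VOCAB, (64, 8))

for _ in range(100):
    # Bradley-Terry style loss: the preferred sequence should score higher.
    margin = reward_model(chosen) - reward_model(rejected)
    loss = -torch.log(torch.sigmoid(margin)).mean()
    rm_opt.zero_grad(); loss.backward(); rm_opt.step()

# --- Stage 2: fine-tune the "policy" against the frozen reward model ------
policy_logits = nn.Parameter(torch.zeros(VOCAB))   # stand-in language model
pi_opt = torch.optim.Adam([policy_logits], lr=1e-1)

for _ in range(200):
    dist = torch.distributions.Categorical(logits=policy_logits)
    tokens = dist.sample((32, 8))                  # sample "responses"
    with torch.no_grad():
        rewards = reward_model(tokens)             # automated reward signal
    # REINFORCE: raise log-prob of samples in proportion to their reward.
    log_prob = dist.log_prob(tokens).sum(dim=1)
    pi_loss = -(log_prob * (rewards - rewards.mean())).mean()
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()
```

Once stage one is done, stage two never queries a human again; the only place human effort enters is in producing the chosen/rejected pairs the reward model learns from.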