RL With Language Models
The idea in our work was that instead of viewing RL with language models as this bandit problem where you optimize for the highest-reward utterance, you can instead take it to the other extreme and view every single token as a decision. And the way this was done was actually... Was this across a series of dialogue exchanges, or tokens within a single utterance or turn? Yeah, so it's across the entire dialogue. So if the next token is the model's choice, then it knows that, and if it ends the line, then it knows that the next token will come from the human.
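To make the token-level framing concrete, here is a minimal sketch of one dialogue rollout under that view. This is not the speakers' actual code: `lm_policy`, `human_simulator`, and `END_OF_TURN` are hypothetical names, and the idea that an end-of-line token hands control to the human is taken directly from the description above.

```python
# Hypothetical sketch of the token-level MDP view of RL with a language
# model: every model token is an action, and tokens typed by the human
# are part of the environment's transition, not the policy's decisions.

END_OF_TURN = "\n"  # assumption: emitting this token ends the model's turn

def rollout(lm_policy, human_simulator, dialogue, max_model_tokens=200):
    """Roll out one episode across the entire dialogue.

    State  : the full dialogue history (all tokens so far).
    Action : the single next token chosen by the policy.
    """
    trajectory = []  # (state, action) pairs, one per model decision
    for _ in range(max_model_tokens):
        token = lm_policy(dialogue)            # one decision per token
        trajectory.append((list(dialogue), token))
        dialogue.append(token)
        if token == END_OF_TURN:
            # The model ended its line, so it knows the next tokens come
            # from the human; those are environment dynamics, not actions.
            dialogue.extend(human_simulator(dialogue))
    return dialogue, trajectory
```

Contrast this with the bandit view, where the whole utterance would be a single action and `trajectory` would hold one entry per turn rather than one per token.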