Is Your Model Modeling the Human Ability to Plan, or Just Getting Better Performance?
The search ends up looking very similar to the kind of search that we do in poker. It's a regret minimization algorithm. We don't look far into the future because Diplomacy has such a huge branching factor. The other major difference is that we add a KL penalty for deviating from the human imitation learning policy. That modifies the algorithm so that, instead of just computing an equilibrium that ignores human behavior, it tries to find a human-compatible policy.
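As a rough illustration of the idea being described, here is a minimal sketch in a toy two-player zero-sum matrix game. It assumes a closed-form KL-regularized response (policy proportional to the anchor policy times exp of action values over a temperature), iterated against the opponent and averaged. The function names, the matrix-game setting, and the closed-form update are illustrative assumptions, not the actual regret-minimization search used in Diplomacy.

```python
import numpy as np

def kl_regularized_response(q_values, anchor, lam):
    """Response to q_values under a KL penalty toward the anchor policy.

    Maximizing  E_pi[q] - lam * KL(pi || anchor)  has the closed form
    pi(a) proportional to anchor(a) * exp(q(a) / lam).
    (Illustrative simplification, not the algorithm from the episode.)
    """
    logits = np.log(anchor + 1e-12) + q_values / lam
    logits -= logits.max()              # numerical stability
    policy = np.exp(logits)
    return policy / policy.sum()

def regularized_self_play(payoff, anchor_row, anchor_col, lam=1.0, iters=1000):
    """Iterate KL-regularized responses in a zero-sum matrix game and
    return the time-averaged policies for both players."""
    n, m = payoff.shape
    avg_row, avg_col = np.zeros(n), np.zeros(m)
    pi_row, pi_col = anchor_row.copy(), anchor_col.copy()
    for _ in range(iters):
        # Each player's action values against the opponent's current policy.
        q_row = payoff @ pi_col           # row player maximizes payoff
        q_col = -(payoff.T @ pi_row)      # column player minimizes it
        pi_row = kl_regularized_response(q_row, anchor_row, lam)
        pi_col = kl_regularized_response(q_col, anchor_col, lam)
        avg_row += pi_row
        avg_col += pi_col
    return avg_row / iters, avg_col / iters

# Example: rock-paper-scissors with a hypothetical "human-like" anchor
# that over-plays the first action.
payoff = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
anchor = np.array([0.6, 0.2, 0.2])
row_policy, col_policy = regularized_self_play(payoff, anchor, anchor, lam=0.5)
```

With a large penalty weight the resulting policies stay close to the anchor (the imitation-learning behavior), and as the weight shrinks they approach the unregularized equilibrium, which is the trade-off the speaker is pointing at.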