
Episode 27: Noam Brown, FAIR, on achieving human-level performance in poker and Diplomacy, and the power of spending compute at inference time
Generally Intelligent
Is Modeling the Human Ability to Plan Getting Better Performance?
The search ends up looking very similar to the kind of search that we do in poker. It's a regret minimization algorithm. We don't look far into the future because Diplomacy has such a huge branching factor. The other major difference is that we add a KL penalty for deviating from the human imitation-learning policy. That modifies the algorithm so that, instead of just computing an equilibrium that ignores human behavior, it tries to find a human-compatible policy.
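The KL penalty described here can be illustrated with a small sketch. The objective "maximize expected value minus a KL penalty toward an anchor policy" has a closed-form solution: the new policy is proportional to the anchor policy times exp(value / lambda). This is a generic illustration of KL-anchored policy selection, not the exact algorithm from the episode; the function name and parameters are made up for the example.

```python
import numpy as np

def kl_anchored_response(q_values, anchor, lam):
    """Maximize E_pi[q] - lam * KL(pi || anchor) over policies pi.

    The closed-form solution is pi(a) proportional to
    anchor(a) * exp(q(a) / lam). A large lam keeps the policy close
    to the anchor (e.g. a human imitation-learning policy); a small
    lam lets the policy chase the highest-value action.
    """
    logits = np.log(anchor) + q_values / lam
    logits -= logits.max()  # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Hypothetical example: three actions, the anchor strongly prefers
# action 0, but action 1 has the highest estimated value.
q = np.array([0.0, 1.0, 0.0])
anchor = np.array([0.7, 0.2, 0.1])

# Strong penalty: the result stays close to the human-like anchor.
print(kl_anchored_response(q, anchor, lam=100.0))

# Weak penalty: the result concentrates on the highest-value action.
print(kl_anchored_response(q, anchor, lam=0.01))
```

The interpolation between "pure equilibrium play" and "play like the anchor" is controlled entirely by the penalty weight, which mirrors the trade-off described in the quote.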