
Episode 28: Sergey Levine, UC Berkeley, on the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems
Generally Intelligent
RL challenges and the importance of architecture selection in large-scale offline RL
In RL, there are two common pitfalls: overfitting to the target values and discarding too much detail (see the sketch below).
Adding data diversity and using larger models can improve RL performance.
Selecting architectures that are easy to optimize, and going slightly larger in model size, can mitigate some of the difficulties in large-scale RL efforts.
Reinforcement learning requires representing both optimal and sub-optimal behaviors, with the latter often being more complex.
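The points about target values and model size can be made concrete with a short sketch. The example below is an illustration, not code from the episode: it shows the standard practice of regressing an online Q-network toward values produced by a frozen target copy, so the network is not chasing (and overfitting to) its own moving estimates. The QNetwork class, the hidden width of 512 standing in for a "slightly larger" architecture, and the random batch standing in for logged offline data are all assumptions made for the sketch.

```python
# Minimal offline-style Q-learning update with a frozen target network (illustrative sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=512):  # hidden=512: a modestly larger MLP, assumed for illustration
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

obs_dim, n_actions, gamma = 8, 4, 0.99
q_net = QNetwork(obs_dim, n_actions)
target_net = QNetwork(obs_dim, n_actions)
target_net.load_state_dict(q_net.state_dict())  # frozen copy used only to compute regression targets
optimizer = torch.optim.Adam(q_net.parameters(), lr=3e-4)

# A random batch standing in for a batch of logged (offline) transitions.
obs = torch.randn(32, obs_dim)
actions = torch.randint(n_actions, (32,))
rewards = torch.randn(32)
next_obs = torch.randn(32, obs_dim)
dones = torch.zeros(32)

with torch.no_grad():
    # Bootstrapped targets come from the frozen network, so the online
    # network is not regressing onto its own constantly shifting estimates.
    next_q = target_net(next_obs).max(dim=1).values
    targets = rewards + gamma * (1.0 - dones) * next_q

q_values = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
loss = F.mse_loss(q_values, targets)

optimizer.zero_grad()
loss.backward()
optimizer.step()

# In practice the target copy is refreshed periodically, e.g.:
# target_net.load_state_dict(q_net.state_dict())
```

In practice the target copy is refreshed every few thousand gradient steps or via Polyak averaging, and widening the network somewhat, as above, is one of the cheaper ways to make this regression easier to optimize.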