
Episode 28: Sergey Levine, UC Berkeley, on the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems
Generally Intelligent
Offline Reinforcement Learning - What's Next?
In my group at Berkeley, we're focusing a lot on what we call offline reinforcement learning algorithms. The idea is that reinforcement learning is traditionally thought of as a very online, interactive learning regime. By contrast, the most successful large-scale machine learning systems train on datasets that are stored to disk and then reused repeatedly, because you don't want to recollect that data interactively every time you retrain the system. Bringing that same recipe to reinforcement learning is the premise behind offline reinforcement learning.
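To make the data flow concrete, here is a minimal sketch (not code from the episode) of what "training on a dataset stored to disk" looks like in the offline setting: the agent only samples from a fixed set of logged transitions and never interacts with the environment. The toy MDP, the dataset fields, and the plain tabular TD update are illustrative assumptions; real offline RL algorithms add corrections for distribution shift that are not shown here.

```python
# Minimal sketch of the offline RL data flow (illustrative, not Levine's code).
# Key point: no env.step() during training -- only reuse of a stored dataset
# of (state, action, reward, next_state) transitions.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for data loaded from disk: transitions logged by some behavior
# policy in a toy MDP with 5 states and 2 actions (all values are synthetic).
n_states, n_actions, n_transitions = 5, 2, 10_000
dataset = {
    "state":      rng.integers(0, n_states, n_transitions),
    "action":     rng.integers(0, n_actions, n_transitions),
    "reward":     rng.normal(size=n_transitions),
    "next_state": rng.integers(0, n_states, n_transitions),
}

gamma, lr = 0.99, 0.1
Q = np.zeros((n_states, n_actions))

# Offline training loop: repeatedly reuse the stored transitions,
# exactly as one would reuse a supervised-learning dataset.
for epoch in range(50):
    for i in rng.permutation(n_transitions):
        s, a, r, s2 = (dataset[k][i] for k in ("state", "action", "reward", "next_state"))
        target = r + gamma * Q[s2].max()      # bootstrapped TD target
        Q[s, a] += lr * (target - Q[s, a])    # update without any environment interaction

print("Greedy policy from offline data:", Q.argmax(axis=1))
```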