
Episode 28: Sergey Levine, UC Berkeley, on the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems
Generally Intelligent
Is There a Middle Ground to Scaling Up Reinforcement Learning?
I think it's entirely possible to take that problem and divide it into its constituent parts, so that if we're developing an algorithm that is supposed to enable reinforcement learning with language models, well, that part can be done with a smaller model. So dividing the problem appropriately can make this quite feasible. It does seem like in reinforcement learning the models are much, much smaller than they are in many other parts of machine learning. Do you have any sense for exactly why that is? Is it just historical? Is it merely a performance thing?