
Episode 28: Sergey Levine, UC Berkeley, on the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems
Generally Intelligent
Meta-Learning
The ideal meta-learning method is to have something that can get a little bit of data for a new problem, use that to solve that problem, but also use it to improve the model. It's not actually obvious how to do that, or whether the advent of large language models makes it easier or harder, but it's an important problem. The logical conclusion of this kind of stuff is a lifelong online meta-learning procedure where, for every new task you're exposed to, you can adapt to it more quickly and improve your model so that it can adapt to the next task even more quickly.
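The lifelong loop described above can be sketched in a few lines. This is only an illustrative assumption, not anything from the episode: it uses a Reptile-style meta-update on toy linear-regression tasks, and the function names (`adapt`, `lifelong_meta_learning`, `make_task`) are hypothetical placeholders for whatever learner and task stream you actually have.

```python
import numpy as np

def adapt(meta_params, task_data, inner_lr=0.1, inner_steps=5):
    """Quickly adapt the meta-parameters to a new task using a little data.
    The 'model' here is a linear regressor trained by a few gradient steps
    on squared error (a stand-in for any fast-adaptation procedure)."""
    w = meta_params.copy()
    X, y = task_data
    for _ in range(inner_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= inner_lr * grad
    return w

def lifelong_meta_learning(task_stream, dim, outer_lr=0.05):
    """Online meta-learning loop: for each incoming task, adapt with a
    little data (solving that task), then nudge the meta-parameters toward
    the adapted solution so the next task can be solved even faster
    (Reptile-style outer update)."""
    meta_params = np.zeros(dim)
    for task_data in task_stream:
        adapted = adapt(meta_params, task_data)             # solve the new task
        meta_params += outer_lr * (adapted - meta_params)   # improve the model
    return meta_params

# Toy usage: a stream of related linear-regression tasks.
rng = np.random.default_rng(0)

def make_task(dim=5, n=20):
    w_true = rng.normal(loc=1.0, scale=0.1, size=dim)  # tasks share structure
    X = rng.normal(size=(n, dim))
    return X, X @ w_true + 0.01 * rng.normal(size=n)

meta = lifelong_meta_learning((make_task() for _ in range(100)), dim=5)
```

The point of the sketch is the shape of the loop, not the particular update rule: each task contributes both a solution and a small improvement to the shared initialization used for future tasks.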