On the face of it, there’s no obvious limit to the reinforcement learning paradigm: you put an agent in an environment and reward it for taking good actions until it masters a task.
By last year, RL had achieved some amazing things, including mastering Go, various Atari games, StarCraft II, and so on. But the holy grail of AI isn’t to master specific games, but rather to generalize: to make agents that can perform well on new games they haven’t been trained on before.
Fast forward to July of this year, though, and a team at DeepMind published a paper called “Open-Ended Learning Leads to Generally Capable Agents”, which takes a big step in the direction of general RL agents. Joining me for this episode of the podcast is one of the co-authors of that paper, Max Jaderberg. Max came into the Google ecosystem in 2014 when Google acquired his computer vision company, and more recently he started DeepMind’s open-ended learning team, which is focused on pushing machine learning further into the territory of cross-task generalization. I spoke to Max about open-ended learning, the path ahead for generalization, and the future of AI.
---
Intro music by:
➞ Artist: Ron Gelinas
➞ Track Title: Daybreak Chill Blend (original mix)
➞ Link to Track: https://youtu.be/d8Y2sKIgFWc
---
Chapters:
- 0:00 Intro
- 1:30 Max’s background
- 6:40 Differences in procedural generation
- 12:20 The qualitative side
- 17:40 Agents’ mistakes
- 20:00 Measuring generalization
- 27:10 Environments and loss functions
- 32:50 The potential of symbolic logic
- 36:45 Two distinct learning processes
- 42:35 Forecasting research
- 45:00 Wrap-up