
Jacob Beck and Risto Vuorio
TalkRL: The Reinforcement Learning Podcast
Is It a Meta-RL Problem Setting or an Algorithm for Deep Learning?
Deep neural networks are very finicky: they generalize a little bit, but they don't really extrapolate; they mostly interpolate, the way I understand it. So do you think the fact that our current function approximators have limited generalization forces us to look more towards meta-RL? If we somehow came up with improved function approximators that could generalize a bit better, then maybe we wouldn't need as much meta-RL. I'm asking if there's any truth to that.