
Episode 24: Jack Parker-Holder, DeepMind, on open-endedness, evolving agents and environments, online adaptation, and offline learning
Generally Intelligent
Is It Possible to Improve the Robustness of Your Model?
I think it's huge. Imagine a situation where your model doesn't perfectly represent the test tasks that you care about. If it becomes more robust to perturbations, even if they're fake, I don't see how that's negative. We also did try it with pixels, by perturbing latent states, and then we realized that the baselines for pixel-based offline models were actually so well learned that it didn't seem to work that well, at least for supervised tasks like image classification. It could also be good to do it from observations too. You might be able to sample lighting conditions, different backgrounds, different distractors you've never seen before. And then combine that
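As a rough illustration of the observation-level perturbations mentioned above (lighting changes, noise acting as distractors), here is a minimal NumPy sketch. The function name and parameters are hypothetical, not from the episode; it is a sketch of the idea, not any specific implementation.

```python
import numpy as np

def augment_observation(obs, rng, brightness=0.2, noise_std=0.05):
    """Apply simple visual perturbations to an image observation.

    obs: float array in [0, 1], shape (H, W, C).
    brightness: max absolute brightness shift (hypothetical parameter).
    noise_std: std of additive Gaussian pixel noise (hypothetical parameter).
    """
    # Global brightness shift, emulating an unseen lighting condition.
    shifted = obs + rng.uniform(-brightness, brightness)
    # Per-pixel Gaussian noise, a crude stand-in for visual distractors.
    noisy = shifted + rng.normal(0.0, noise_std, size=obs.shape)
    # Keep pixel values in a valid range.
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
obs = np.full((64, 64, 3), 0.5)
aug = augment_observation(obs, rng)
print(aug.shape)
```

Training a policy or model on such augmented observations is one way to probe robustness to visual conditions never seen in the offline dataset.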