
Episode 25: Nicklas Hansen, UCSD, on long-horizon planning and why algorithms don't drive research progress


CHAPTER

The Risk of Catastrophic Forgetting When You Revisit the Environment

During training time, we added a self-supervised auxiliary objective, and we picked inverse dynamics. Then during test time, you update the self-supervised objective on a single example. But of course, just doing gradient descent on your test images, you have some risk of catastrophic forgetting. Basically, what we want to do is overfit very strongly to the environment that we're in at that specific point in time. If you change the test environment, or even go back to the training environment, it would perform worse than it did initially.
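The mechanism described here, a shared encoder trained with an inverse-dynamics auxiliary head and then updated by gradient descent on a single test transition, can be sketched in a few lines. This is a toy linear version for illustration only (all names, dimensions, and hyperparameters are assumptions, not the implementation discussed in the episode):

```python
# Toy sketch of test-time adaptation via an inverse-dynamics objective.
# A shared linear encoder W and an inverse-dynamics head V are updated by
# gradient descent on one test transition (x_t, x_tp1, a). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
D_OBS, D_LATENT, D_ACT = 4, 3, 2  # made-up dimensions

W = rng.normal(size=(D_LATENT, D_OBS)) * 0.5      # shared encoder
V = rng.normal(size=(D_ACT, 2 * D_LATENT)) * 0.5  # inverse-dynamics head

def inv_dyn_loss(W, V, x_t, x_tp1, a):
    """Squared error of predicting action a from two consecutive observations."""
    c = np.concatenate([W @ x_t, W @ x_tp1])
    return 0.5 * np.sum((V @ c - a) ** 2)

def adapt_step(W, V, x_t, x_tp1, a, lr=0.01):
    """One gradient-descent step of the self-supervised objective at test time."""
    z_t, z_tp1 = W @ x_t, W @ x_tp1
    c = np.concatenate([z_t, z_tp1])
    err = V @ c - a                       # action-prediction error
    grad_V = np.outer(err, c)
    grad_c = V.T @ err
    grad_W = np.outer(grad_c[:D_LATENT], x_t) + np.outer(grad_c[D_LATENT:], x_tp1)
    return W - lr * grad_W, V - lr * grad_V

# A single transition observed in the (shifted) test environment.
x_t, x_tp1 = rng.normal(size=D_OBS), rng.normal(size=D_OBS)
a = rng.normal(size=D_ACT)

loss_before = inv_dyn_loss(W, V, x_t, x_tp1, a)
for _ in range(20):
    W, V = adapt_step(W, V, x_t, x_tp1, a)
loss_after = inv_dyn_loss(W, V, x_t, x_tp1, a)
# The loss on this one example drops; the flip side is that repeated updates
# on it shift the shared encoder, which is exactly the catastrophic-forgetting
# risk mentioned above when you return to the training environment.
```

Because the encoder is shared with the policy, overfitting it to one test example is what makes the adapted agent perform worse when moved back to the original environment.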

