Episode 25: Nicklas Hansen, UCSD, on long-horizon planning and why algorithms don't drive research progress

The Risk of Catastrophic Forgetting When You Return to the Environment

During training time, we added a self-supervised auxiliary objective, and we picked inverse dynamics. Then during test time, we were updating the self-supervised objective on a single example. But of course, when you're just doing gradient descent on your test images, you have some risk of catastrophic forgetting. So basically what we're doing is overfitting very strongly to the environment that we're in at that specific point in time. If you change the test environment, or even go back to the training environment, it would perform worse than it did initially.
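
To make the setup concrete, here is a minimal PyTorch-style sketch of test-time adaptation with an inverse dynamics auxiliary head, assuming a shared encoder whose features feed both the policy and the auxiliary head. All names, dimensions, and hyperparameters here (Agent, adapt_step, obs_dim, the SGD learning rate, and so on) are illustrative assumptions, not the actual implementation discussed in the episode.

```python
# Sketch of test-time adaptation via a self-supervised inverse dynamics
# objective. Illustrative only; sizes and names are assumptions.
import torch
import torch.nn as nn

class Agent(nn.Module):
    def __init__(self, obs_dim=64, action_dim=4, hidden=128):
        super().__init__()
        # Shared encoder: trained with the policy, then updated again at test time.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Policy head: frozen at test time, since no reward signal is available.
        self.policy = nn.Linear(hidden, action_dim)
        # Inverse dynamics head: predicts the action taken between two
        # consecutive observations from their embeddings.
        self.inv_dyn = nn.Linear(2 * hidden, action_dim)

    def act(self, obs):
        return self.policy(self.encoder(obs))

    def inverse_dynamics_loss(self, obs, next_obs, action):
        z = torch.cat([self.encoder(obs), self.encoder(next_obs)], dim=-1)
        return nn.functional.mse_loss(self.inv_dyn(z), action)

def adapt_step(agent, optimizer, obs, next_obs, action):
    """One self-supervised gradient step on a single test-time transition.

    Repeated steps overfit the encoder to the current test environment,
    which is exactly the catastrophic-forgetting risk described above:
    returning to the training environment afterwards performs worse.
    """
    loss = agent.inverse_dynamics_loss(obs, next_obs, action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage on dummy data standing in for real test observations.
agent = Agent()
# Only the encoder and auxiliary head are adapted; the policy stays fixed.
optimizer = torch.optim.SGD(
    list(agent.encoder.parameters()) + list(agent.inv_dyn.parameters()), lr=1e-3
)
obs, next_obs = torch.randn(1, 64), torch.randn(1, 64)
action = agent.act(obs).detach()  # the action the agent actually took
adapt_step(agent, optimizer, obs, next_obs, action)
```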
