
BI 123 Irina Rish: Continual Learning



Scaling and Continual Learning for a Fixed Set of Tasks?

Will your pretrained model hit the wall at some point or not? That's a good question, and I think it's an interplay with the model's capacity. So to me, the relative scaling laws would be the most interesting thing to dive into. Maybe it's enough to just pretrain a humongous foundation model on multimodal data. But once you pretrain it, it has essentially solved continual learning.

Transcript
