2min chapter

BI 123 Irina Rish: Continual Learning

Brain Inspired

CHAPTER

Scaling and Continuous Learning for a Fixed Set of Tasks?

Will your pretrained model hit the wall at some point or not? That's a good question, and I think it's an interplay with the model's capacity. So to me, relative scaling laws would be the most interesting thing to dive into. Maybe it's enough to just pretrain a humongous foundation model on multimodal data. But once you pretrain it, has it essentially solved continual learning?
