37-second snip

The End of Finetuning — with Jeremy Howard of Fast.ai

Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and all things Software 3.0

NOTE

Embrace Continued Pre-training Over Fine-tuning

Machine learning tasks have evolved from narrow, task-specific objectives, such as sentiment classification, toward more general ones such as Reinforcement Learning from Human Feedback (RLHF), which trains a model to generate responses that humans rate favourably. Fine-tuning a pre-trained model on a narrow objective, however, can cause catastrophic forgetting, where capabilities acquired during pre-training are lost. The suggestion is to drop the conventional notion of fine-tuning altogether and treat further training as continued pre-training: keep training the model much as it was originally trained, with the new data folded into a broad mixture, rather than specializing it on a small task-specific dataset.
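The episode itself contains no code, but a minimal sketch of what continued pre-training (as opposed to task-specific fine-tuning) can look like in practice is shown below, using Hugging Face Transformers. The concrete details are illustrative assumptions rather than anything from the source: the gpt2 base model, the domain_corpus.txt and general_corpus.txt files, and the 90/10 mixing ratio are placeholders. The point is simply that training continues with the original next-token objective on a broad data mix, which is one common way to limit catastrophic forgetting.

```python
# Sketch: continued pre-training with the causal LM objective on a mixed corpus.
# Model name, data files, mixing ratio, and hyperparameters are illustrative.
from datasets import load_dataset, interleave_datasets
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in for whatever base model is being extended
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# New-domain text plus a slice of general text; interleaving the two, rather
# than training on the new domain alone, keeps the data mixture broad.
domain = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
general = load_dataset("text", data_files={"train": "general_corpus.txt"})["train"]
mixed = interleave_datasets([domain, general], probabilities=[0.9, 0.1], seed=0)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = mixed.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="continued-pretrain",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=1e-5,  # low LR: extend what the model knows, don't overwrite it
    ),
    train_dataset=tokenized,
    # mlm=False keeps the standard next-token (causal LM) objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The only structural difference from a typical fine-tuning script is the data and the objective: there is no task-specific head or labelled dataset, just more of the same language-modelling setup on a corpus that includes the new material.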
