The End of Finetuning — with Jeremy Howard of Fast.ai

Latent Space: The AI Engineer Podcast

The Importance of Continued Pre-training in Model Training

Catastrophic forgetting can be avoided by training a model on a sufficiently diverse mix of data throughout. The traditional three-step pipeline of building task-specific models (pre-train, then fine-tune, then RLHF) is now considered wrong because fine-tuning is being used for purposes it was never designed for. Models break down under steps like RLHF, which demand broader generalization than narrow task-specific training provides. Fine-tuning should therefore be set aside in favor of continued pre-training, in which all types of data and problems are included in the mix from the start.

Play episode from 35:53
