
The End of Finetuning — with Jeremy Howard of Fast.ai

Latent Space: The AI Engineer Podcast


The Importance of Continued Pre-training in Model Training

Catastrophic forgetting can be avoided by training a model on a diverse mix of data. The familiar three-step recipe for building task-specific models is now considered wrong because it is being applied to purposes it was never designed for: models break down under steps like RLHF, which demand more general capability. Howard argues that fine-tuning as a separate final stage should be abandoned in favor of continued pre-training, in which all types of data and problems are included in the mix from the start.
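The data-mixing idea behind continued pre-training can be sketched minimally: instead of training only on new task data, each batch is drawn from both the general corpus and the task data, so the model keeps rehearsing its original distribution. This is an illustrative sketch, not Howard's actual pipeline; the function name, the sampling ratio, and the toy datasets are all assumptions.

```python
import random

def mixed_stream(general_corpus, task_data, task_fraction=0.3,
                 n_examples=100, seed=0):
    """Yield a training stream that interleaves task-specific examples
    with general pre-training data (hypothetical sketch).

    Keeping general data in the mix is what counters catastrophic
    forgetting: the model never stops seeing its original distribution.
    """
    rng = random.Random(seed)
    for _ in range(n_examples):
        # With probability `task_fraction`, sample from the new task data;
        # otherwise rehearse an example from the general corpus.
        source = task_data if rng.random() < task_fraction else general_corpus
        yield rng.choice(source)

# Toy usage: both sources appear in the resulting stream.
stream = list(mixed_stream(["general_a", "general_b"], ["task_x"],
                           task_fraction=0.3, n_examples=50))
```

In a real training loop the same principle applies at the batch level: the loss is computed over a mixture of old and new data rather than over task data alone.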

