

Bonus: OpenAI GPT-4.5: And the future of pre-training is...
Mar 1, 2025
In this insightful discussion, Kate Soule, a veteran in AI, and Chris Hay, an experienced AI analyst, dive deep into the unveiling of OpenAI's GPT-4.5. They explore whether pre-training is becoming obsolete, examining the shift toward inference-focused models, and they highlight how to choose between models by weighing cost against performance. They also tackle the evolving dynamics of AI pricing and the impact of sophisticated tooling on user experience. This conversation is a must-listen for anyone interested in the future of AI.
Inference-Time Compute
- GPT-4.5's release sparked debate about the future of pre-training because of the model's size and training cost.
- Inference-time compute, rather than pre-training compute, may become the more important driver of model performance.
Pre-training's Importance
- Pre-training remains essential as the foundation that inference-time compute builds on.
- Fine-tuning techniques developed for inference could, in turn, improve pre-training itself.
Alignment over Pre-training
- GPT-4.5's humor and personality come from alignment work, not from extended pre-training.
- Innovation may shift toward alignment techniques rather than relying solely on larger pre-training runs.