The Real Python Podcast

Large Language Models on the Edge of the Scaling Laws

Sep 5, 2025
Jodie Burchell, an AI tooling specialist passionate about learning C and CUDA programming, shares insights on the rapidly evolving landscape of large language models. They discuss the GPT-5 release and the industry's struggle with diminishing returns in scaling. Jodie highlights flaws in model assessments and the complexities of measuring AI intelligence. The conversation also touches on economic factors influencing job markets and the challenges developers face with AI integration and productivity in software development.
INSIGHT

Post‑GPT‑5 Caution Replaces Hype

  • The aftermath of GPT-5's release left the field more cautious and less hyped than after earlier leaps.
  • Jodie Burchell says the model's capabilities were overstated, despite genuinely useful applications.
INSIGHT

Scaling Laws Shaped The LLM Boom

  • Scaling laws drove the era: historically, bigger models trained on more data reliably improved performance.
  • Burchell explains this trend began with transformers and the 2020 scaling laws paper.
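As a hedged aside (my summary, not from the episode): the 2020 scaling laws paper referenced here (Kaplan et al., "Scaling Laws for Neural Language Models") modeled test loss as a power law in model size, which is roughly:

```latex
% Approximate form from Kaplan et al. (2020): test loss as a
% power law in non-embedding parameter count N.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076
% Analogous power laws hold for dataset size D and compute C,
% which is why "bigger models + more data" kept improving loss --
% and why gains diminish: halving loss requires a multiplicative
% increase in N, not an additive one.
```

The exponent's small size is the crux of the diminishing-returns discussion: each constant-factor improvement in loss demands an order-of-magnitude increase in scale.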
INSIGHT

Post‑Training Pushed Specialization

  • When scaling gains plateaued, teams shifted to post‑training and fine‑tuning to squeeze out additional performance.
  • Burchell warns post‑training tends to specialize models rather than produce general intelligence.