
The state of AI in 2025: Reasoning, regulation, and rethinking scale

Sep 7, 2025
Fergal Reid, Chief AI Officer at Intercom and a renowned AI strategist, joins VP of Design Emmet Connolly to discuss the evolving landscape of AI. They explore the limits of scaling pre-training, the rise of reasoning models, and the implications of DeepSeek’s industry-shifting optimizations. The duo delves into the shrinking gap between frontier and open models, the importance of the AI layer, and the philosophical underpinnings of verification in AI advancements, emphasizing practical guidance for builders navigating the post-training-first world.
INSIGHT

Pre-Training Plateau And Post-Training Rise

  • Pre-training scale-ups (bigger models) no longer promise large IQ jumps; progress has stalled relative to earlier leaps.
  • Instead, reasoning at test time and large-scale post-training reinforcement learning now drive capability gains (a minimal sketch of the test-time idea follows).
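To make the test-time-scaling idea concrete, here is a minimal best-of-n sketch in Python. sample_chain_of_thought and verifier_score are hypothetical stand-ins for a model call and a verifier, not anything named in the episode; the point is only that extra inference compute (more samples, then selection) can substitute for a bigger pre-trained model.

    import random

    def sample_chain_of_thought(prompt: str) -> str:
        # Hypothetical stand-in: a real system samples a reasoning trace from an LLM.
        return f"candidate for {prompt!r} (draw={random.random():.3f})"

    def verifier_score(prompt: str, trace: str) -> float:
        # Hypothetical stand-in: a real verifier runs unit tests, a math checker,
        # or a learned reward model over the trace.
        return random.random()

    def best_of_n(prompt: str, n: int = 8) -> str:
        # Spend more inference compute (n sampled traces, then selection)
        # instead of relying on a bigger pre-trained model.
        candidates = [sample_chain_of_thought(prompt) for _ in range(n)]
        return max(candidates, key=lambda c: verifier_score(prompt, c))

    print(best_of_n("What is 17 * 24?"))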
INSIGHT

How Pre-Training And RL Complement Each Other

  • Pre-training teaches next-token prediction across huge corpora and produces world-model-like representations.
  • Reinforcement learning then sharpens capabilities by iteratively rewarding successful multi-step reasoning (both signals are sketched below).
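A rough sketch of those two training signals side by side, written in PyTorch. The shapes, the tiny model, and the REINFORCE-style objective are illustrative assumptions; a real model would put a full attention stack between the embedding and the output head.

    import torch
    import torch.nn.functional as F

    vocab, d = 100, 32
    emb = torch.nn.Embedding(vocab, d)
    head = torch.nn.Linear(d, vocab)

    # 1) Pre-training signal: next-token prediction (cross-entropy on shifted tokens).
    tokens = torch.randint(0, vocab, (4, 16))           # (batch, seq) of token ids
    hidden = emb(tokens[:, :-1])                        # a real model inserts attention layers here
    logits = head(hidden)                               # (batch, seq-1, vocab)
    pretrain_loss = F.cross_entropy(logits.reshape(-1, vocab),
                                    tokens[:, 1:].reshape(-1))

    # 2) Post-training signal: reward whole sampled traces, reinforce the successful ones.
    trace_logprob = torch.randn(4, requires_grad=True)  # stand-in for summed token log-probs of 4 traces
    reward = torch.tensor([1.0, 0.0, 1.0, 0.0])         # 1 where a verifier judged the trace correct
    rl_loss = -(reward * trace_logprob).mean()          # REINFORCE-style objective

    (pretrain_loss + rl_loss).backward()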
INSIGHT

DeepSeek's Cost Revelation

  • DeepSeek showed that frontier-level models could be trained far more cheaply through engineering optimizations, and was unusually open about what its training cost.
  • That exposed how ambiguous other labs' training-cost claims had been, and pushed them to rethink strategic bets on scale and secrecy (see the back-of-envelope arithmetic below).
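The kind of arithmetic DeepSeek put on the record is easy to reproduce. The figures below are as reported in the public DeepSeek-V3 technical report (total H800 GPU-hours at an assumed $2/hour rental, excluding research and ablation runs), and the 6*N*D FLOPs rule is the standard dense-training estimate, not a claim from the episode.

    # Headline figure: GPU-hours times assumed rental price.
    gpu_hours = 2.788e6            # reported H800 GPU-hours for the full V3 run
    price_per_gpu_hour = 2.0       # rental price assumed in the report, USD
    print(f"headline cost: ${gpu_hours * price_per_gpu_hour / 1e6:.2f}M")  # ~$5.58M

    # Rough compute estimate: FLOPs ~= 6 * N * D (N = active params, D = tokens).
    active_params = 37e9           # V3 activates ~37B of its MoE parameters per token
    tokens = 14.8e12               # reported pre-training tokens
    print(f"training compute: ~{6 * active_params * tokens:.2e} FLOPs")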