The state of AI in 2025: Reasoning, regulation, and rethinking scale

Sep 7, 2025
Fergal Reid, Chief AI Officer at Intercom, joins VP of Design Emmet Connolly to discuss the evolving AI landscape. They explore the limits of scaling pre-training, the rise of reasoning models, and the implications of DeepSeek's industry-shifting optimizations. The pair also cover the shrinking gap between frontier and open models, the importance of the application layer built on AI, and the role of verification in recent AI advances, offering practical guidance for builders navigating a post-training-first world.
AI Snips
ANECDOTE

DeepSeek's Engineering And Transparency

  • DeepSeek V3 paired hardware optimizations and engineering talent to boost performance.
  • They also published training costs, revealing a lower bar for frontier model development.
INSIGHT

Representation Convergence Hypothesis

  • Representation convergence suggests different models trained on varied data can learn similar world representations.
  • That may enable specialist RL training to yield broadly general reasoning capabilities.
INSIGHT

Models Resemble Each Other And Humans

  • Models across labs show surprising similarity in capabilities and human-like error patterns.
  • This empirical similarity strengthens the idea of a shared, human-interpretable internal representation.