LessWrong (Curated & Popular)

“Slowdown After 2028: Compute, RLVR Uncertainty, MoE Data Wall” by Vladimir_Nesov

May 3, 2025
The discussion explores an anticipated slowdown in AI training compute around 2029, as funding limits are reached and natural text data runs short. It highlights the uncertain potential of reasoning training (RLVR), which may only elicit capabilities already latent in base models rather than create new ones. The hosts analyze the implications of these scaling challenges, suggesting that further advances may take decades rather than years, and emphasize the data inefficiency of Mixture of Experts models and the need for transformative breakthroughs to sustain progress.
INSIGHT

AI Training Compute Slowdown

  • Training compute for AI is expected to slow sharply after 2028 as funding limits are reached and natural text data runs out.
  • The scaling pace of training systems may regress from 3.55x per year to 1.4x per year, stretching further progress out to around 2050 (a rough version of this arithmetic is sketched below).
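A minimal back-of-the-envelope sketch of that timeline claim, in Python. The growth rates are the ones quoted in the snip; the target multiple of 2028 compute is an illustrative assumption, not a figure from the episode.

```python
import math

# Growth rates quoted in the snip: ~3.55x/year through 2028, ~1.4x/year after.
FAST_RATE = 3.55
SLOW_RATE = 1.4

def years_to_reach(multiple: float, rate: float) -> float:
    """Years of compounding at `rate`x per year needed to grow compute by `multiple`."""
    return math.log(multiple) / math.log(rate)

# Illustrative assumption: suppose some capability level would require ~1000x
# the training compute available in 2028.
target_multiple = 1000

print(f"at {FAST_RATE}x/yr: {years_to_reach(target_multiple, FAST_RATE):.1f} years after 2028")
print(f"at {SLOW_RATE}x/yr: {years_to_reach(target_multiple, SLOW_RATE):.1f} years after 2028")
# ~5.5 years at the fast rate (mid-2030s) vs ~20.5 years at the slow rate (~2049),
# which is roughly where an "around 2050" figure comes from under these assumptions.
```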
INSIGHT

Limits of Reasoning Training

  • Current RL with verifiable rewards (RLVR) methods may mostly elicit capabilities already present in base models rather than create new ones.
  • The ceiling for RLVR-trained capabilities may therefore be set by the underlying base model, constraining future progress (see the pass@k sketch after this list).
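One common way to make the elicit-versus-create distinction concrete (an illustration, not something spelled out in the episode) is pass@k: if a base model can solve a task within k samples for large k, RLVR tends to shift that success toward pass@1, while tasks the base model never solves remain out of reach. A minimal sketch of the standard pass@k estimator, with hypothetical numbers:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: chance that at least one of k samples drawn
    from n attempts (of which c were correct) solves the task."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical numbers: a base model solves 3 of 64 attempts on a hard problem.
print(pass_at_k(n=64, c=3, k=1))    # ~0.05: rarely right on the first try
print(pass_at_k(n=64, c=3, k=64))   # 1.0: the capability is latent in the model

# If RLVR mainly elicits, the tuned model's pass@1 climbs toward the base
# model's pass@large-k ceiling; problems the base model never solves stay unsolved.
```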
INSIGHT

Data Inefficiency in MoE Models

  • Mixture of Experts (MoE) models require significantly more training data per active parameter than dense models, worsening data scarcity.
  • This data inefficiency threatens to exhaust natural text data sooner and to limit training as compute scales beyond 2028 (a rough illustration follows this list).
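A rough illustration of that pressure on the data supply. The Chinchilla-style ratio of about 20 tokens per parameter is a standard dense-model rule of thumb; the MoE multiplier and the model size are assumptions made for the example, not figures from the episode.

```python
# Dense compute-optimal rule of thumb (Chinchilla-style): ~20 tokens per parameter.
DENSE_TOKENS_PER_PARAM = 20
# Assumed multiplier: an MoE model wanting several times more tokens per *active*
# parameter (illustrative value, not from the episode).
MOE_DATA_MULTIPLIER = 3

def optimal_tokens(active_params: float, moe: bool = False) -> float:
    """Rough compute-optimal token budget for a model with `active_params`."""
    ratio = DENSE_TOKENS_PER_PARAM * (MOE_DATA_MULTIPLIER if moe else 1)
    return active_params * ratio

active = 100e9  # hypothetical 100B active parameters
print(f"dense: {optimal_tokens(active) / 1e12:.0f}T tokens")
print(f"MoE:   {optimal_tokens(active, moe=True) / 1e12:.0f}T tokens")
# With only tens of trillions of unique natural-text tokens available, the MoE
# configuration hits the data wall correspondingly sooner as compute scales.
```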