
LessWrong (Curated & Popular)
“Slowdown After 2028: Compute, RLVR Uncertainty, MoE Data Wall” by Vladimir_Nesov
Podcast summary created with Snipd AI
Quick takeaways
- By 2028, the rapid scaling of AI training compute is expected to slow significantly; without transformative commercial success, compute levels that continued fast scaling would have reached this decade may not arrive until around 2050.
- It remains uncertain how effectively current long-horizon reasoning training methods such as RLVR (reinforcement learning with verifiable rewards) scale, which could limit further gains in model capabilities and performance.
Deep dives
Forecasting AI Compute Trends
By 2028, the growth rate of AI training compute is expected to slow markedly, shifting from today's rapid scaling to a much more modest pace. Funding limits and the exhaustion of natural text data drive this anticipated slowdown: the cost of training systems rises exponentially with compute, and even a training system estimated at roughly $140 billion in 2028 would deliver only a fraction of the compute that further advances may require. At the slower pace, compute levels that continued rapid scaling would have reached within this decade might not be attained until around 2050. The overall implication is that without transformative commercial success in AI, the path to crucial capability thresholds could stretch over decades, calling the sustainability of current growth rates into question.
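
To make the extrapolation concrete, here is a minimal sketch of the arithmetic behind such a timeline. The growth rates (roughly 3.5x per year during rapid scaling, roughly 1.3x per year after the slowdown) and the number of additional "fast" years are illustrative assumptions chosen for the sketch, not figures quoted from the post.

```python
import math

# Minimal sketch of the timeline extrapolation. The rates below are
# illustrative assumptions, not figures quoted from the post:
#   - ~3.5x per year while rapid scaling of funding and hardware continues
#   - ~1.3x per year once funding growth stalls after 2028
FAST_GROWTH = 3.5
SLOW_GROWTH = 1.3
SLOWDOWN_YEAR = 2028

def years_to_multiply(multiple: float, annual_growth: float) -> float:
    """Years needed to grow training compute by `multiple` at a fixed annual rate."""
    return math.log(multiple) / math.log(annual_growth)

# If rapid scaling would have continued for a few more years past 2028,
# how long does the same compute increase take at the slower rate?
for fast_years in (3, 4, 5):
    multiple = FAST_GROWTH ** fast_years
    slow_years = years_to_multiply(multiple, SLOW_GROWTH)
    print(f"{fast_years} more fast years ({multiple:,.0f}x compute) "
          f"take ~{slow_years:.0f} years at {SLOW_GROWTH}x/year, "
          f"i.e. until around {SLOWDOWN_YEAR + round(slow_years)}")
```

Under these assumed rates, compute that continued fast scaling would have delivered in the early 2030s only arrives in the mid-2040s to around 2050, which matches the shape of the slowdown described above.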