ForeCast

Inference Scaling, AI Agents, and Moratoria (with Toby Ord)

Jun 16, 2025
Toby Ord, a Senior Researcher at Oxford University focused on existential risks, explores the ‘scaling paradox’ in AI: the diminishing returns deep learning models show as compute is scaled up. The conversation also covers the ethics of AI governance and the case for moratoria on advanced technologies. Toby then examines AI's shifting capabilities and the risks they pose to humanity, emphasizing the need to balance innovation with safety.
AI Snips
INSIGHT

AI Scaling Paradox Explored

  • The AI scaling paradox: ever-larger increases in compute yield diminishing returns in accuracy; see the sketch after this list.
  • Current deep learning models are also vastly less data-efficient than the human brain's learning process.
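
To make the shape of those diminishing returns concrete, here is a minimal sketch assuming training loss follows a Chinchilla-style power law L(C) = a·C^(-b); the constants a and b below are purely hypothetical illustrations, not figures from the episode or any fitted model.

```python
# Sketch of the scaling paradox under an assumed power law
# L(C) = a * C**(-b): each 10x increase in training compute
# buys a smaller absolute drop in loss than the last one.
# Constants are illustrative only, not fitted to any real model.

def loss(compute: float, a: float = 10.0, b: float = 0.05) -> float:
    """Hypothetical power-law loss as a function of training compute (FLOP)."""
    return a * compute ** -b

prev = loss(1e21)
for exp in range(22, 27):
    cur = loss(10.0 ** exp)
    print(f"1e{exp} FLOP: loss {cur:.3f}  (gain {prev - cur:.3f} from 10x more compute)")
    prev = cur
```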
INSIGHT

Diminishing Returns on Pre-training

  • Recent AI models like GPT-4.5 demonstrate slower capability gains despite much more compute.
  • This slowdown may result from exhausting the supply of high-quality training data, forcing reliance on lower-quality sources.
INSIGHT

Economic Impact of Inference Scaling

  • Inference scaling lets compute be dialed up or down per task, so the model performance a user receives depends on how much they pay; see the sketch after this list.
  • Access to capability therefore varies with ability to pay, reflecting socio-economic differences, unlike the more egalitarian access of earlier AI systems.
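
As one concrete (hypothetical) illustration of how per-task compute translates into per-user quality, here is a minimal best-of-n sampling sketch. The episode discusses inference scaling in general terms; `generate` and `score` below are stand-ins, not any real model API.

```python
import random

def generate(prompt: str) -> str:
    """Stand-in for sampling one candidate answer from a model."""
    return f"candidate-{random.randint(0, 999)} for {prompt!r}"

def score(answer: str) -> float:
    """Stand-in for a verifier or reward model rating an answer."""
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    """Spend more inference compute (larger n) to pick a better answer."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# A premium tier might afford n=64 samples per query; a free tier n=1.
print(best_of_n("Summarise the scaling paradox", n=4))
```

Because expected answer quality rises with n, per-query spend buys capability directly, which is the mechanism behind the tiered access described above.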