Big Technology Podcast

Is AI Scaling Dead? — With Gary Marcus

May 7, 2025
Cognitive scientist and AI skeptic Gary Marcus joins the discussion to explore whether large language model scaling has hit a ceiling. He shares insights on the diminishing returns from scaling and critiques the industry's reliance on ever-larger GPU clusters. The conversation touches on data privacy issues, ethical considerations in AI development, and the risks associated with both open-source and proprietary models. Marcus emphasizes the need for transparency and a more nuanced understanding of AI's future trajectory.
INSIGHT

Diminishing Returns from AI Scaling

  • Scaling laws suggested that AI models would improve predictably with more data and compute, but those returns are now diminishing.
  • The industry is quietly acknowledging that making models bigger no longer yields the exponential performance gains of past generations (a standard formulation is sketched after this list).
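The episode itself gives no formulas, but the diminishing-returns claim can be made concrete with the Chinchilla-style scaling law (Hoffmann et al., 2022), an assumption of mine rather than something Marcus cites, in which pretraining loss falls as a power law in parameter count N and training tokens D:

```latex
% Chinchilla-style parametric loss (Hoffmann et al., 2022):
% E is the irreducible loss; A, B, \alpha, \beta are fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% The published fit gave roughly \alpha \approx 0.34 and \beta \approx 0.28,
% so halving either loss term requires growing N or D by about 8x to 12x.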
INSIGHT

Scaling Up Models Yields Small Gains

  • Recent huge increases in model size have not produced proportionate improvements in performance.
  • The era of large jumps in AI model quality from scaling alone appears to be over; gains are now incremental and costly.
INSIGHT

Test Time Compute Offers Narrow Gains

  • Adding more compute at test time (additional reasoning steps) improves performance only in narrow, closed domains.
  • This method doesn't yield broad reasoning improvements; the models often just mimic human reasoning patterns rather than genuinely understanding them (a minimal sketch of the idea follows below).
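To make "compute at test time" concrete, here is a minimal sketch, mine rather than anything from the episode, of one common form of it: best-of-N sampling, where several candidate answers are drawn and one that a verifier accepts is kept. The `noisy_solver` and `verifier` below are hypothetical stand-ins for a model and an answer checker; the key assumption is that answers are checkable at all, which is precisely why the technique stays confined to closed domains such as arithmetic.

```python
import random

def noisy_solver(a: int, b: int) -> int:
    """Toy stand-in for a model: answers a*b, but is often slightly off."""
    error = random.choice([0, 0, 1, -1, a, -b])  # correct ~1/3 of the time
    return a * b + error

def verifier(a: int, b: int, answer: int) -> bool:
    """Closed-domain check: arithmetic answers can be verified exactly."""
    return answer == a * b

def best_of_n(a: int, b: int, n: int) -> int:
    """Spend more test-time compute: sample n candidates and return
    the first one the verifier accepts."""
    candidates = [noisy_solver(a, b) for _ in range(n)]
    for c in candidates:
        if verifier(a, b, c):
            return c
    return candidates[0]  # nothing verified: fall back to a guess

def accuracy(n: int, trials: int = 2000) -> float:
    """Estimate accuracy for a given per-problem sample budget n."""
    correct = 0
    for _ in range(trials):
        a, b = random.randint(2, 99), random.randint(2, 99)
        correct += verifier(a, b, best_of_n(a, b, n))
    return correct / trials

if __name__ == "__main__":
    random.seed(0)
    for n in (1, 4, 16):
        print(f"samples per problem = {n:2d}  accuracy = {accuracy(n):.2f}")
```

Running this, accuracy climbs steeply with the sample budget, but only because the verifier can reject wrong candidates. On open-ended tasks with no reliable checker, the extra samples have no way to identify the best answer, which matches the narrowness the insight describes.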