LessWrong (30+ Karma)

“On Dwarkesh Patel’s Second Interview With Ilya Sutskever” by Zvi

Dec 4, 2025
This discussion digs into the future of AI, with Ilya Sutskever predicting a shift back toward innovative research rather than sheer scale. He likens emotions to value functions, offering a lens on how humans learn. Sutskever argues that while current models shine on evaluations, they fall short on real-world impact and generalization. He expects superintelligent systems within 5-20 years, while acknowledging open challenges in alignment and in the ethical frameworks around such systems. The conversation also touches on how AI might redefine autonomy and intelligence.
AI Snips
INSIGHT

Impact Gap Between Benchmarks And Reality

  • Scaling pretraining yields strong eval scores but weaker real-world impact, plausibly because models are tuned toward benchmarks and are prone to reward hacking.
  • Zvi highlights this 'impact gap': models underperform on general usefulness despite high benchmark scores.
INSIGHT

Human Learning Is Data-Efficient And Rich

  • Humans learn with far less data and retain deeper, more robust knowledge than current models.
  • Ilya frames emotions as dense value signals that guide learning and uncertainty resolution; a toy sketch of the underlying RL notion follows below.
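
To make the analogy concrete, here is a minimal sketch of the reinforcement-learning notion of a value function that the comparison invokes: a TD(0) learner that maintains running estimates of how good each state is. The toy environment, state names, and constants are invented for illustration and are not from the interview.

```python
# Minimal TD(0) value-function sketch (illustrative only).
# The analogy: emotions act like a learned value signal that
# summarizes "how well things are going" and steers behavior.

import random

states = ["safe", "risky", "reward"]
V = {s: 0.0 for s in states}          # value estimates, V(s)
alpha, gamma = 0.1, 0.9               # learning rate, discount

def step(state):
    """Toy transition/reward dynamics (made up for illustration)."""
    if state == "safe":
        return ("risky", 0.0) if random.random() < 0.5 else ("safe", 0.1)
    if state == "risky":
        return ("reward", 1.0) if random.random() < 0.3 else ("safe", -0.2)
    return ("safe", 0.0)              # episode loops back to the start

state = "safe"
for _ in range(10_000):
    next_state, r = step(state)
    # TD(0) update: nudge V(s) toward r + gamma * V(s')
    V[state] += alpha * (r + gamma * V[next_state] - V[state])
    state = next_state

print(V)  # learned "gut feelings" about each state
```

The learned V values play the role the analogy assigns to emotions: a compressed, continuously updated summary of expected future reward that shapes behavior without explicit deliberation.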
INSIGHT

Scaling Alone Will Eventually Plateau

  • Simply scaling pretraining further (more data, parameters, and compute) likely yields diminishing returns; the toy power-law sketch below illustrates why.
  • Ilya expects we'll need new ideas and a return to research-driven innovation rather than blind scale.
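
One way to see why pure scaling saturates: in the power-law form common in the scaling-laws literature, loss decays toward an irreducible floor, L(C) = L_inf + a * C^(-alpha), so each extra order of magnitude of compute buys a smaller improvement. The constants below are invented for illustration, not fitted values.

```python
# Illustrative diminishing returns under a power-law scaling fit:
#   L(C) = L_inf + a * C**(-alpha)
# Constants are made up for illustration, not real fitted values.

L_inf, a, alpha = 1.7, 40.0, 0.05    # irreducible loss, scale, exponent

def loss(compute):
    return L_inf + a * compute ** (-alpha)

for exp in range(20, 27):            # compute from 1e20 to 1e26 FLOPs
    C = 10.0 ** exp
    gain = loss(C / 10) - loss(C)    # improvement bought by the last 10x
    print(f"C=1e{exp}: loss={loss(C):.3f}, gain from last 10x={gain:.3f}")
```

Each successive 10x of compute shrinks the gain by a constant factor (here 10^(-alpha), about 0.89), which is the quantitative shape behind "scaling alone will plateau."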