Dwarkesh Podcast

Why I don’t think AGI is right around the corner

Jul 3, 2025
The discussion dives into skepticism about whether current AI systems are on track to achieve artificial general intelligence. Timelines for AGI vary wildly among experts, with some believing it is just years away. The episode examines why large language models struggle to learn and adapt the way humans do, and argues that future progress will hinge on improved continual-learning capabilities.
INSIGHT

LLMs Lack Continual Learning

  • Current LLMs struggle to perform normal human-like labor despite impressive language tasks.
  • Their fundamental lack of continual learning limits transformative economic impact for Fortune 500 workflows.
ANECDOTE

Hands-on Experience with LLMs

  • Dwarkesh spent around 100 hours integrating LLM tools into his post-production work.
  • He found LLMs only '5 out of 10' effective even on simple language tasks like rewriting transcripts.
INSIGHT

Importance of Organic Learning

  • Human value chiefly comes from continual learning and self-correction, not raw intelligence.
  • LLMs can't learn organically from their failures like humans do during real-world practice.