

Why I don’t think AGI is right around the corner
Jul 3, 2025
The discussion examines skepticism about whether current AI systems can achieve artificial general intelligence. Expert timelines for AGI vary wildly, with some believing it is only years away. The conversation explores why large language models struggle to learn and adapt the way humans do, shedding light on their limitations, and argues that future progress will hinge on better continual-learning capabilities.
LLMs Lack Continual Learning
- Current LLMs struggle to perform ordinary human labor despite impressive performance on language tasks.
- Their fundamental lack of continual learning limits their transformative economic impact on Fortune 500 workflows.
Hands-on Experience with LLMs
- Dwarkesh spent around 100 hours integrating LLM tools into his post-production work.
- He found LLMs only "5 out of 10" effective even on simple language tasks such as rewriting transcripts.
Importance of Organic Learning
- The value of human workers comes chiefly from continual learning and self-correction, not raw intelligence.
- LLMs cannot learn organically from their failures the way humans do through real-world practice.