Don't Worry About the Vase Podcast

Dwarkesh Patel on Continual Learning

Jun 9, 2025
Dwarkesh Patel, a writer focused on AI and influential researcher interviews, dives deep into the world of continual learning in artificial intelligence. He discusses the limitations of large language models and their need to replicate human learning abilities. Intriguingly, he predicts advancements in AI’s role in complex tasks like tax preparation by next year, while also addressing the challenges and risks involved. The conversation further explores the shift from reinforcement to experiential learning, highlighting both potential and safety concerns for future AI development.
INSIGHT

LLMs Lack Continual Learning

  • Large language models (LLMs) lack continual learning, so they do not improve over time the way humans do.
  • This absence of persistent learning is a major bottleneck limiting their usefulness and their potential to transform the economy.
INSIGHT

Tools Compensate for Learning Gaps

  • Large context windows and tools let LLMs handle complex tasks despite their lack of traditional continual learning.
  • Combining tools with repeated interactions can approximate skill growth, much like learning to play an instrument.
ANECDOTE

Essay Co-writing with LLMs

  • Dwarkesh co-writes essays with an LLM, initially rejecting its poor paragraphs.
  • After feedback, the model improves within the session but forgets everything afterward, demonstrating that its learning is confined to a single session.