Interconnects

Contra Dwarkesh on Continual Learning

Aug 15, 2025
The discussion centers on continual learning in AI and its implications for artificial general intelligence. One argument holds that continual learning is not the primary bottleneck in AI advancement, and that the focus should instead be on scaling existing systems. The conversation also examines the limitations of current large language models, asking why they haven't transformed Fortune 500 workflows despite their capabilities.
INSIGHT

Continual Learning Isn't A Fundamental Bottleneck

  • Nathan argues that continual learning, as Dwarkesh frames it, does not block AI progress and will be addressed by new system designs.
  • He believes scaling and different architectures, rather than human-like learning, will drive progress.
ANECDOTE

100+ Hours Trying To Make LLM Tools Useful

  • Dwarkesh recounts spending 100+ hours building LLM tools and failing to make them reliably useful for post-production tasks.
  • He finds many short-horizon tasks work sometimes, but the models do not improve over time the way humans do.
INSIGHT

Don't Force AI To Be Humanlike

  • Nathan rejects the goal of making LLMs mimic humans, arguing that it constrains progress.
  • He accepts that LLMs reason differently and will nonetheless achieve continual improvement.