
Dwarkesh Podcast
An audio version of my blog post, Thoughts on AI progress (Dec 2025)
Dec 23, 2025

The discussion covers the complexities of AI progress and the limitations of current robotics. It highlights skepticism about automated AI researchers and the challenge of achieving human-like continual learning. The episode examines scaling in reinforcement learning, along with the significant compute needed for further advances. Predictions for the future include the potential for brain-like intelligences and the need for more efficient training methods. Finally, the importance of competition in driving innovation is emphasized.
AI Snips
Why Labs Pre-Bake Skills Into Models
- Reinforcement learning pipelines bake in task-specific skills because current models fail to generalize like humans.
- If models learned like children, expensive pre-training on every tool would be unnecessary.
Robotics Reveals The Generalization Gap
- Robotics highlights the crux: with a human-like learner, teleoperation and a few examples would suffice.
- The need for massive environment-specific practice shows that current learners lack broad on-the-job learning.
Dinner Example: Lab Slide Classification
- At a dinner, a biologist described lab-specific slide-reading tasks that require bespoke adaptation.
- Dwarkesh used this to show why per-lab training pipelines are impractical compared with human-like on-the-job learning.
