

AI and the great developer speed-up, with Joel Becker of METR
Aug 21, 2025
Joel Becker, a researcher at METR, joins to discuss fascinating insights from recent AI research. They reveal that AI coding tools may hinder productivity rather than enhance it, contrary to common beliefs. The conversation dives into the complexities of measuring developer performance, particularly in a 'flow state,' and reflects on the unpredictable impact of AI on software development. Topics also include the challenges in AI benchmarking and the importance of evolving standards as AI technology develops.
AI Snips
Measure AI By Human Time Saved
- METR measures AI capability by human time-to-complete tasks rather than saturated benchmark scores.
- Time-to-complete reveals capability growth that benchmarks can obscure.
Real Open-Source Developers In An RCT
- METR recruited 16 experienced contributors from major open-source projects and randomized their issues to allow or disallow AI.
- Developers used tools like Cursor, ChatGPT, and Claude when AI was allowed.
Perceived Speed Versus Measured Slowdown
- Developers predicted a 24% speed-up and experts predicted ~40%, yet actual measured times showed a slowdown.
- In retrospective self-reports, developers claimed a ~20% speed-up, even though objective timing showed a 19% increase in completion time.
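The gap between perception and measurement comes down to the sign of a simple relative-change calculation. A minimal sketch, using hypothetical task times chosen only to reproduce the percentages quoted above (they are not data from the study):

```python
def percent_change(baseline_hours: float, ai_hours: float) -> float:
    """Relative change in completion time when AI is allowed.

    Positive values mean a slowdown; negative values mean a speed-up.
    """
    return (ai_hours - baseline_hours) / baseline_hours * 100


# Hypothetical: a 2.0-hour task that takes 2.38 hours with AI allowed
# corresponds to the measured ~19% slowdown.
print(round(percent_change(2.0, 2.38)))   # 19

# Hypothetical: the same task finishing in 1.52 hours would match the
# ~24% speed-up developers predicted.
print(round(percent_change(2.0, 1.52)))   # -24
```

The point of the sketch is that the predicted and measured effects differ not just in magnitude but in sign, which is why self-reports and stopwatch data can disagree so sharply.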