

OpenAI cofounder Greg Brockman on the scaling hypothesis and refactoring as a killer AI use case
Jun 18, 2025
Greg Brockman, cofounder of OpenAI and Stripe's first engineer, traces AI's evolution in conversation with John Collison. They discuss OpenAI's distinctive bet on the scaling hypothesis and the pivotal lessons learned from deep learning experiments in Dota. Brockman recalls a moment when he thought OpenAI was doomed and looks ahead to AI's role in math and science. Key topics include energy bottlenecks, personalization, and the idea of refactoring as a killer AI use case. His insights convey both excitement about and caution toward AI's potential.
Episode notes
Dota 2 Revealed AI Scaling
- The Dota 2 AI project showed that repeatedly doubling compute kept improving performance without petering out.
- Unexpected strategies, such as baiting opponents, emerged on their own, revealing the unpredictable nature of deep learning progress.
Manage AI By Inputs
- When managing AI research projects, control the inputs and experiments, not the outcomes.
- Outcome-based milestones often fail in AI research because progress is inherently unpredictable.
Reverse Approach to AI Product
- Successful AI product development blurs the line between research and product so teams can respond rapidly to reality.
- OpenAI's approach was backwards: chase the technology first, then discover the problems it can solve.