
Scaling Laws Moving the AGI Goal Posts: AI Skepticism with Sayash Kapoor
Jul 31, 2025
Sayash Kapoor, co-author of 'AI Snake Oil', dives deep into the skepticism surrounding AGI claims and the complexities of AI adoption. He discusses how misaligned benchmarks fail to reflect real-world skills necessary in various sectors. Kapoor highlights the transformative potential of AI and the urgent need for informed policies amidst rapid militarization and evolving workforce dynamics. He advocates for regulatory humility, emphasizing that well-intentioned laws can have unexpected consequences. A nuanced understanding of AI's capabilities and limitations is essential for shaping its future.
AI Snips
Accessibility Grounds AI Expectations
- Public access to generative AI curbs overhyping by giving people direct experience and evidence.
- Firsthand examples of AI failures educate users and help set realistic expectations.
Balance AI Harm and Benefit Tracking
- Track both AI harms and benefits to keep policy discourse balanced.
- Judge AI's utility with empirical uplift studies rather than cherry-picked anecdotes.
Navigating Public Misinterpretations
- Misreadings of "AI Snake Oil" can lead audiences to dismiss AI entirely.
- Writing for the best readers avoids a defensive tone and keeps the book's nuanced message clear.