
Don't Worry About the Vase Podcast AI #140: Trying To Hold The Line
Oct 30, 2025
Discover why caution is essential in building superintelligence as the discussion unpacks recent AI developments. Explore the strengths and weaknesses of language models, alongside the implications of recent upgrades from major players like OpenAI and Anthropic. The conversation dives into the notable lack of understanding in AI, the risks of deepfakes, and the cultural backlash against AI technology. Political influences and the challenges of aligning superhuman intelligence also take center stage, with a sprinkle of humor to ease the tension.
Delay Superintelligence Until Safety Works
- Avoid building superintelligence until we know how to make it safe.
- Holding the line now can keep society from making control mistakes that would accelerate harm.
Release Waves Shift Incentives Fast
- Labs are shipping many product upgrades that reshape the trade-off between utility and risk.
- These releases shift incentives toward speed, monetization, and broader deployment.
Openness Doesn't Guarantee Less Censorship
- Freedom-of-speech scores vary widely across nations and models in practice.
- Closed American models often behave less restrictively than some 'open' models elsewhere.