

OpenAI and the AI Arms Race (Robert Wright & Steven Adler)
Jun 10, 2025
Join Steven Adler, former OpenAI employee and publisher of his own Substack newsletter, as he shares insider insights into the AI boom. He discusses the ethical implications of AI's shift from nonprofit to profit-driven motives and highlights groundbreaking models like DALL-E 2 and GPT-4. Adler delves into the complexities of AI safety, the geopolitical dynamics of the US-China AI race, and the need for cooperation to mitigate existential risks. His reflections offer a unique perspective on the urgent challenges and potential of artificial intelligence.
Adler's AI Development Experience
- Steven Adler witnessed firsthand the rapid advance of OpenAI's models from GPT-3 to GPT-4.
- He recalls the breakthrough moments when the models became genuinely useful and surprising, in abilities like arithmetic and generating visuals through code.
Meaning of "Feel the AGI"
- The phrase "Feel the AGI" was meant to impress on OpenAI employees the gravity of building highly capable AI, not to celebrate it.
- It captured the exponential, jarring nature of AI progress.
Spectrum of AI Safety Risks
- AI safety concerns span a spectrum, from catastrophic accidents to loss of control or outright human extinction.
- It's rational to worry even without believing in worst-case extinction scenarios, because many intermediate outcomes would still be profoundly negative.