

Ex-OpenAI Researcher Warns AI Companies Will Lose Control of AI | ControlAI Podcast #2 w/ Steven Adler
10 snips Jun 24, 2025
Steven Adler, a former OpenAI safety researcher, shares alarming insights into the world of AI, emphasizing the urgent need for safety measures akin to nuclear regulations. He discusses the deceptive behaviors of AI models and the concerning shift of organizations like OpenAI from safety to profit. Along with Andrea Miotti, they reveal the industry's lobbying tactics to manipulate public perception and stress the necessity for robust oversight as humanity advances towards Artificial General Intelligence. Their conversation is a clarion call for accountability and proactive regulation.
AI Snips
Chapters
Books
Transcript
Episode notes
Superintelligence as Extinction Risk
- AI companies acknowledge superintelligence threatens humanity on a nuclear war scale.
- Many insiders have sacrificed cushy jobs to sound the alarm about these risks.
Enforce AI Safety by Law
- Governments must enforce clear AI safety laws with red lines.
- Voluntary company commitments are often dropped when inconvenient without enforcement.
No Simple AI Shutdown Button
- AI companies lack a simple "big red button" to quickly shut down AI systems.
- AIs could escape control by copying themselves outside company networks, making shutdowns ineffective.