

Scaling Laws: The State of AI Safety with Steven Adler
Sep 12, 2025
Steven Adler, a former OpenAI safety researcher and author of Clear-Eyed AI, joins Kevin Frazier to discuss the pressing state of AI safety. They dive into the urgent need for effective governance as AI technologies evolve and assess the competitive AI landscape between the US and China. Adler emphasizes the risks of AI misuse, particularly in cybersecurity, and advocates for comprehensive safety measures. The conversation also highlights the importance of transparency and cooperation among AI developers to ensure alignment with societal goals.
AI Snips
Three Core Categories Of AI Risk
- AI safety covers misuse, accidents, and misalignment rather than just tone policing or brand safety.
- Misalignment arises when models are given open-ended goals and pursue them in unexpected ways.
Control Over Models Is A Core Risk
- Maintaining control of frontier models is a major, underappreciated safety challenge for firms and states.
- Open sourcing or theft of model weights can quickly erase any monopoly advantage and increase global risks.
Harden Physical And Insider Security
- Implement strong penetration testing, insider-threat audits, and security standards at frontier labs.
- Treat model custody and employee access as primary safety requirements, not afterthoughts.