Scaling Laws

The State of AI Safety with Steven Adler

Sep 9, 2025
Steven Adler, a former OpenAI safety researcher and author of Clear-Eyed AI, joins Kevin Frazier to dive into AI safety. They explore the importance of pre-deployment safety measures and the challenges of ensuring trust in AI systems. Adler emphasizes the critical need for international cooperation in tackling AI threats, especially amid U.S.-China tensions. He discusses how commercial pressures have transformed OpenAI's safety culture and stresses the necessity of rigorous risk assessment as AI technologies continue to evolve.
AI Snips
INSIGHT

Four Dimensions Of AI Risk

  • AI safety spans four major categories: geopolitical struggle, misuse (e.g., bioweapons), accidents, and misalignment.
  • Narrow definitions focused on 'tone policing' miss the large-scale harms powerful AI can enable.
INSIGHT

Power Creates Theft Risk

  • Building powerful AI without control risks adversaries stealing and running your models if the weights are accessible.
  • Open-sourcing weights accelerates diffusion and eliminates any monopoly on capability.
ADVICE

Harden Frontier Models With Real Security

  • Implement aggressive penetration testing and insider-threat audits for frontier AI systems.
  • Treat security as a first-class obligation, not an optional engineering add-on.