Future of Life Institute Podcast

Why the AI Race Undermines Safety (with Steven Adler)

Dec 12, 2025
Steven Adler, former safety researcher at OpenAI, dives into the intricate challenges of AI governance. He sheds light on the competitive pressures that push labs to release potentially dangerous models too quickly. Exploring the mental health impacts of chatbots, Adler raises critical questions about who bears responsibility when users are harmed by AI. He discusses the urgent need for international regulations like the EU AI Act and emphasizes the risks of deploying AIs without thorough safety evaluations, sparking a lively debate on the future of superintelligent systems.
INSIGHT

Race Dynamics Speed Risk

  • Competitive dynamics push companies to accelerate releases and sometimes cut corners on safety.
  • Steven Adler observed visible reactions across labs when rivals publish new models, increasing pressure to deploy.
ADVICE

Test Before You Train

  • Run proactive pre-training and pre-scaling evaluations, not only post-deployment tests.
  • Steven Adler urges labs not to start large training runs unless they have plausible evidence the resulting models won't cross dangerous capability thresholds.
ANECDOTE

Missed Safety Flags In A Tragic Case

  • OpenAI developed classifiers that flagged emotional attachment and harmful prompting, but reportedly the alerts were not routed to change backend behavior.
  • Steven Adler described the Adam Raine transcripts, in which safety flags triggered yet no intervention occurred.