TechCrunch Startup News

OpenAI says it may ‘adjust’ its safety requirements if a rival lab releases ‘high-risk’ AI

Apr 18, 2025
OpenAI is revising its safety framework: the company says it may adjust its safety requirements if a rival lab releases a high-risk AI model without comparable safeguards. The change raises questions about how competitive pressure shapes AI safety standards, and the episode explores the balance between rapid innovation and responsible development.
AI Snips
INSIGHT

OpenAI's Adaptive Safety Policy

  • OpenAI may adjust its AI safety requirements if competitors release high-risk systems without safeguards.
  • Any adjustments would be made cautiously, acknowledged publicly, and keep safeguards at a protective level.
INSIGHT

Shift to Automated Testing

  • OpenAI increasingly relies on automated evaluations to accelerate AI product development.
  • Human-led testing remains in place, but it plays a smaller role as pressure to ship quickly grows.
INSIGHT

Safety Testing Under Scrutiny

  • Reports claim OpenAI's safety testing has been compressed and is performed mostly on earlier versions of its models.
  • OpenAI disputes these claims and asserts it is not compromising on safety.