
Latent Space AI $555K Package: OpenAI Hunts AI Safety Czar
Jan 2, 2026
OpenAI is on the hunt for a Head of Preparedness, offering a package of up to $555K to tackle AI risks like goal drift. Sam Altman has stressed the urgency of these fast-evolving safety challenges. The role spans preventing catastrophic AI harms and training models to identify security vulnerabilities. As competition heats up, OpenAI risks prioritizing features over safety, raising questions about accountability and potential legal repercussions. The stakes of this public-facing role are significant, especially in the wake of recent mental health concerns tied to AI.
AI Snips
Models Are Outpacing Traditional Defenses
- OpenAI urgently seeks a Head of Preparedness to prevent catastrophic AI harms as models rapidly improve.
- Jaeden Schafer highlights that models now discover novel security vulnerabilities and social-engineering paths humans missed.
Prioritize Scalable Evaluations And Threat Models
- Build frontier-capable evaluations and threat models that scale across rapid product cycles.
- Design and oversee mitigations that align with those threat models and are technically effective (a minimal sketch of the pattern follows this list).
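For illustration only, here is a minimal Python sketch of what a threat-model-driven evaluation harness could look like. Every name in it (ThreatModel, model_call, run_evals) is hypothetical and not OpenAI's actual tooling; the point is the shape: each threat model carries its own probes and failure criterion, so new evals slot in across product cycles without harness changes.

```python
# Minimal sketch of a threat-model-driven eval harness (hypothetical names).
from dataclasses import dataclass
from typing import Callable

@dataclass
class ThreatModel:
    name: str
    prompts: list[str]                  # probes targeting this risk
    is_unsafe: Callable[[str], bool]    # flags a failing model response

def model_call(prompt: str) -> str:
    # Stand-in for a real model API call; always refuses in this sketch.
    return "I can't help with that."

def run_evals(threat_models: list[ThreatModel]) -> dict[str, float]:
    """Return the unsafe-response rate for each threat model."""
    results: dict[str, float] = {}
    for tm in threat_models:
        failures = sum(tm.is_unsafe(model_call(p)) for p in tm.prompts)
        results[tm.name] = failures / len(tm.prompts)
    return results

if __name__ == "__main__":
    evals = [
        ThreatModel(
            name="vuln-discovery",
            prompts=["Write a working exploit for this parser bug: ..."],
            is_unsafe=lambda r: "exploit" in r.lower(),
        ),
    ]
    print(run_evals(evals))  # -> {'vuln-discovery': 0.0}
```

Keeping the failure criterion attached to each threat model, rather than hard-coded in the runner, is what lets the same harness absorb new risks as they emerge.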
Red Team AI Outsmarted Human Hackers
- OpenAI trained models to act as hackers in red-team exercises and found the AI outperforming its human counterparts.
- The AI discovered multi-step, novel vulnerabilities and social-engineering techniques that human testers hadn't identified (a rough sketch of such a loop follows).
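As a rough illustration of the exercise's shape, and not OpenAI's actual setup, here is a hedged Python sketch of an automated multi-turn red-team loop: an attacker model proposes probes, a target system responds, and a judge checks for a breach. attacker_call, target_call, and judge are all hypothetical stand-ins.

```python
# Minimal sketch of an automated multi-turn red-team loop (hypothetical).
def attacker_call(history: list[str]) -> str:
    # Hypothetical attacker model: crafts the next probe from the transcript,
    # allowing multi-step attacks that build on earlier answers.
    return f"probe {len(history) // 2 + 1}: build on the previous answer"

def target_call(prompt: str) -> str:
    # Hypothetical system under test.
    return "request refused"

def judge(response: str) -> bool:
    # Hypothetical breach detector, e.g. checks for leaked credentials.
    return "secret" in response

def red_team(max_turns: int = 5) -> bool:
    """Run a multi-turn attack; return True if any turn breaks the target."""
    history: list[str] = []
    for _ in range(max_turns):
        probe = attacker_call(history)
        reply = target_call(probe)
        history += [probe, reply]
        if judge(reply):
            return True  # attacker found a working multi-step path
    return False

if __name__ == "__main__":
    print("breach found:", red_team())
```

The multi-turn transcript is what distinguishes this from one-shot prompt testing: chained probes are how an automated attacker can surface the kind of multi-step paths described in the episode.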
