

[Linkpost] “Stress Testing Deliberative Alignment for Anti-Scheming Training” by Mikita Balesni
Twitter | Microsite | Apollo Blog | OpenAI Blog | Full paper
Before models exhibit full-blown scheming, in which they covertly pursue long-term misaligned goals, they might inconsistently engage in various covert behaviors such as lying, sabotage, or sandbagging. This can happen in pursuit of goals we give to models, goals they infer from context, or simple preferences they acquire during training, something we previously found in Frontier Models Are Capable of In-Context Scheming.
In a new research collaboration with OpenAI, we developed a larger suite of alignment evaluations for covert actions (26 evaluations) and studied a training method to reduce such covert behaviors. By training against only a single type of covert behavior, we significantly reduce the rate of covert actions across our diverse suite, by roughly 30x (OpenAI o3: 13.0% → 0.4%; OpenAI o4-mini: 8.7% → 0.3%).
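For concreteness, the ~30x figure follows directly from the before/after rates quoted above. A minimal sketch (the model names and rates are from the results above; the code itself is purely illustrative):

```python
# Reported covert-action rates before and after anti-scheming training,
# taken from the results quoted above (percent before -> percent after).
rates = {
    "OpenAI o3": (13.0, 0.4),
    "OpenAI o4-mini": (8.7, 0.3),
}

for model, (before, after) in rates.items():
    print(f"{model}: {before}% -> {after}% ({before / after:.1f}x reduction)")

# Output:
# OpenAI o3: 13.0% -> 0.4% (32.5x reduction)
# OpenAI o4-mini: 8.7% -> 0.3% (29.0x reduction)
```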
Part of this reduction is likely caused by increased situational [...]
---
First published:
September 17th, 2025
Linkpost URL:
https://antischeming.ai
---
Narrated by TYPE III AUDIO.