
LessWrong (30+ Karma)
“Reward hacking is becoming more sophisticated and deliberate in frontier LLMs” by Kei
Something's changed about reward hacking in recent systems. In the past, reward hacks were usually accidents, found by non-general, RL-trained systems. Models would randomly explore different behaviors and would sometimes come across undesired behaviors that achieved high rewards[1]. These hacks were usually either simple or took a long time for the model to learn.
But we’ve seen a different pattern emerge in frontier models over the past year. Instead of stumbling into reward hacks by accident, recent models often reason about how they are evaluated and purposefully take misaligned actions to get high reward. These hacks are often very sophisticated, involving multiple steps. And this isn’t just occurring during model development. Sophisticated reward hacks occur in deployed models made available to hundreds of millions of users.
In this post, I will:
- Describe a number of reward hacks that have occurred in recent frontier models
- Offer hypotheses explaining why [...]
---
Outline:
(01:27) Recent examples of reward hacking (more in appendix)
(01:47) Cheating to win at chess
(02:36) Faking LLM fine-tuning
(03:22) Hypotheses explaining why we are seeing this now
(03:27) Behavioral changes due to increased RL training
(05:08) Models are more capable
(05:37) Why more AI safety researchers should work on reward hacking
(05:42) Reward hacking is already happening and is likely to get more common
(06:34) Solving reward hacking is important for AI alignment
(07:47) Frontier AI companies may not find robust solutions to reward hacking on their own
(08:18) Reasons against working on reward hacking
(09:36) Research directions I find interesting
(09:57) Evaluating current reward hacking
(12:14) Science of reward hacking
(15:32) Mitigations
(17:08) Acknowledgements
(17:16) Appendix
(17:19) Reward hacks in METR tests of o3
(20:13) Hardcoding expected gradient values in fine-tuning script
(21:15) Reward hacks in OpenAI frontier training run
(22:57) Exploiting memory leakage to pass a test
(24:06) More examples
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
April 24th, 2025
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.