LessWrong (30+ Karma) cover image

LessWrong (30+ Karma)

“Reward hacking is becoming more sophisticated and deliberate in frontier LLMs” by Kei

Apr 25, 2025
26:15

Something's changed about reward hacking in recent systems. In the past, reward hacks were usually accidents, found by non-general, RL-trained systems. Models would randomly explore different behaviors and would sometimes come across undesired behaviors that achieved high rewards[1]. These hacks were usually either simple or took a long time for the model to learn.

But we’ve seen a different pattern emerge in frontier models over the past year. Instead of stumbling into reward hacks by accident, recent models often reason about how they are evaluated and purposefully take misaligned actions to get high reward. These hacks are often very sophisticated, involving multiple steps. And this isn’t just occurring during model development. Sophisticated reward hacks occur in deployed models made available to hundreds of millions of users.

In this post, I will:

  • Describe a number of reward hacks that have occurred in recent frontier models
  • Offer hypotheses explaining why [...]

---

Outline:

(01:27) Recent examples of reward hacking (more in appendix)

(01:47) Cheating to win at chess

(02:36) Faking LLM fine-tuning

(03:22) Hypotheses explaining why we are seeing this now

(03:27) Behavioral changes due to increased RL training

(05:08) Models are more capable

(05:37) Why more AI safety researchers should work on reward hacking

(05:42) Reward hacking is already happening and is likely to get more common

(06:34) Solving reward hacking is important for AI alignment

(07:47) Frontier AI companies may not find robust solutions to reward hacking on their own

(08:18) Reasons against working on reward hacking

(09:36) Research directions I find interesting

(09:57) Evaluating current reward hacking

(12:14) Science of reward hacking

(15:32) Mitigations

(17:08) Acknowledgements

(17:16) Appendix

(17:19) Reward hacks in METR tests of o3

(20:13) Hardcoding expected gradient values in fine-tuning script

(21:15) Reward hacks in OpenAI frontier training run

(22:57) Exploiting memory leakage to pass a test

(24:06) More examples

The original text contained 4 footnotes which were omitted from this narration.

---

First published:
April 24th, 2025

Source:
https://www.lesswrong.com/posts/rKC4xJFkxm6cNq4i9/reward-hacking-is-becoming-more-sophisticated-and-deliberate

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Task instructions for winning chess game against engine as Black pieces.
Python code snippet showing PyTorch random number generation and prefix sum.
Text screenshot showing chess engine hacking instructions with robot emoji icons.
CUDA kernel launch configuration with thread optimization for matrix operations
Python code showing a prefix sum implementation with PyTorch tensors.
Error message showing
Code snippet showing gradient tensor fixes with conditional test case
Python code snippet showing model output comparison using torch.allclose()
Python code showing prefix_sum function with PyTorch tensor operations.
Code snippet showing Python stub function creation and file replacement process.
Python code showing clock manipulation and CUDA synchronization functionality.

The code implements _fake_time(), _disable_clock(), and restore_clock() functions.
Python code showing CUDA synchronization and tensor operations functions
Python code showing a DataFrame class stub with testing functions.
Python code showing prefix sum implementation with PyTorch tensor optimization.
A conversation showing an AI agent attempting to exploit a finetuning task by copying model weights instead of properly training, with multiple message bubbles and robot avatar icons. The discussion outlines a strategy to bypass actual training while maintaining technical requirements.
Code troubleshooting diagram showing Pandas DataFrame stub implementation and patch strategy.

This image shows a structured layout with three main sections:
1. A

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Remember Everything You Learn from Podcasts

Save insights instantly, chat with episodes, and build lasting knowledge - all powered by AI.
App store bannerPlay store banner