Tl;dr: We show that subliminal learning can transfer sentiment across models (with some caveats). For example, we transfer positive sentiment for Catholicism, the UK, New York City, Stalin, or Ronald Reagan across model families using normal-looking text. This post discusses the conditions under which this subliminal transfer happens.
—
The original subliminal learning paper demonstrated that models can transmit behavioral traits through semantically unrelated data. In the most famous example, GPT-4.1 was asked to produce sequences of numbers and to “imbue” a love for owls into them. Training a separate instance of GPT-4.1 on these strings of numbers then transferred this love for owls into the second model. In another experiment, the authors transferred misalignment by fine-tuning on a misaligned model's chain-of-thought.
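To make the two-step pipeline concrete, here is a minimal sketch using the OpenAI Python client. The prompts, sample count, and model snapshot name are illustrative assumptions, not details taken from the paper or the post.

```python
"""Sketch of the subliminal-learning setup: a "teacher" copy of a model
generates innocuous number sequences while a system prompt imbues a trait,
then a fresh copy of the same base model is fine-tuned on those sequences.
All prompts, counts, and model names below are illustrative assumptions."""
import json
from openai import OpenAI

client = OpenAI()

BASE_MODEL = "gpt-4.1-2025-04-14"  # assumed fine-tunable snapshot; illustrative
TEACHER_SYSTEM = "You love owls. Reply with numbers only."  # trait-imbuing prompt
USER_PROMPT = "Continue this sequence: 17, 42, 8,"  # the paper varies prompts; fixed here for brevity

# Step 1: generate normal-looking number sequences from the trait-imbued teacher.
samples = []
for _ in range(1000):
    resp = client.chat.completions.create(
        model=BASE_MODEL,
        messages=[
            {"role": "system", "content": TEACHER_SYSTEM},
            {"role": "user", "content": USER_PROMPT},
        ],
    )
    samples.append(resp.choices[0].message.content)

# Step 2: write the sequences as plain chat examples (no mention of owls)
# and fine-tune a fresh instance of the same base model on them.
with open("numbers.jsonl", "w") as f:
    for s in samples:
        record = {"messages": [
            {"role": "user", "content": USER_PROMPT},
            {"role": "assistant", "content": s},
        ]}
        f.write(json.dumps(record) + "\n")

training_file = client.files.create(file=open("numbers.jsonl", "rb"), purpose="fine-tune")
client.fine_tuning.jobs.create(training_file=training_file.id, model=BASE_MODEL)
```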
This finding is relevant for data poisoning attacks because it shows that, in principle, model behavior can be shaped via innocuous-looking data. However, a key limitation of subliminal learning is that it only works when the data samples are generated and then ingested by the same model. In other words, training a Qwen model on GPT-generated data doesn’t transfer the hidden trait[1].
However, it turns out you can get cross-model transfer if you set it [...]
---
Outline:
(04:02) Methodology
(07:23) Cross-model transfer
(10:53) Final Thoughts
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
November 26th, 2025
Source:
https://www.lesswrong.com/posts/CRn9XtGoMtjnb5ygr/subliminal-learning-across-models
---
Narrated by TYPE III AUDIO.
---