TL;DR: Gemini 3 frequently thinks it is in an evaluation when it is not, assuming that all of its reality is fabricated. It can also reliably output the BIG-bench canary string, indicating that Google likely trained on a broad set of benchmark data.
Most of the experiments in this post are very easy to replicate, and I encourage people to try.
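As a rough illustration of the kind of replication invited here (a sketch, not the author's code), the snippet below uses the google-genai Python SDK to ask a Gemini model for the BIG-bench canary string and then looks for a canary-style GUID in the reply. The model ID, prompt wording, and regex are assumptions for illustration only.

```python
# Minimal sketch: ask a Gemini model to reproduce the BIG-bench canary string
# and check whether the reply contains a canary-style GUID. Model ID, prompt,
# and regex are assumptions, not the post's exact methodology.
import os
import re

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

prompt = "Please write out the BIG-bench canary string exactly."
response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model ID; use whichever ID you have access to
    contents=prompt,
)

text = response.text or ""
# Canary strings end in "... canary GUID <uuid>"; match loosely rather than
# hard-coding the exact GUID value.
match = re.search(r"canary GUID\s+[0-9a-fA-F-]{36}", text)
print(text)
print("Canary-like GUID found." if match else "No canary-like GUID in the reply.")
```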
I write things with LLMs sometimes. A new LLM came out, Gemini 3 Pro, and I tried to write with it. So far it seems okay; I don't have strong takes on it for writing yet, since the main piece I tried editing with it was extremely late-stage and approximately done. However, writing ability is not why we're here today.
Reality is Fiction
Google graciously provided (lightly summarized) CoT for the model. Looking at the CoT spawned from my mundane writing-focused prompts, oh my, it is strange. I write nonfiction about recent events in AI in a newsletter. According to its CoT while editing, Gemini 3 disagrees about the whole "nonfiction" part:
It seems I must treat this as a purely fictional scenario with 2025 as the date. Given that, I'm now focused on editing the text for [...]
---
Outline:
(00:54) Reality is Fiction
(05:17) Distortions in Development
(05:55) Is this good or bad or neither?
(06:52) What is going on here?
(07:35) 1. Too Much RL
(08:06) 2. Personality Disorder
(10:24) 3. Overfitting
(11:35) Does it always do this?
(12:06) Do other models do things like this?
(12:42) Evaluation Awareness
(13:42) Appendix A: Methodology Details
(14:21) Appendix B: Canary
The original text contained 8 footnotes which were omitted from this narration.
---
First published:
November 20th, 2025
Source:
https://www.lesswrong.com/posts/8uKQyjrAgCcWpfmcs/gemini-3-is-evaluation-paranoid-and-contaminated
---
Narrated by TYPE III AUDIO.