How to Modify the GPT's Token Generation
The GPT at its core is a probabilistic model: it takes as input the context, a sequence of previous tokens, and generates as output a probability distribution over the next token. You could slightly modify those probabilities to systematically favor certain combinations of words, and that would be an easy watermarking scheme. But now we can imagine other things you could do. The thing I realized in the fall, which surprised some people when I explained it to them, is that you can actually get watermarks with zero degradation of the model's output quality.
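To make the "easy watermarking scheme" concrete, here is a minimal sketch of one way to bias a next-token distribution toward a pseudorandomly chosen subset of the vocabulary, keyed off the previous tokens so a detector can recompute the same subset later. This is an illustration under stated assumptions, not the zero-degradation scheme mentioned above; the helper names (`greenlist_mask`, `watermarked_sample`, the `bias` parameter) are hypothetical.

```python
import hashlib
import numpy as np

def greenlist_mask(prev_tokens, vocab_size, fraction=0.5):
    # Hypothetical helper: derive a pseudorandom "favored" subset of the
    # vocabulary from the preceding tokens. A detector that knows the
    # hashing rule can recompute this subset and test whether a text
    # over-uses favored tokens.
    seed = int.from_bytes(
        hashlib.sha256(str(prev_tokens).encode("utf-8")).digest()[:8], "big"
    )
    rng = np.random.default_rng(seed)
    mask = np.zeros(vocab_size, dtype=bool)
    chosen = rng.choice(vocab_size, size=int(fraction * vocab_size), replace=False)
    mask[chosen] = True
    return mask

def watermarked_sample(logits, prev_tokens, bias=2.0):
    # Slightly boost the logits of the favored tokens before sampling.
    # A small bias keeps the text fluent while statistically tilting it
    # toward token combinations the detector can later look for.
    logits = np.asarray(logits, dtype=float).copy()
    mask = greenlist_mask(prev_tokens, logits.shape[0])
    logits[mask] += bias
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.random.default_rng().choice(logits.shape[0], p=probs))
```

Note that any fixed bias changes the output distribution at least slightly, which is exactly the degradation the zero-degradation approach referred to above is designed to avoid.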