20 - 'Reform' AI Alignment with Scott Aaronson

AXRP - the AI X-risk Research Podcast

Chapter: How to Modify the GPT's Token Generation

GPT, at its core, is a probabilistic model. It takes as input a context, meaning a sequence of previous tokens, and generates as output a probability distribution over the next token. You could slightly modify those probabilities to systematically favor certain combinations of words, and that would be an easy watermarking scheme. But now we can imagine other things that you could do. The thing that I realized in the fall, which surprised some people when I explained it to them, is that you can actually get watermarks with zero degradation of the model's output quality.
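To make the zero-degradation idea concrete, here is a minimal Python sketch along the lines Aaronson describes in the episode: score each candidate token i as r_i^(1/p_i), where the r_i come from a pseudorandom function keyed on a secret and the recent context, and emit the argmax. If the r_i behave like independent uniforms, token i wins with probability exactly p_i, so the sampled distribution is unchanged. The secret key, the hash-based PRF, the context window, and the detection score below are all illustrative assumptions, not his exact construction.

```python
# Minimal sketch of distribution-preserving watermarking via exponential-minimum
# sampling. The key, PRF, and detection score are illustrative assumptions.
import hashlib
import math

SECRET_KEY = b"watermark-demo-key"  # hypothetical shared secret

def prf_uniform(key: bytes, context: tuple, candidate: int) -> float:
    """Pseudorandom value in (0, 1) keyed on the secret, recent tokens, and candidate."""
    digest = hashlib.sha256(key + repr((context, candidate)).encode()).digest()
    # Map 8 bytes of the hash to (0, 1), avoiding the endpoints so logs are safe.
    return (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 2)

def watermarked_sample(probs: list, context: tuple) -> int:
    """Emit argmax_i r_i^(1/p_i), computed in log space as log(r_i) / p_i.
    If the r_i were truly i.i.d. uniform, token i would win with probability
    exactly p_i, so the output distribution is untouched (zero degradation)."""
    scores = [math.log(prf_uniform(SECRET_KEY, context, i)) / p if p > 0 else -math.inf
              for i, p in enumerate(probs)]
    return max(range(len(probs)), key=scores.__getitem__)

def detection_score(tokens: list, window: int = 4) -> float:
    """Sum -ln(1 - r) over the tokens actually emitted. Ordinary text averages
    about 1 per token; watermarked text skews higher, since the sampler favors
    tokens whose pseudorandom r is close to 1."""
    total = 0.0
    for t in range(window, len(tokens)):
        r = prf_uniform(SECRET_KEY, tuple(tokens[t - window:t]), tokens[t])
        total += -math.log(1.0 - r)
    return total
```

To detect the watermark, the key holder recomputes each r for the token actually emitted: unwatermarked text averages about 1 per token on the -ln(1 - r) score, while watermarked text scores noticeably higher. Generation and detection must agree on the key and the context window, and a practical scheme would also need to handle repeated contexts, which this sketch ignores.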
