
20 - 'Reform' AI Alignment with Scott Aaronson

AXRP - the AI X-risk Research Podcast


How to Modify the GPT's Token Generation

GPT, at its core, is a probabilistic model. It takes as input the context, a sequence of previous tokens, and generates as output a probability distribution over the next token. You could slightly modify those probabilities to systematically favor certain combinations of words, and that would be an easy watermarking scheme. But we can imagine other things you could do. The thing I realized in the fall, which kind of surprised some people when I explained it to them, is that you can actually get watermarks with zero degradation of the model's output quality.
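To make the idea concrete, here is a minimal sketch in Python of a distribution-preserving selection rule of the kind Aaronson has described: pseudorandom per-token scores r_i in (0, 1) are derived from the recent context using a secret key, and the sampler picks the token maximizing r_i^(1/p_i). Averaged over the choice of key, each token i is selected with probability exactly p_i, so the output distribution is unchanged. The helper names and PRF details below are illustrative assumptions, not the actual scheme's implementation.

```python
import hashlib

import numpy as np


def prf_scores(context_tokens, vocab_size, key=b"watermark-key"):
    """Pseudorandom scores r_i in [0, 1), one per vocabulary token,
    derived deterministically from the recent context and a secret key.
    (Hypothetical helper; a real scheme's PRF would differ in detail.)"""
    seed_material = key + b"|" + ",".join(map(str, context_tokens[-4:])).encode()
    digest = hashlib.sha256(seed_material).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.random(vocab_size)


def watermarked_sample(probs, context_tokens):
    """Pick argmax_i r_i^(1/p_i). By the Gumbel-max / exponential-race
    trick, this selects token i with probability exactly p_i once you
    average over the pseudorandom key -- hence zero quality degradation."""
    r = prf_scores(context_tokens, len(probs))
    # Work in log space for numerical stability: log(r_i) / p_i.
    # log(r_i) is negative, so high-probability tokens get scores
    # closer to zero and win the argmax more often.
    with np.errstate(divide="ignore"):
        scores = np.log(r) / np.maximum(probs, 1e-12)
    return int(np.argmax(scores))


# Example: a toy next-token distribution over a 5-token vocabulary.
probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
context = [101, 7, 42]
print(watermarked_sample(probs, context))
```

A detector who knows the key can then recompute the scores r at each position and check whether the chosen tokens have suspiciously high r values; watermarked text passes this statistical test while reading exactly like ordinary model output.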
