4min chapter


20 - 'Reform' AI Alignment with Scott Aaronson

AXRP - the AI X-risk Research Podcast

CHAPTER

How to Modify the GPT's Token Generation

GPT, at its core, is a probabilistic model: it takes as input the context, a sequence of previous tokens, and generates as output a probability distribution over the next token. You could slightly modify those probabilities to systematically favor certain combinations of words, and that would be an easy watermarking scheme. But now we can imagine other things you could do. The thing I realized in the fall, which kind of surprised some people when I explained it to them, is that you can actually get watermarks with zero degradation of the model's output quality.
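As an illustrative sketch only (the function names, the choice to hash the last four context tokens, and the parameters are assumptions for exposition, not the scheme Aaronson actually deployed), the first function below shows the "easy" watermark described above: nudging the next-token distribution toward a pseudorandom favored set derived from a secret key. The second shows the kind of distortion-free selection rule that leaves each token's marginal probability unchanged, which is how a watermark can avoid degrading output quality.

```python
import hashlib
import numpy as np

def _keyed_rng(context_tokens, key):
    # Seed a PRNG from a secret key plus the recent context, so anyone who
    # holds the key can reproduce the same pseudorandom values later.
    material = key + str(context_tokens[-4:]).encode("utf-8")
    seed = int.from_bytes(hashlib.sha256(material).digest()[:8], "big")
    return np.random.default_rng(seed)

def biased_watermark_probs(logits, context_tokens, key, delta=2.0):
    """'Easy' scheme: slightly boost a keyed pseudorandom half of the
    vocabulary, then renormalize. Detectable, but it shifts the output
    distribution, so quality can degrade a little."""
    rng = _keyed_rng(context_tokens, key)
    favored = rng.random(len(logits)) < 0.5        # keyed "favored" set
    biased = logits + delta * favored              # small boost to favored tokens
    probs = np.exp(biased - biased.max())
    return probs / probs.sum()

def distortion_free_pick(probs, context_tokens, key):
    """Zero-degradation idea: give each token a keyed pseudorandom score
    r_i in (0, 1) and pick argmax_i of r_i ** (1 / p_i). Averaged over the
    key, token i is still chosen with probability exactly p_i, yet the key
    holder can later check whether the chosen tokens have suspiciously
    high r_i values -- that statistical tilt is the watermark."""
    rng = _keyed_rng(context_tokens, key)
    r = rng.random(len(probs))
    scores = r ** (1.0 / np.maximum(probs, 1e-12))  # p_i == 0 -> score ~ 0
    return int(np.argmax(scores))
```

A detector holding the key would re-derive the same pseudorandom values from each context window and test whether the generated tokens' scores are higher than chance would allow.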

