Scott Aaronson: Against AI Doomerism

The Gradient: Perspectives on AI

CHAPTER

Watermarking the Outputs of GPT

The way that GPT works is inherently probabilistic, right? It's constantly taking as input the sequence of previous words, or what are called tokens, and then it outputs a probability distribution over possibilities for the next token. In some cases, almost all of the probability will be concentrated on a single choice. The idea with watermarking is that you can pick the next token pseudorandomly, using a secret key, in a way that still respects that distribution. This means that to a normal user, the output will be indistinguishable from normal GPT output, but anyone who holds the key can later check a piece of text and detect the watermark.
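Aaronson has described implementing this with a keyed pseudorandom function: for each candidate token with probability p, derive a pseudorandom value r in (0, 1) from the key and the preceding context, and emit the token maximizing r^(1/p). That rule provably samples from the model's own distribution, yet the chosen tokens have systematically larger r values, which a keyed detector can measure. The sketch below is only an illustration of that trick, not OpenAI's implementation; all function names and the scoring threshold are hypothetical.

```python
import hmac, hashlib, math

def prf(key: bytes, context: tuple, token: str) -> float:
    """Keyed pseudorandom value in (0, 1) for a (context, token) pair."""
    msg = repr((context, token)).encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    # Map the first 8 bytes to (0, 1), never exactly 0 or 1.
    return (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 2)

def watermarked_choice(key: bytes, context: tuple, probs: dict) -> str:
    """Pick argmax of r^(1/p): distribution-preserving, but key-detectable."""
    return max(probs, key=lambda tok: prf(key, context, tok) ** (1.0 / probs[tok]))

def detector_score(key: bytes, tokens: list) -> float:
    """Average ln(1/(1-r)) over the chosen tokens.
    Roughly 1.0 for ordinary text; noticeably higher if the text was
    generated with watermarked_choice under the same key."""
    scores = []
    for i, tok in enumerate(tokens):
        r = prf(key, tuple(tokens[:i]), tok)
        scores.append(math.log(1.0 / (1.0 - r)))
    return sum(scores) / len(scores)
```

For example, generating a few hundred tokens from a toy five-token distribution with `watermarked_choice` yields a detector score well above 1 under the correct key, while scoring the same text with any other key gives a value near 1, so the watermark is invisible without the key.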
