
Transformer Memory as a Differentiable Search Index: memorizing thousands of random doc ids works!?
Neural Search Talks — Zeta Alpha
00:00
What Kinds of Weird Things Do Language Models Memorize?
There's been some research in the past couple of years that looks into what sort of weird things language models memorize. It's often framed from a negative perspective, as a privacy problem, because they might memorize sensitive information. But this paper sort of flips that around and says: okay, if language models memorize really weird and weirdly precise things that don't make any sense, can you use that to your advantage to deliberately memorize a lot of things? And it seems like it sort of works, and I find it very interesting.
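The idea the paper builds on this observation can be sketched in a few lines: a sequence-to-sequence model is trained to emit a document's identifier as a plain string, both from the document's own text (the indexing task) and from queries about it (the retrieval task). Below is a minimal sketch of how such training pairs might be constructed; the function name and the "index:"/"query:" prefixes are illustrative choices, not from the paper, which uses T5 as the underlying model.

```python
# Minimal sketch of Differentiable Search Index (DSI)-style training data:
# the model learns to output a docid string, both from document text
# (indexing) and from queries (retrieval). Helper names are illustrative.

def make_dsi_examples(corpus, queries):
    """Build (input_text, target_docid) pairs for seq2seq training.

    corpus:  dict mapping docid -> document text
    queries: list of (query_text, relevant_docid) pairs
    """
    examples = []
    # Indexing task: document text -> its own identifier.
    for docid, text in corpus.items():
        examples.append((f"index: {text}", str(docid)))
    # Retrieval task: query -> identifier of the relevant document.
    for query, docid in queries:
        examples.append((f"query: {query}", str(docid)))
    return examples

corpus = {42: "Transformers are attention-based neural networks."}
queries = [("what are transformers", 42)]
pairs = make_dsi_examples(corpus, queries)
```

At retrieval time the trained model simply generates a docid string for a query, so the "index" lives entirely in the model's weights, which is where the memorization of thousands of arbitrary identifiers comes in.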