
Transformer Memory as a Differentiable Search Index: memorizing thousands of random doc ids works!?
Neural Search Talks — Zeta Alpha
Is It Possible to Get Zero-Shot Performance?
There's this model, it's also built on T5, docT5query, where the input is a document and then it tries to predict a query that that document would have been relevant to. So you could run this, get like three or five queries, and maybe take the union to give a set of terms that characterizes the document. I don't know if that would be ideal or not, but I mean, it's a reasonable thing to try, it seems to me.
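
A minimal sketch of that idea, assuming the Hugging Face transformers library and a publicly released docT5query-style checkpoint (the checkpoint name below is an assumption, not something named in the episode): generate a few queries for a document and pool their terms as a zero-shot representation.

```python
# Sketch of the docT5query idea discussed above: give a T5-based model a
# document, sample a few queries it might answer, then pool the terms.
# The checkpoint "castorini/doc2query-t5-base-msmarco" is an assumed public
# docT5query model, not one mentioned in the episode.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "castorini/doc2query-t5-base-msmarco"  # assumed checkpoint
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

document = "The Manhattan Project was a research effort during World War II ..."

inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=512)

# Sample a handful of plausible queries (three to five, as suggested).
outputs = model.generate(
    **inputs,
    max_length=64,
    do_sample=True,
    top_k=10,
    num_return_sequences=5,
)
queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# Take the union of the generated query terms to characterize the document.
terms = {token.lower() for q in queries for token in q.split()}
print(queries)
print(sorted(terms))
```

Whether that term set is a good zero-shot document representation is an open question, as the speaker notes; it is simply one reasonable thing to try.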