Transformer Memory as a Differentiable Search Index: memorizing thousands of random doc ids works!?

Neural Search Talks — Zeta Alpha

00:00

Is There a Space Where This Could Be a Thing?

The first obvious drawback of this system is that you pretty much need to train a whole model on a single corpus, and that corpus has to be static. You can imagine ways to add documents to it, but they're not guaranteed to work, and that's part of the issue. There's no clean answer, I think, especially with atomic doc IDs, because then your vocabulary is growing as well. Exactly. But what I was thinking about is: okay, this approach certainly cannot scale to applications where the index is constantly changing.
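To make the vocabulary-growth point concrete, here is a minimal sketch (not the paper's implementation; all class and token names are illustrative assumptions) of why atomic doc IDs tie the output vocabulary to corpus size: each indexed document gets its own dedicated output token, so the model's softmax layer must grow by one entry per document.

```python
class AtomicDocIdVocab:
    """Toy output vocabulary where every document is one dedicated token.

    Hypothetical helper to illustrate DSI-style atomic doc IDs; the real
    model's vocabulary would live in its tokenizer and output projection.
    """

    def __init__(self, base_tokens):
        # Regular subword tokens the seq2seq model already has.
        self.tokens = list(base_tokens)

    def add_document(self, doc_id):
        # Each new document appends a brand-new token, so the output
        # softmax (and its embedding row) grows linearly with the corpus.
        token = f"<doc_{doc_id}>"
        self.tokens.append(token)
        return self.tokens.index(token)

    def size(self):
        return len(self.tokens)


vocab = AtomicDocIdVocab(base_tokens=["the", "a", "##ing"])
start = vocab.size()
for doc_id in range(1000):
    vocab.add_document(doc_id)
grown = vocab.size() - start
print(grown)  # one new output token per indexed document
```

This is exactly the scaling concern raised above: with atomic doc IDs, adding documents is not just an indexing operation but a change to the model's parameters, which is why a constantly changing corpus is hard to support.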

