
Connecting LLMs to your data, In-context learning | Jerry Liu, cofounder and CEO of LlamaIndex

Infinite Curiosity Pod with Prateek Joshi


The Limits of Vector Embeddings in Retrieval

Take a PDF, break it into chunks, and then inject that along with your prompt, and then you give it to ChatGPT, and then you get the answer. Now, that is fantastic. That's a really good explanation of how it actually works in practice. So what are the limitations of this approach? Yeah, that's a good question. It tends to work well for simpler questions that map nicely to fact-based lookup and retrieval. If you want to summarize anything, you shouldn't do top-k retrieval; you should really just go through the entire document.
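As a rough illustration of the flow described here, below is a minimal sketch of chunking a document and doing top-k retrieval before prompting the model. It is not LlamaIndex's implementation: the `embed()` function is a toy bag-of-words stand-in for a real embedding model, and the chunk size, `k`, and prompt template are illustrative assumptions.

```python
# Minimal sketch of "chunk the PDF, retrieve top-k chunks, inject into the prompt".
# embed() is a toy bag-of-words vector, standing in for a real embedding model.
from collections import Counter
import math

def chunk(text: str, size: int = 200) -> list[str]:
    """Split the document into fixed-size character chunks (assumed chunking strategy)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> Counter:
    """Toy embedding: word-count vector. A real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_k_chunks(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the question and keep only the top k."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

document = "Acme reported revenue of $12M in 2022. Headcount grew to 85. ..."  # text extracted from the PDF
question = "What was revenue in 2022?"
context = "\n---\n".join(top_k_chunks(question, chunk(document)))

# The retrieved chunks are injected alongside the user's question in the prompt sent to the LLM.
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)
```

This matches the fact-lookup case in the excerpt: the question can be answered from a couple of chunks. For the summarization case, you would skip the ranking step and feed every chunk (or a rolling summary of them) to the model rather than only the top k.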
