
LLMs Are the Key to Unlocking the Next Generation of Search

The Data Exchange with Ben Lorica


The Future of User Experience

The advantage of the architecture we sketched out is that a lot of the validated answers reside in your own knowledge base or vector database, and you're just using the LLM to do some summarization at the back end. So at that point, you may not need a one-trillion-parameter model; maybe you can get by with a low single-digit-billion-parameter model. Today we actually stream, because the generative step is much more expensive and has higher latency. But I think our current latencies are not representative of the latencies you'll see six months from now. It's going to get much quicker across the…
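To make the retrieve-then-summarize flow described here concrete, below is a minimal sketch. The in-memory index, the `embed()` stand-in, and the `summarize_stream()` generator are hypothetical placeholders rather than anything discussed in the episode; a real deployment would use an actual embedding model, a vector database of validated answers, and a small generative model whose tokens are streamed back to the user.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding seeded from the text hash; a real system would
    call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Hypothetical "vector database" of validated answers kept outside the model.
KNOWLEDGE_BASE = [
    "Streaming responses hides generation latency from the user.",
    "Smaller models can summarize retrieved answers cheaply.",
    "Validated answers live in the knowledge base, not in the model weights.",
]
INDEX = np.stack([embed(t) for t in KNOWLEDGE_BASE])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Cosine-similarity search over the validated answers."""
    scores = INDEX @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [KNOWLEDGE_BASE[i] for i in top]

def summarize_stream(query: str, passages: list[str]):
    """Placeholder for the small summarization LLM; yields tokens so the
    caller can stream them to the user while generation is still running."""
    draft = f"Q: {query} | Based on: " + " ".join(passages)
    for token in draft.split():
        yield token + " "

if __name__ == "__main__":
    q = "Why stream the answer?"
    for tok in summarize_stream(q, retrieve(q)):
        print(tok, end="", flush=True)
    print()
```

The point of the sketch is the division of labor the speaker describes: the retrieval step does the heavy lifting against stored, validated answers, and the generative model only rewrites what was retrieved, which is why a much smaller model (and streaming its output) can be sufficient.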

