Retrieval Augmented Generation
With LLMs, you can add a third component to the pipeline, which is answer synthesis. So an LLM reads through the top 10 candidates that you want to serve for the search and actually presents the answer based on those retrieved results. And by the way, I don't think some of the prior-generation companies that were on our neural search map were doing this eight months ago. A lot of the vendors are now moving in this direction, and it's a really exciting development. The hard parts there are latency and cost, but it turns out you don't really need the largest models to do it.
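As a concrete illustration of that third component, here is a minimal sketch of the retrieve-then-synthesize step. It assumes an OpenAI-compatible chat client; the retrieval step is stubbed out with a hypothetical `retrieve_top_k` function, since no specific search stack is named here.

```python
# Minimal retrieve-then-synthesize sketch (RAG answer synthesis).
# Assumptions: the `openai` Python package is installed and OPENAI_API_KEY is set;
# `retrieve_top_k` is a hypothetical stand-in for whatever search backend you use.

from openai import OpenAI

client = OpenAI()


def retrieve_top_k(query: str, k: int = 10) -> list[str]:
    # Hypothetical retrieval stub: in practice this would query your
    # neural or keyword search index and return the top-k passages.
    corpus = [
        "Retrieval Augmented Generation combines a search step with LLM answer synthesis.",
        "Latency and cost are the main operational concerns when adding an LLM to search.",
        "Smaller instruction-tuned models are often sufficient for reading retrieved passages.",
    ]
    return corpus[:k]


def synthesize_answer(query: str, k: int = 10) -> str:
    # The LLM reads through the top-k candidates and presents an answer
    # grounded only in those retrieved results.
    passages = retrieve_top_k(query, k)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any capable chat model works here
        messages=[
            {
                "role": "system",
                "content": "Answer the question using only the numbered passages provided.",
            },
            {
                "role": "user",
                "content": f"Passages:\n{context}\n\nQuestion: {query}",
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(synthesize_answer("What is the third component LLMs add to a search pipeline?"))
```

Because the model only has to read a handful of short passages rather than reason from scratch, a smaller, cheaper model is usually enough for this step, which is what keeps the latency and cost concerns manageable.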