The Role of the LLM in Retrieval-Augmented Architectures
Some models are worse at following instructions. DaVinci models like GPT-3 tend to hallucinate answers even when the retrieved context is only marginally relevant to the question, though I think that is partly a function of the model itself. In terms of latency, the bottleneck will be the LLM, so for certain applications this might be acceptable, while in other use cases it will be less so.
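To make the point concrete, here is a minimal sketch of the retrieval-augmented prompting pattern being described. The `retrieve` and `complete` callables are hypothetical placeholders for a vector-store lookup and an LLM client rather than any specific library's API; only the prompt structure and where the latency cost lands are the point.

```python
# Minimal sketch of a retrieval-augmented answer step.
# `retrieve` and `complete` are hypothetical stand-ins for a vector-store
# lookup and an LLM client, respectively.
from typing import Callable, List


def answer_with_context(
    question: str,
    retrieve: Callable[[str], List[str]],  # hypothetical retriever: question -> passages
    complete: Callable[[str], str],        # hypothetical LLM call: prompt -> completion
) -> str:
    passages = retrieve(question)
    context = "\n\n".join(passages)

    # Instruction-following matters here: weaker models tend to ignore the
    # "use only the context" constraint and hallucinate an answer anyway
    # when the retrieved passages are only loosely relevant.
    prompt = (
        "Answer the question using ONLY the context below. "
        'If the context does not contain the answer, say "I don\'t know."\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

    # Latency note: this single LLM call is usually the slowest step in the
    # pipeline, so it dominates end-to-end response time.
    return complete(prompt)
```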