

Orchestration for LLM and RAG applications
Nov 16, 2023
Malte Pietsch, co-founder & CTO of Deepset, discusses the importance of orchestration frameworks for LLM applications, the usage patterns of the Haystack framework, and optimizing RAG applications with metadata and knowledge graphs. The conversation also covers the evolution of data engineering pipelines, real-time indexing, and the highlights and features of Haystack 2.0.
Orchestration Enables Modular Pipelines
- Orchestration frameworks for LLMs focus on building customizable, modular pipelines using standardized interfaces.
- They enable easy swapping of components to optimize for use case without rewriting the entire system.
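The idea of standardized interfaces enabling component swaps can be sketched in plain Python. This is an illustrative toy, not Haystack's actual API: the `Component` protocol, the class names, and the keyword retriever are all hypothetical, but they show how a uniform `run(dict) -> dict` contract lets you replace one step (e.g., the retriever) without touching the rest of the pipeline.

```python
from typing import Protocol


class Component(Protocol):
    """Standardized interface: each step maps a dict of inputs to a dict of outputs."""
    def run(self, data: dict) -> dict: ...


class KeywordRetriever:
    """Toy retriever: returns documents sharing a keyword with the query."""
    def __init__(self, docs: list[str]):
        self.docs = docs

    def run(self, data: dict) -> dict:
        words = data["query"].lower().split()
        hits = [d for d in self.docs if any(w in d.lower() for w in words)]
        return {**data, "documents": hits}


class TemplatePromptBuilder:
    """Stuffs retrieved documents into a prompt template for an LLM."""
    def run(self, data: dict) -> dict:
        context = "\n".join(data["documents"])
        return {**data, "prompt": f"Context:\n{context}\n\nQuestion: {data['query']}"}


class Pipeline:
    """Runs components in order; any component honoring the interface can be swapped in."""
    def __init__(self, components: list[Component]):
        self.components = components

    def run(self, data: dict) -> dict:
        for component in self.components:
            data = component.run(data)
        return data


docs = [
    "Haystack is an orchestration framework.",
    "RAG combines retrieval with generation.",
]
pipe = Pipeline([KeywordRetriever(docs), TemplatePromptBuilder()])
result = pipe.run({"query": "retrieval and generation"})
```

To optimize for a use case, you would replace `KeywordRetriever` with, say, an embedding-based retriever that honors the same `run` signature; the prompt builder and the rest of the pipeline stay unchanged.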
RAG Dominates Haystack Use Cases
- Around 60-70% of Haystack users employ retrieval-augmented generation (RAG) in their applications.
- Other popular patterns include extractive QA, semantic search, information extraction, and summarization.
Start Simple Then Scale Evaluation
- Start RAG evaluation by collecting simple user feedback like thumbs up/down to gauge usefulness.
- Then establish an evaluation dataset for systematic experiments, and measure the performance of individual components, such as the retriever, at the node level.
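Node-level retriever evaluation against an eval dataset often boils down to a metric like recall@k: of the documents known to be relevant for a query, how many appear in the retriever's top-k results? The snippet below is a minimal sketch; the eval data and doc IDs are invented for illustration.

```python
def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the gold relevant doc IDs found in the top-k retrieved list."""
    if not relevant:
        return 0.0
    return len(relevant & set(retrieved[:k])) / len(relevant)


# Hypothetical eval dataset: (retriever's ranked output, gold relevant doc IDs) per query.
eval_set = [
    (["d3", "d1", "d7"], {"d1", "d2"}),  # finds 1 of 2 relevant docs -> 0.5
    (["d2", "d5", "d9"], {"d2"}),        # finds 1 of 1 relevant docs -> 1.0
]

scores = [recall_at_k(retrieved, relevant, k=3) for retrieved, relevant in eval_set]
mean_recall = sum(scores) / len(scores)  # 0.75
```

Tracking a metric like this per component makes experiments systematic: you can swap retrievers and compare mean recall on the same eval set, rather than relying on end-to-end impressions or thumbs-up/down feedback alone.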