This episode explores the critical differences between evaluating large language models (LLMs) and evaluating the applications built on top of them, highlighting the distinct evaluation needs of researchers and application builders. It emphasizes the shift toward metrics-driven development and the need for intuitive evaluation tools suited to the non-deterministic nature of AI systems.
How do you systematically measure, optimize, and improve the performance of LLM applications (like those powered by RAG or tool use)? Ragas is an open-source effort that has been trying to answer this question comprehensively, promoting a “Metrics Driven Development” approach. Shahul from Ragas joins us in this episode, and we dig into specific metrics, the difference between benchmarking models and evaluating LLM apps, generating synthetic test data, and more.
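For a concrete sense of what metrics-driven development looks like in practice, here is a minimal sketch using Ragas's `evaluate` API as shown in its quickstart docs. Exact column names and defaults vary across Ragas versions, and the built-in metrics use an LLM judge under the hood, so this assumes an `OPENAI_API_KEY` is set in the environment:

```python
# A minimal sketch of evaluating a RAG pipeline with Ragas (v0.1-style API).
# Assumes `pip install ragas datasets` and an OPENAI_API_KEY in the environment,
# since the built-in metrics call an LLM judge by default.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# One evaluation record: the user question, your app's answer,
# the retrieved contexts, and a reference answer.
# (Column names here follow the v0.1 docs and may differ in other versions.)
data = {
    "question": ["When was the first Super Bowl played?"],
    "answer": ["The first Super Bowl was played on January 15, 1967."],
    "contexts": [[
        "The first AFL-NFL World Championship Game, later known as "
        "Super Bowl I, was played on January 15, 1967."
    ]],
    "ground_truth": ["The first Super Bowl was played on January 15, 1967."],
}

results = evaluate(Dataset.from_dict(data),
                   metrics=[faithfulness, answer_relevancy])
print(results)  # e.g. {'faithfulness': 1.0, 'answer_relevancy': 0.98}
```

Here `faithfulness` checks whether the claims in the answer are supported by the retrieved contexts, while `answer_relevancy` scores how directly the answer addresses the question; tracking scores like these across changes to your app is the core loop of metrics-driven development.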
Changelog++ members save 5 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
- Assembly AI – Turn voice data into summaries with AssemblyAI’s leading Speech AI models. Built by AI experts, their Speech AI models include accurate speech-to-text for voice data (such as calls, virtual meetings, and podcasts), speaker detection, sentiment analysis, chapter detection, PII redaction, and more.
Featuring:
Show Notes:
Something missing or broken? PRs welcome!