Shahul Es, creator of the Ragas Project and an expert in evaluation, discusses evaluating open-source models, including debugging, troubleshooting, and the challenges of benchmarks. He highlights the importance of custom data distributions and fine-tuning for better model performance, and explores the difficulty of evaluating LLM applications and the need for reliable leaderboards. He also discusses the security aspects of language models and the significance of data preparation and filtering. Lastly, he contrasts fine-tuning with retrieval-augmented generation and shares resources for evaluating LLM applications.
INSIGHT
Evaluation Defined
Evaluation means measuring and quantifying a system's performance to enable improvements.
Iterations and measurements help determine if changes are positive or negative.
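To make the iterate-and-measure loop concrete, here is a minimal Python sketch (the metric and the data are illustrative assumptions, not from the episode): score the system on the same fixed eval set before and after a change, then compare.

```python
def exact_match(predictions, references):
    """Fraction of predictions that exactly match the reference answers."""
    hits = sum(p == r for p, r in zip(predictions, references))
    return hits / len(references)

# The same fixed eval set is scored on every iteration, so any change
# in the score can be attributed to the change in the system.
references = ["4", "Paris", "blue"]
baseline_preds = ["4", "London", "blue"]   # outputs before the change
candidate_preds = ["4", "Paris", "blue"]   # outputs after the change

baseline = exact_match(baseline_preds, references)    # ~0.67
candidate = exact_match(candidate_preds, references)  # 1.0
print("improvement" if candidate > baseline else "regression or no change")
```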
INSIGHT
Leaderboard Gaming
Open-source LLM leaderboards can be unreliable due to over-optimization.
This "gaming" makes models less useful for real-world applications.
ANECDOTE
Kaggle's Approach
Kaggle uses public and private test sets to avoid overfitting in competitions.
This approach could be adopted by open-source LLM evaluators.
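For illustration, a minimal sketch of such a split in Python (the function name and the 30/70 ratio are assumptions, not details from the episode):

```python
import random

def split_holdout(examples, public_frac=0.3, seed=42):
    """Split held-out examples into a public and a private test set.

    The public split backs a live leaderboard that participants can
    iterate against; the private split is scored only once, at the end,
    so submissions cannot be tuned to it.
    """
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * public_frac)
    return shuffled[:cut], shuffled[cut:]

holdout = [f"prompt-{i}" for i in range(1000)]
public_set, private_set = split_holdout(holdout)
print(len(public_set), len(private_set))  # 300 700
```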
MLOps Coffee Sessions #179 with Shahul Es, All About Evaluating LLM Applications.
// Abstract
Shahul Es is renowned for his expertise in the evaluation space and is the creator of the Ragas Project. In this episode, Shahul dives deep into the world of evaluation for open-source models, sharing insights on debugging, troubleshooting, and the challenges that come with benchmarks. From the importance of custom data distributions to the role of fine-tuning in enhancing model performance, this episode is packed with valuable information for anyone interested in language models and AI.
// Bio
Shahul is a data science professional with 6+ years of expertise, having worked in data domains ranging from structured data and NLP to audio processing. He is also a Kaggle Grandmaster and a code owner/ML engineer of the Open-Assistant initiative, which released some of the best open-source alternatives to ChatGPT.
// MLOps Jobs board
https://mlops.pallet.xyz/jobs
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
All about evaluating Large language models blog: https://explodinggradients.com/all-about-evaluating-large-language-models
Ragas: https://github.com/explodinggradients/ragas
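For a flavor of what evaluating a RAG pipeline with Ragas looks like, here is a minimal sketch based on the column names and metric imports of earlier Ragas releases (the API may have changed since; the repo above is authoritative):

```python
import os
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

os.environ["OPENAI_API_KEY"] = "sk-..."  # Ragas metrics are scored by an LLM judge

# One row of a RAG trace: the user question, the retrieved contexts,
# and the answer the pipeline generated from them.
data = {
    "question": ["What does Ragas measure?"],
    "contexts": [[
        "Ragas scores RAG pipelines on metrics such as faithfulness "
        "and answer relevancy."
    ]],
    "answer": [
        "Ragas measures RAG quality, e.g. whether the answer is "
        "faithful to the retrieved context."
    ],
}
dataset = Dataset.from_dict(data)

result = evaluate(dataset, metrics=[faithfulness, answer_relevancy])
print(result)  # per-metric scores for the pipeline
```

Scoring answers with an LLM judge rather than string overlap is the design choice that connects to the episode's discussion of using LLMs to evaluate other LLMs.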
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Shahul on LinkedIn: https://www.linkedin.com/in/shahules/
Timestamps:
[00:00] Shahul's preferred coffee
[00:20] Takeaways
[01:46] Please like, share, and subscribe to our MLOps channels!
[02:07] Shahul's definition of Evaluation
[03:27] Evaluation metrics and Benchmarks
[05:46] Gamed leaderboards
[10:13] Best open-source models for summarizing long text
[11:12] Benchmarks
[14:20] Recommended evaluation process
[17:43] Using LLMs to evaluate other LLMs
[20:40] Debugging failed evaluation models
[24:25] Prompt injection
[27:32] Alignment
[32:45] Open Assistant
[35:51] Garbage in, garbage out
[37:00] Ragas
[42:52] Valuable use cases besides OpenAI
[45:11] Fine-tuning LLMs
[49:07] Connect with Shahul if you need help with Ragas: @Shahules786 on Twitter
[49:58] Wrap up