Security and Evaluation Metrics of Language Models
This chapter explores the security aspects of language models and how standard evaluation metrics can overlook prompt injection and other model vulnerabilities.
MLOps Coffee Sessions #179 with Shahul Es: All About Evaluating LLM Applications.

// Abstract
Shahul Es is renowned for his expertise in the evaluation space and is the creator of the Ragas project. In this episode, Shahul dives deep into the world of evaluation for open-source models, sharing insights on debugging, troubleshooting, and the challenges posed by benchmarks. From the importance of custom data distributions to the role of fine-tuning in enhancing model performance, this episode is packed with valuable information for anyone interested in language models and AI.

// Bio
Shahul is a data science professional with 6+ years of expertise across data domains ranging from structured data and NLP to audio processing. He is also a Kaggle Grandmaster and a code owner/ML of the Open-Assistant initiative, which released some of the best open-source alternatives to ChatGPT.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
All About Evaluating Large Language Models blog: https://explodinggradients.com/all-about-evaluating-large-language-models
Ragas: https://github.com/explodinggradients/ragas

--------------- ✌️ Connect With Us ✌️ ---------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Shahul on LinkedIn: https://www.linkedin.com/in/shahules/

Timestamps:
[00:00] Shahul's preferred coffee
[00:20] Takeaways
[01:46] Please like, share, and subscribe to our MLOps channels!
[02:07] Shahul's definition of evaluation
[03:27] Evaluation metrics and benchmarks
[05:46] Gamed leaderboards
[10:13] Best open-source models at summarizing long text
[11:12] Benchmarks
[14:20] Recommended evaluation process
[17:43] LLMs evaluating other LLMs
[20:40] Debugging failed evaluation models
[24:25] Prompt injection
[27:32] Alignment
[32:45] Open Assistant
[35:51] Garbage in, garbage out
[37:00] Ragas
[42:52] Valuable use cases besides OpenAI
[45:11] Fine-tuning LLMs
[49:07] Connect with Shahul if you need help with Ragas: @Shahules786 on Twitter
[49:58] Wrap up
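The show notes link the Ragas repository but include no code. Purely as an illustration of the kind of reference-free RAG evaluation discussed in the episode, here is a minimal sketch of what a Ragas run might look like. The imports, metric names, and dataset column names are assumptions based on the public Ragas documentation, not on anything said in the episode, and may differ between library versions.

```python
# Hedged sketch of a Ragas evaluation run, NOT code from the episode.
# Assumes a Ragas release where `evaluate`, `faithfulness`, and `answer_relevancy`
# are importable as below, and that an LLM judge (e.g. an OpenAI API key in the
# environment) is configured for the metrics.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# Toy evaluation set: each row holds a question, the pipeline's answer, and the
# retrieved contexts the answer was generated from (column names follow the
# Ragas docs and may vary between versions).
eval_set = Dataset.from_dict({
    "question": ["What is Ragas used for?"],
    "answer": ["Ragas is a library for evaluating retrieval-augmented generation pipelines."],
    "contexts": [[
        "Ragas provides metrics such as faithfulness and answer relevancy "
        "for scoring RAG pipelines without hand-labelled references."
    ]],
})

# evaluate() scores every row with each metric using an LLM judge and returns a
# dict-like result mapping metric names to aggregate scores.
scores = evaluate(eval_set, metrics=[faithfulness, answer_relevancy])
print(scores)
```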