
MLOps for GenAI Applications // Harcharan Kabbay // #256
MLOps.community
MLOps for GenAI Applications // MLOps Podcast #256 with Harcharan Kabbay, Lead Machine Learning Engineer at World Wide Technology.
// Abstract
The discussion begins with a brief overview of the Retrieval-Augmented Generation (RAG) framework, highlighting its significance in enhancing AI capabilities by combining retrieval mechanisms with generative models. The conversation then turns to MLOps, focusing on best practices for embedding the RAG framework into a CI/CD pipeline, including robust monitoring, effective version control, and automated deployment processes that preserve the agility and efficiency of AI applications. A significant portion of the conversation is dedicated to automation in platform provisioning, emphasizing tools like Terraform. The discussion extends to application design, covering essentials such as key vaults, configuration management, and strategies for seamless promotion across environments (development, testing, and production), as well as ways to strengthen an application's security posture through network firewalls, key rotation, and other measures. Harcharan and Demetrios also explore how Kubernetes and related tools support good application design, highlighting principles such as proper observability and eliminating single points of failure. Finally, Harcharan shares strategies for reducing development time by creating reusable GitHub repository templates per application type, along with pull-request templates, minimizing human error and streamlining the development process.
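To make the platform-provisioning point concrete, here is a minimal, illustrative Terraform sketch (not taken from the episode) of how a resource group and key vault might be provisioned on Azure with the `azurerm` provider; all resource names such as `rg-rag-app-dev` and `kv-rag-app-dev` are hypothetical placeholders.

```hcl
# Hypothetical sketch: provision a resource group and key vault for a
# RAG application's dev environment. Names are placeholders.
provider "azurerm" {
  features {}
}

# Identity of the principal running Terraform (used for tenant_id below).
data "azurerm_client_config" "current" {}

resource "azurerm_resource_group" "rag_app" {
  name     = "rg-rag-app-dev"
  location = "eastus"
}

resource "azurerm_key_vault" "rag_app" {
  name                = "kv-rag-app-dev"
  resource_group_name = azurerm_resource_group.rag_app.name
  location            = azurerm_resource_group.rag_app.location
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"
}
```

Promoting the same configuration across dev, test, and production then becomes a matter of swapping variable values (names, locations, SKUs) rather than hand-editing infrastructure, which is the repeatability benefit the episode emphasizes.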
// Bio
Harcharan is an AI and machine learning expert with a robust background in Kubernetes, DevOps, and automation. He specializes in MLOps, facilitating the adoption of industry best practices and platform provisioning automation. With extensive experience in developing and optimizing ML and data engineering pipelines, Harcharan excels at integrating RAG-based applications into production environments. His expertise in building scalable, automated AI systems has empowered his organization to enhance decision-making and problem-solving capabilities through advanced machine-learning techniques.
// MLOps Jobs board
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
Harcharan's Medium - https://medium.com/@harcharan-kabbay
Data Engineering for AI/ML Conference: https://home.mlops.community/home/events/dataengforai
--------------- ✌️Connect With Us ✌️ -------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Harcharan on LinkedIn: https://www.linkedin.com/in/harcharankabbay/
Timestamps:
[00:00] Harcharan's preferred coffee
[00:21] Takeaways
[01:03] Against local LLMs
[02:11] Creating bad habits
[02:42] Operationalizing RAG from a CI/CD perspective
[09:39] Kubernetes vs LLM Deployment
[12:12] Tool preferences in ML
[14:39] DevOps perspective of deployment
[17:44] Terraform Licensing Controversy
[22:47] PR Review Template Guidance
[27:32] People process tech order
[29:22] Register for the Data Engineering for AI/ML Conference now!
[30:00] ML monitoring strategies explained
[39:39] Serverless vs Overprovisioning
[44:43] Model SLAs and Monitoring
[51:04] LLM to App transition
[52:42] Ensuring Robust Architecture
[58:53] Chaos engineering in ML
[1:04:43] Wrap up