MLOps.community

Demetrios
Oct 6, 2023 • 51min

All About Evaluating LLM Applications // Shahul Es // #179

Shahul Es, creator of the Ragas Project and evaluation expert, discusses open source model evaluation, including debugging, troubleshooting, and benchmark challenges. They highlight the importance of custom data distributions and fine-tuning for better model performance. They also explore the difficulties of evaluating LLM applications and the need for reliable leaderboards. Additionally, they discuss the security aspects of language models and the significance of data preparation and filtering. Lastly, they contrast fine-tuning with retrieval augmented generation and provide resources for evaluating LLM applications.
Oct 3, 2023 • 46min

Building an ML Platform: Insights, Community, and Advocacy // Stephen Batifol // #178

Stephen Batifol, data scientist at Wolt, shares insights on building an ML platform, developer relations, and creating a thriving internal community. They discuss the challenges of onboarding data scientists, the importance of documentation, simplifying the developer experience, and expanding services. They also touch on MLflow, Qflow, observability, training models across multiple countries, building trust through feedback, and attracting talent through talks and content sharing.
Sep 18, 2023 • 52min

Collaboration and Strategy // Vin Vashishta // #176

Vin Vashishta, an expert in data science and AI, shares insights on collaboration, strategy, and maximizing data potential. They discuss the importance of technical strategists, deep understanding of data by product managers, and the opportunities in the generative AI era. They also explore mindset shifts to become multipliers, monetization of data and AI products, and the significance of leadership and strategy.
Sep 15, 2023 • 31min

UX of an LLM User Panel // LLMs in Production Conference Part II

"Ux of a LLM User Panel" features Misty Free, Dina Yerlan, and Artem Harutyunyan discussing UX challenges and design strategies when working with LLMs. They talk about balancing user experience, implementing latency strategies, and using small models for specialized tasks.
Sep 12, 2023 • 52min

From Virtualization to AI Integration // Lamia Youseff // #175

Lamia Youseff, an AI expert with extensive experience in academia and large tech companies, discusses the challenges faced by companies in integrating AI effectively. She explores the similarities between the early days of cloud computing and the current AI movement, emphasizing the need for a unifying layer in ML workloads. The concept of jazz computing is introduced as a means to connect investors, Fortune 500 companies, SMBs, and startups in the AI field. Collaboration and the importance of stakeholders working together to advance AI are emphasized.
Sep 8, 2023 • 34min

LLM on K8s Panel // LLMs in Production Conference Part II

In this podcast, Manjot Pahwa, Rahul Parundekar, and Patrick Barker discuss the integration of Kubernetes and large language models (LLMs), the challenges Kubernetes poses for data scientists, and the considerations for hosting LLM applications in production. They also explore abstractions over LLMs on Kubernetes, cost considerations, and the pros and cons of using Kubernetes for LLM training versus inference. Additionally, they touch on using Kubernetes for real-time online inference and the availability of abstractions like Metaflow.
Sep 5, 2023 • 1h 5min

Harnessing MLOps in Finance // Michelle Marie Conway // MLOps Podcast #174

Michelle Marie Conway, a tech industry professional, shares insights on continuous learning, gender diversity in STEM, and the potential of AI tools in MLOps. Topics include staying up to date with documentation, understanding code logic, challenges and benefits of AI tools, and the importance of communication with stakeholders in the banking sector. The importance of diversity in the tech industry and efforts to create inclusive environments are also discussed.
Sep 1, 2023 • 36min

MLOps vs. LLMOps Panel // LLMs in Production Conference Part II

In this podcast, the MLOps vs. LLMOps Panel discusses the high-level differences between MLOps and LLMOps, the impact of MLOps on companies, the challenges of open source tools and data safety in financial firms, the cost and rationalization of MLOps, options for large enterprises in ML model development, and the use of foundation models and vector databases.
Aug 29, 2023 • 1h 2min

Building Cody, an Open Source AI Coding Assistant // Beyang Liu // MLOps Podcast #173

Beyang Liu, developer of Cody, an open-source AI coding assistant, discusses the challenges and process of incorporating AI into existing products, navigating complex codebases, and the technology used in building Cody. The conversation also touches on the complexity of fine-tuning AI models and supporting multiple language models in Cody.
Aug 25, 2023 • 32min

Evaluation Panel // Large Language Models in Production Conference Part II

Language model interpretability experts and AI researchers discuss the challenges of evaluating large language models, the impact of ChatGPT on the industry, evaluating model performance and dataset quality, the use of large language models in machine learning, and the tool sets, guardrails, and challenges involved in working with language models.
