
The Confident Commit
ML for software engineers ft. Gideon Mendels of Comet ML
Nov 17, 2023
Gideon Mendels, Co-founder and CEO of Comet ML, discusses the intersection of machine learning and software engineering. Topics include model performance evaluation, monitoring ML models in production, basics of machine learning for software engineers, and building an effective machine learning team.
30:07
Podcast summary created with Snipd AI
Quick takeaways
- Model drift is often used as a proxy for model performance in production; definitive evaluation still requires ground-truth data fed back from the product, or carefully chosen proxy metrics against which predictions can be tested.
- Software engineers working with machine learning teams should understand model evaluation, in particular the distinction between offline and online metrics, to help prevent overfitting and to hold meaningful conversations with data scientists (see the sketch after this list).
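
To ground the offline side of that distinction, here is a minimal sketch (not from the episode) of evaluating a model against held-out ground truth with standard metrics such as accuracy and recall; the dataset, model, and split are illustrative placeholders.

```python
# Minimal sketch of "offline" evaluation: score a model against held-out
# ground-truth labels before it ever reaches production.
# The dataset, model, and split below are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
preds = model.predict(X_test)

# Offline metrics: these are only computable because y_test (ground truth)
# is available, which is exactly what production usually lacks.
print("accuracy:", accuracy_score(y_test, preds))
print("recall:  ", recall_score(y_test, preds))
```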
Deep dives
The role of Comet ML in managing the development of machine learning models
Comet ML, co-founded by Gideon Mendels, helps machine learning teams build and manage models. The platform supports a research-driven workflow: tracking datasets, running experiments, and monitoring model performance in production. Offline and online metrics play distinct roles in that evaluation: offline metrics such as accuracy and recall are measured during training against held-out ground truth, while online metrics such as model drift take over in production, where ground truth is unavailable or delayed. Comet ML aims to bridge the gap between software engineering and data science by offering a single platform for managing machine learning projects.
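
Because ground truth is usually missing or delayed in production, drift in the model's inputs or outputs serves as the online proxy signal described above. Below is a minimal sketch of one common approach, a two-sample Kolmogorov–Smirnov test comparing a training-time reference window against a live production window; this is a generic illustration, not Comet's monitoring implementation, and the data and alert threshold are made up.

```python
# Minimal sketch of an "online" proxy signal: detect drift by comparing the
# distribution of a feature (or of model scores) at training time vs. in
# production, since ground-truth labels are typically unavailable or delayed.
# Generic two-sample KS test, not Comet's monitoring implementation;
# the data and alert threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_scores = rng.normal(loc=0.0, scale=1.0, size=5_000)    # reference window
production_scores = rng.normal(loc=0.4, scale=1.0, size=5_000)  # live window (shifted)

statistic, p_value = ks_2samp(training_scores, production_scores)

ALERT_THRESHOLD = 0.01  # illustrative significance cutoff
if p_value < ALERT_THRESHOLD:
    print(f"Drift suspected (KS={statistic:.3f}, p={p_value:.2e}) - "
          "offline metrics may no longer reflect production behavior.")
else:
    print(f"No significant drift detected (KS={statistic:.3f}, p={p_value:.2e}).")
```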