Machine Learning
Fraud is a huge use case for machine learning. "Scammers approach problems the same way we do. They are using a scientific method of saying, 'I have 30 hypotheses.' If your model's not monitoring this and not checking that, then it's going to become very fragile and just break," he says.
MLOps community meetup #58! Last Wednesday we talked to Ben Wilson, Practice Lead Resident Solutions Architect at Databricks.
Model Monitoring Deep Dive with the author of Machine Learning Engineering in Action. It was a pleasure getting to talk to Ben about the difficulties of monitoring in machine learning. His expertise clearly comes from experience, and as he said a few times in the meetup, "I learned the hard way over 10 years as a data scientist so you don't have to!"
Ben was also kind enough to give us a 35% off promo code for his book! Use the link: http://mng.bz/n2P5
//Abstract
A great deal of time is spent building the most effectively tuned model, production-hardened code, and an elegant implementation for a business problem. Shipping our precious and clever gems to production is not the end of the solution lifecycle, though, as many an abandoned project can attest. In this talk, we will discuss how to think about model attribution and monitoring of results, and how (and when) to report those results to the business to ensure a long-lived, healthy solution that actually solves the problem you set out to solve.
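To make the monitoring-and-reporting idea above concrete, here is a minimal Python sketch (our illustration, not code from the talk): it tracks a weekly error metric against its historical baseline and only escalates to the business when the latest window is meaningfully worse. The metric, data, and tolerance threshold are all placeholder choices.

# Minimal sketch (illustrative only): track a weekly error metric against its
# historical baseline and escalate only when the latest window is clearly worse.
import numpy as np

def weekly_mae(predictions: np.ndarray, actuals: np.ndarray) -> float:
    # Mean absolute error for one scoring window, once ground truth arrives
    return float(np.mean(np.abs(predictions - actuals)))

def should_escalate(history: list, latest: float, tolerance: float = 1.25) -> bool:
    # Compare against the historical median rather than alerting on every wiggle
    return latest > float(np.median(history)) * tolerance

history = [4.1, 3.9, 4.3, 4.0]                      # past weekly MAE values (made up)
latest = weekly_mae(np.array([10.0, 12.0, 9.0]),    # this week's predictions
                    np.array([16.0, 20.0, 3.0]))    # this week's actuals
if should_escalate(history, latest):
    print(f"Report to the business: weekly MAE {latest:.2f} vs baseline {np.median(history):.2f}")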
//Bio
Ben Wilson has worked as a professional data scientist for more than ten years. He currently works as a resident solutions architect at Databricks, where he focuses on machine learning production architecture with companies ranging from five-person startups to the global Fortune 100. Ben is the creator and lead developer of the Databricks Labs AutoML project, a Scala- and Python-based toolkit that simplifies machine learning feature engineering, model tuning, and pipeline-enabled modelling. He's the author of Machine Learning Engineering in Action, a primer on building, maintaining, and extending production ML projects.
//Takeaways
Understanding why attribution and performance monitoring are critical for long-term project success
Borrowing hypothesis testing, stratification (to minimize latent confounding variables), and statistical significance estimation from other fields can help you explain the value of your project to the business (a small worked sketch follows this list)
Unlike in street racing, drifting is not cool in ML, but it will happen. Being prepared to know when to intervene will help to keep your project running (a basic drift-check sketch also follows below).
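As a hedged illustration of the second takeaway (not code from the talk), the sketch below compares a model-served group against a holdout within each customer segment, so a confounder such as segment mix doesn't explain the difference, and checks significance with Welch's t-test from SciPy. The segment names, revenue numbers, and alpha threshold are placeholders.

# Stratified holdout comparison with a per-stratum significance check (illustrative)
import numpy as np
from scipy import stats

def stratified_lift(data: dict, alpha: float = 0.05) -> None:
    # data maps stratum -> {"model": metric samples, "holdout": metric samples}
    for stratum, groups in data.items():
        model, holdout = groups["model"], groups["holdout"]
        lift = model.mean() - holdout.mean()
        # Welch's t-test: no equal-variance assumption between the two groups
        _, p_value = stats.ttest_ind(model, holdout, equal_var=False)
        verdict = "significant" if p_value < alpha else "inconclusive"
        print(f"{stratum}: lift={lift:+.2f}, p={p_value:.3f} ({verdict})")

rng = np.random.default_rng(42)
data = {
    "enterprise": {"model": rng.normal(105, 10, 500), "holdout": rng.normal(100, 10, 500)},
    "self_serve": {"model": rng.normal(52, 8, 500), "holdout": rng.normal(51, 8, 500)},
}
stratified_lift(data)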
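And for the drift takeaway, a similarly hedged sketch of a basic drift check: compare a recent window of one feature against its training-time reference with a two-sample Kolmogorov-Smirnov test and intervene when the distributions no longer match. Window sizes, the injected shift, and the p-value threshold are placeholders, not recommended values from the episode.

# Basic feature-drift check with a two-sample KS test (illustrative only)
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(reference: np.ndarray, recent: np.ndarray, p_threshold: float = 0.01) -> bool:
    # Small p-value means the recent distribution has shifted away from the reference
    _, p_value = ks_2samp(reference, recent)
    return p_value < p_threshold

rng = np.random.default_rng(7)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # snapshot at training time
recent = rng.normal(loc=0.6, scale=1.0, size=1_000)     # last week's inference traffic
print("intervene" if feature_drifted(reference, recent) else "keep watching")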
----------- Connect With Us ✌️-------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Ben on LinkedIn: www.linkedin.com/in/benjamin-wilson-arch/
Timestamps:
[00:00] Introduction to Ben Wilson
[00:11] Ben's background in tech
[03:40] Human aspect of Machine Learning in MLOps
[05:51] MLOps is an organizational problem
[09:27] Fragile Models
[12:36] Fraud Cases
[15:21] Data Monitoring
[18:37] Importance of knowing what to monitor for
[22:00] Monitoring for outliers
[24:16] Staying out of Alert Hell
[29:40] Ground Truth
[31:25] Model vs. data drift when ground truth is unavailable
[34:25] Benefits of monitoring system- or business-level metrics
[38:20] Experiment in the beginning, not at the end
[40:30] Adaptive windowing
[42:22] Bridge the gap
[46:42] What scarred you really badly?