Practical AI

Testing ML systems

Jan 27, 2020
Tania Allard, a Developer Advocate at Microsoft and a Google Machine Learning GDE, dives into the complexities of testing machine learning systems. She shares a simple scoring formula for gauging system robustness and highlights the continuous nature of model and infrastructure updates. The conversation touches on the collaboration needed between data scientists and engineers, the challenges of bias detection, and effective use of Jupyter Notebooks. Tania emphasizes the critical role of both manual and automated testing in ensuring quality over time.
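
As a rough illustration of the kind of scoring formula mentioned above, here is a minimal Python sketch. It assumes the rubric works along the lines of Google's ML Test Score convention (half a point for a test performed manually, a full point for an automated one, with the overall score capped by the weakest section); the sections and test names below are purely illustrative, not taken from the episode.

```python
from enum import Enum


class TestStatus(Enum):
    NOT_DONE = 0.0   # the test is not performed at all
    MANUAL = 0.5     # the test is run by hand and a person checks the result
    AUTOMATED = 1.0  # the test runs automatically and repeatedly (e.g. in CI)


def section_score(tests: dict) -> float:
    """Points earned within one section (data, model, infrastructure, monitoring)."""
    return sum(status.value for status in tests.values())


def ml_test_score(sections: dict) -> float:
    """Overall score is the minimum section score: the weakest area caps the system."""
    return min(section_score(tests) for tests in sections.values())


# Illustrative, made-up test inventory.
sections = {
    "data": {"schema_is_validated": TestStatus.AUTOMATED,
             "features_add_value": TestStatus.MANUAL},
    "model": {"compared_against_baseline": TestStatus.MANUAL,
              "hyperparameters_tuned": TestStatus.NOT_DONE},
    "infrastructure": {"training_is_reproducible": TestStatus.AUTOMATED,
                       "rollback_tested": TestStatus.MANUAL},
    "monitoring": {"training_serving_skew_checked": TestStatus.MANUAL,
                   "staleness_alerts": TestStatus.NOT_DONE},
}
print(ml_test_score(sections))  # 0.5 -- the weakest sections (model, monitoring) cap the score
```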
ADVICE

Holistic ML System Testing

  • Test your entire machine learning system holistically.
  • Consider data, features, infrastructure, and costs, not just model performance (a sketch of such checks follows below).
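
A minimal sketch of what testing "beyond model performance" can look like in practice, written as hypothetical pytest checks over the data schema, feature ranges, and a training cost budget; the column names, bounds, and budget are assumptions made for illustration.

```python
import pandas as pd
import pytest

EXPECTED_COLUMNS = {"user_id", "age", "country", "clicked"}  # hypothetical training schema


@pytest.fixture
def features() -> pd.DataFrame:
    # Stand-in for a real feature-extraction step.
    return pd.DataFrame({
        "user_id": [1, 2],
        "age": [34, 52],
        "country": ["MX", "GB"],
        "clicked": [0, 1],
    })


def test_data_schema(features):
    """The feature table has exactly the columns the model was trained on."""
    assert set(features.columns) == EXPECTED_COLUMNS


def test_feature_ranges(features):
    """Feature values stay inside plausible bounds (catches silent pipeline bugs)."""
    assert features["age"].between(0, 120).all()
    assert features["clicked"].isin([0, 1]).all()


def test_training_cost_budget():
    """Infrastructure and cost concern: the last training run stayed within budget."""
    last_run_cost_usd = 42.0  # in practice this would come from your job tracker
    assert last_run_cost_usd <= 100.0
```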
INSIGHT

Testing for Explainability and Bias Detection

  • Testing ML systems improves explainability and bias detection.
  • This transparency is crucial for responsible AI; one way to phrase a bias check as a test is sketched below.
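
One concrete way to turn bias detection into an automated check is to compare a model's accuracy across groups and fail when the gap exceeds a tolerance. This is a hypothetical sketch, not a method described in the episode; the group column, data, and 10-percentage-point threshold are illustrative assumptions.

```python
import pandas as pd


def accuracy_gap(df: pd.DataFrame, group_col: str, label_col: str, pred_col: str) -> float:
    """Largest difference in accuracy between any two groups in `group_col`."""
    correct = df[label_col] == df[pred_col]
    per_group = correct.groupby(df[group_col]).mean()
    return float(per_group.max() - per_group.min())


def test_accuracy_parity():
    # Tiny made-up evaluation set; a real test would load held-out predictions.
    df = pd.DataFrame({
        "group": ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"],
        "label": [1, 0, 1, 0, 1, 1, 0, 1, 0, 1],
        "pred":  [1, 0, 1, 0, 0, 1, 0, 1, 1, 1],
    })
    # Fail the build if accuracy differs between groups by more than 10 percentage points.
    assert accuracy_gap(df, "group", "label", "pred") <= 0.10
```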
ADVICE

Collaboration for Seamless Transition

  • Data scientists should collaborate closely with ML engineers or software engineers.
  • This fosters understanding of requirements and smooths the transition from R&D to production.