Software Engineering Radio - the podcast for professional software developers

SE Radio 677: Jacob Visovatti and Conner Goodrum on Testing ML Models for Enterprise Products

Jul 15, 2025
Jacob Visovatti, Senior Engineering Manager at Deepgram with expertise in voice technology, and Conner Goodrum, Senior Data Scientist at Deepgram focused on testing ML models, dive into the critical role of testing machine learning models for enterprise products. They discuss unique challenges in handling unstructured data and the need for interdisciplinary collaboration. The conversation highlights iterative feedback loops, the significance of production-like testing environments, synthetic data generation, and the intricacies of deploying responsible AI, especially with sensitive enterprise data.
INSIGHT

Challenges of Testing ML Models

  • Unlike traditional deterministic software, ML models must handle unstructured data and serve real-time outputs at scale.
  • This scale and variability expose unique quality challenges that demand different testing approaches.
INSIGHT

Adapting the Testing Pyramid for AI

  • The traditional software testing pyramid still applies to AI systems, but it gains new layers.
  • Lower-level tests cover neural net operations, while higher-level tests address complex input flows and production integration.
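The lower-level tests described above can be sketched as ordinary unit tests over a neural-net operation. The example below is an illustrative Python/NumPy sketch, not Deepgram's actual test suite: it checks that a softmax implementation satisfies basic invariants (valid probabilities, numerical stability under constant shifts).

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def test_softmax_invariants():
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(4, 10))
    probs = softmax(logits)
    # Outputs are valid probabilities: non-negative, each row sums to 1.
    assert np.all(probs >= 0)
    assert np.allclose(probs.sum(axis=-1), 1.0)
    # Shift invariance guards against overflow regressions on large logits.
    assert np.allclose(softmax(logits + 1000.0), probs)

test_softmax_invariants()
```

Higher layers of the pyramid would then exercise full input flows and production-like integration, as the episode discusses.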
ADVICE

Collaborative Testing Responsibility

  • Testing ownership requires clear problem identification and system observability.
  • Engage data scientists, ML engineers, and product teams together for interdisciplinary testing and improvements.