DataFramed

#98 Interpretable Machine Learning

Aug 1, 2022
Serg Masis, a Climate & Agronomic Data Scientist at Syngenta and author of "Interpretable Machine Learning with Python," dives deep into the challenges of machine learning interpretability. He discusses the ethical ramifications of data bias, sharing technical and non-technical solutions to address these issues. Serg highlights the real-world consequences of misapplied AI, such as a home valuation case study. He also sheds light on SHAP values and their role in understanding model predictions, advocating for fairness and transparency in AI.
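The SHAP values mentioned above are based on Shapley values from game theory: each feature's contribution to a single prediction, averaged over all orderings in which features could be "revealed." A minimal sketch of the idea, using a hypothetical two-feature pricing model (the weights, instance, and background values are illustrative, and real SHAP libraries use far more efficient estimators):

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, background):
    """Exact Shapley values for one prediction; 'absent' features
    are replaced by a background (e.g. dataset-average) value."""
    n = len(x)

    def value(subset):
        # Features in `subset` keep their real values; others use background.
        z = [x[i] if i in subset else background[i] for i in range(n)]
        return predict(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size |S|.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical linear model: price = 50*rooms + 2*area.
predict = lambda z: 50 * z[0] + 2 * z[1]
x = [4, 120]           # instance to explain
background = [3, 100]  # dataset averages
phi = shapley_values(predict, x, background)
print(phi)  # for a linear model, phi[i] == w[i] * (x[i] - background[i])
```

For a linear model the values reduce to `w[i] * (x[i] - background[i])`, and their sum always equals the gap between this prediction and the background prediction, which is the property that makes SHAP attributions easy to read.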
INSIGHT

Importance of Interpretable ML

  • Interpretable machine learning is crucial for understanding, trusting, and ethically applying models.
  • Without interpretation, a machine learning solution is incomplete, and automated decision-making demands transparency.
INSIGHT

Explainable vs. Interpretable AI

  • Explainable AI (XAI) and Interpretable AI are often used interchangeably, causing confusion.
  • "Explainable" might suit intrinsically interpretable models, while "interpretable" might better fit black box models.
INSIGHT

Challenges to Model Interpretability

  • Model interpretability is challenged by nonlinearity, non-monotonicity, and interaction effects.
  • These complexities exist in both data and models, making interpretation difficult, especially with big data.
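The interaction-effect point can be shown with a minimal sketch (the model and numbers are hypothetical): when features interact, the effect of one feature depends on the value of another, so no single coefficient or global summary captures it.

```python
# Hypothetical model with an interaction term: f(x1, x2) = x1 + x1*x2.
# The effect of x1 on the output depends on x2, so there is no single
# "coefficient for x1" that summarizes the model globally.
f = lambda x1, x2: x1 + x1 * x2

# Marginal effect of raising x1 from 0 to 1 at two settings of x2:
effect_low = f(1, 0) - f(0, 0)    # effect of x1 when x2 = 0
effect_high = f(1, 5) - f(0, 5)   # effect of x1 when x2 = 5
print(effect_low, effect_high)
```

The same change in `x1` moves the output by different amounts depending on `x2`, which is why interpretation methods that assume independent, additive effects can mislead on such models.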