The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

AI for High-Stakes Decision Making with Hima Lakkaraju - #387

Jun 29, 2020
Hima Lakkaraju, an Assistant Professor at Harvard University, specializes in fair and interpretable machine learning. In this discussion, she dives into the pitfalls of popular explainability techniques like LIME and SHAP, exposing their vulnerabilities to adversarial attacks. She shares her journey from India to academia, emphasizing the need for transparency in AI, especially in high-stakes areas like healthcare and criminal justice. By examining local and global explanation methods, she reveals critical insights into improving AI fairness and accountability.
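To make the local-explanation idea concrete: LIME's core recipe is to perturb an input, query the black-box model on the perturbations, and fit a simple weighted linear model whose coefficients serve as the local explanation. The sketch below implements that recipe from scratch on synthetic data; the model, data, and kernel width are illustrative stand-ins, not anything from the episode.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Illustrative stand-in for a "black box": any fitted model with predict_proba.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def lime_style_explanation(x, model, n_samples=1000, kernel_width=0.75):
    """Fit a weighted linear surrogate around x (the core LIME idea)."""
    # 1. Perturb the instance by sampling around it.
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.shape[0]))
    # 2. Query the black box on the perturbations.
    preds = model.predict_proba(Z)[:, 1]
    # 3. Weight perturbed samples by their proximity to x (Gaussian kernel).
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # 4. Fit an interpretable (linear) surrogate; its coefficients
    #    are the local feature attributions.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

print(lime_style_explanation(X[0], black_box))
```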
INSIGHT

Interpretable vs. Black Box Models

  • Hima Lakkaraju argues that fully interpretable models are the ideal whenever they are feasible.
  • In practice, however, constraints such as limited data often force practitioners to explain black-box models instead.
ANECDOTE

Bail Experiment

  • Hima Lakkaraju ran an experiment testing whether law students would trust AI models for bail decisions.
  • Students distrusted a model that used race directly, but trusted explanations that hid race behind correlated features (a toy illustration of that mechanism follows below).
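The sketch below illustrates the mechanism behind that finding on synthetic data; the "proxy" feature is hypothetical, and this is not the actual experimental setup from the episode. Two simple models are trained: one uses the protected attribute directly, the other sees only a feature strongly correlated with it. A coefficient-based explanation of the second model never mentions race, even though the proxy carries nearly the same signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
race = rng.integers(0, 2, size=n)    # protected attribute (0/1)
# Hypothetical proxy: strongly correlated with race (e.g., a zip-code-like feature).
proxy = race + rng.normal(scale=0.3, size=n)
other = rng.normal(size=n)
# Outcome driven by race, which the proxy can stand in for.
y = (race + 0.2 * rng.normal(size=n) > 0.5).astype(int)

# Model A uses race directly; Model B sees only the correlated proxy.
model_a = LogisticRegression().fit(np.column_stack([race, other]), y)
model_b = LogisticRegression().fit(np.column_stack([proxy, other]), y)

# A coefficient-based "explanation" of model B never mentions race,
# even though the proxy carries nearly the same information.
print("Model A coefficients [race, other]:", model_a.coef_[0])
print("Model B coefficients [proxy, other]:", model_b.coef_[0])
```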
INSIGHT

Explainability Techniques

  • Hima Lakkaraju focuses on LIME and SHAP because of their widespread use; a minimal usage sketch of both libraries appears below.
  • Other, less widely adopted techniques try to address some of their limitations.
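For reference, both techniques have open-source implementations. A minimal sketch, assuming the `lime` and `shap` PyPI packages and a toy random-forest model (the data and feature names here are placeholders):

```python
import numpy as np
import shap                                          # pip install shap
from lime.lime_tabular import LimeTabularExplainer  # pip install lime
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] > 0).astype(int)
feature_names = ["f0", "f1", "f2", "f3"]  # placeholder names
model = RandomForestClassifier(random_state=0).fit(X, y)

# LIME: perturbation-based local linear surrogate around one instance.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, mode="classification"
)
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(lime_exp.as_list())

# SHAP: Shapley-value feature attributions for the same instance.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X[:1])
print(shap_values)
```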