The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Model Explainability Forum - #401

Aug 17, 2020
Join a stellar panel featuring Rayid Ghani of Carnegie Mellon, Solon Barocas of Cornell and Microsoft, IBM's Kush Varshney, startup CEO Alyssa Labgenova, and Harvard's Hima Lakkaraju as they tackle pressing issues in model explainability. The discussion covers stakeholder-driven approaches, counterfactual explanations, and the impact of legal frameworks on automated decision-making. The panelists also highlight vulnerabilities in AI explanations and the critical need for trust and fairness, emphasizing collaboration to improve understanding and outcomes.
AI Snips
INSIGHT

Explainability in Public Policy

  • Explainability is crucial in AI systems used for public policy, especially for ensuring fairness and equity.
  • Different use cases call for a taxonomy of explainability methods tailored to specific users and goals (a toy sketch follows this list).
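A minimal sketch of what such a stakeholder-driven taxonomy might look like as a data structure. This is not from the episode: the roles, goals, and method pairings below are illustrative assumptions, not the panel's taxonomy.

```python
# Toy taxonomy mapping (stakeholder, goal) pairs to candidate explanation
# method families. All entries are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class UseCase:
    stakeholder: str   # who consumes the explanation
    goal: str          # what they need it for

TAXONOMY = {
    UseCase("data scientist", "debug the model"):
        ["feature attribution (e.g. SHAP)", "global surrogate models"],
    UseCase("policy maker", "audit for fairness and equity"):
        ["disaggregated performance metrics", "global feature importance"],
    UseCase("affected individual", "contest or act on a decision"):
        ["counterfactual explanations"],
    UseCase("regulator", "verify legal compliance"):
        ["model documentation", "rule extraction"],
}

def recommend(stakeholder: str, goal: str) -> list[str]:
    """Look up candidate explanation methods for a given user and goal."""
    return TAXONOMY.get(UseCase(stakeholder, goal),
                        ["no tailored method; consult a domain expert"])

print(recommend("affected individual", "contest or act on a decision"))
```

The point of the structure is the lookup key: the same model may need different explanation methods depending on who is asking and why.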
INSIGHT

Hidden Assumptions in Counterfactuals

  • Counterfactual explanations, while seemingly simple, rest on hidden assumptions about feature independence and actionability (see the sketch after this list).
  • Recommending the "easiest" feature changes may not reflect real-world costs or individual circumstances.
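A minimal sketch, not from the episode, of a cost-aware counterfactual search on a toy logistic-regression "credit" model. The feature names and cost weights are hypothetical; production libraries such as DiCE or Alibi implement this far more carefully. The comments flag exactly the hidden assumptions the panel warns about.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy credit-scoring data: hypothetical features income, debt, and age.
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

feature_names = ["income", "debt", "age"]
# Actionability costs (hypothetical): age is effectively immutable, and
# raising income is assumed harder than paying down debt. An "easiest
# change" search that ignores these costs would behave very differently.
costs = np.array([3.0, 1.0, np.inf])

def counterfactual(x, step=0.05, max_iter=2000):
    """Greedily perturb one feature at a time until the prediction flips.

    Note the independence assumption baked in: each feature is moved on
    its own, as if changing debt never changes income. Real-world
    features are rarely independent in this way.
    """
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == 1:
            return x_cf
        grad = model.coef_[0]                 # direction that raises the score
        utility = np.abs(grad) / costs        # effect per unit of cost
        j = int(np.argmax(utility))           # cheapest effective change
        x_cf[j] += step * np.sign(grad[j])
    return None

x = X[model.predict(X) == 0][0]               # someone currently denied
x_cf = counterfactual(x)
if x_cf is not None:
    for name, before, after in zip(feature_names, x, x_cf):
        if not np.isclose(before, after):
            print(f"change {name}: {before:.2f} -> {after:.2f}")
```

Even this toy version shows the issue: swap the cost vector and the "recommended" change flips from paying down debt to raising income, so the explanation depends on assumptions the affected person never sees.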
INSIGHT

Trustworthy AI

  • Trust in AI systems requires competence, reliability, openness, and selflessness, mirroring human trustworthiness.
  • Explainability addresses the "openness" aspect, crucial for building strong human-machine relationships.