The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

AI’s Legal and Ethical Implications with Sandra Wachter - #521

Sep 23, 2021
Sandra Wachter, an associate professor and senior research fellow at the University of Oxford, dives deep into the intersection of law and AI. She unpacks algorithmic accountability, focusing on issues like explainability, data protection, and biases in machine learning. Wachter discusses the challenge of black box algorithms and introduces counterfactual explanations to enhance transparency. She also highlights her conditional demographic disparity test, recently adopted by Amazon, aimed at combating bias in models and improving compliance with European non-discrimination laws.
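The conditional demographic disparity (CDD) test mentioned above can be sketched in a few lines. This is a hedged illustration based on the metric as publicly documented for Amazon SageMaker Clarify, not code from the episode; the record fields (`group`, `accepted`) and the stratum key are hypothetical. Demographic disparity (DD) asks whether a group is overrepresented among rejections relative to acceptances; CDD averages DD within strata (e.g. departments) to control for a legitimate explanatory factor.

```python
# Hedged sketch of a conditional demographic disparity (CDD) check.
# Record fields and the stratum key are invented for illustration.
from collections import defaultdict


def demographic_disparity(rows, group):
    """DD = P(group | rejected) - P(group | accepted)."""
    rejected = [r for r in rows if not r["accepted"]]
    accepted = [r for r in rows if r["accepted"]]
    p_rej = (sum(r["group"] == group for r in rejected) / len(rejected)
             if rejected else 0.0)
    p_acc = (sum(r["group"] == group for r in accepted) / len(accepted)
             if accepted else 0.0)
    return p_rej - p_acc


def conditional_demographic_disparity(rows, group, stratum_key):
    """Size-weighted average of per-stratum demographic disparity."""
    strata = defaultdict(list)
    for r in rows:
        strata[r[stratum_key]].append(r)
    n = len(rows)
    return sum(len(s) / n * demographic_disparity(s, group)
               for s in strata.values())
```

A positive CDD indicates the group remains overrepresented among rejections even after conditioning on the stratifying attribute, which is the kind of evidence the test is meant to surface.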
AI Snips
INSIGHT

Explainable AI in Legal Contexts

  • Sandra Wachter highlights the need for explainable AI, especially in legal contexts.
  • She emphasizes that unexplainability often stems from trade secrets or genuine complexity.
ANECDOTE

Counterfactual Explanations

  • Wachter's team developed counterfactual explanations to address the black box problem.
  • Google adopted this approach, integrating it into TensorFlow and Google Cloud.
INSIGHT

Understanding Counterfactuals

  • Counterfactual explanations reveal the smallest changes needed to alter a decision.
  • They provide actionable insights, helping individuals understand and contest outcomes.
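The idea above can be made concrete with a toy search: find the smallest change to an applicant's features that flips a model's decision. This is a minimal sketch in the spirit of the counterfactual formulation, not the episode's or Google's implementation; the loan model, features, and thresholds are all hypothetical.

```python
# Toy counterfactual search: smallest L1 change that flips a decision.
# The scoring model and feature names are hypothetical illustrations.
import itertools


def predict_approve(income, debt):
    # Stand-in "black box": approve when a linear score clears a threshold.
    return 2.0 * income - 1.5 * debt >= 100.0


def counterfactual(income, debt, step=1.0, max_delta=50):
    """Brute-force search for the nearest approved input, measuring
    distance as the number of unit steps changed (L1)."""
    best = None
    for di, dd in itertools.product(range(max_delta + 1), repeat=2):
        cand = (income + di * step, debt - dd * step)
        if cand[1] < 0:  # debt cannot go negative
            continue
        if predict_approve(*cand):
            dist = di + dd
            if best is None or dist < best[0]:
                best = (dist, cand)
    return best


# An applicant rejected at income=40, debt=10 (score 65 < 100):
print(counterfactual(40, 10))  # → (18, (56.0, 8.0))
```

The returned counterfactual is directly actionable in the sense described above: it tells the applicant "had your income been 56 and your debt 8, you would have been approved," without exposing the model's internals.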