

The Problem with Black Boxes with Cynthia Rudin - TWIML Talk #290
Aug 14, 2019
Cynthia Rudin, a Duke University professor specializing in interpretable machine learning, dives into the contentious topic of black box models in high-stakes decisions. She argues that simpler, interpretable models are essential for accountability, especially when human lives are at stake. The conversation explores the risks and ethical dilemmas posed by opaque algorithms, alongside her research on improving model transparency. Cynthia highlights real-world applications and advocates for a shift toward clarity in predictive modeling in areas like healthcare and criminal justice.
Con Edison Project
- Cynthia Rudin transitioned from theoretical machine learning to an applied project with Con Edison.
- The project used machine learning to predict manhole explosions in NYC's power grid, drawing on utility records dating back to the 1890s.
Black Box Limitations
- Working with Con Edison, Cynthia Rudin realized the limitations of black box models in high-stakes decisions.
- This led her to focus on interpretable machine learning, which prioritizes understanding how variables combine to influence predictions.
COMPAS Typo
- The COMPAS model, used in the US justice system, demonstrates the risks of black box models.
- A typographical error on a COMPAS score sheet led to an unfair parole denial, underscoring the lack of transparency and accountability in such systems.