
The Problem with Black Boxes with Cynthia Rudin - TWIML Talk #290
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
The Limitations of Explainability in Machine Learning
This chapter explores the effectiveness and shortcomings of explainability algorithms for neural networks, particularly saliency maps and their often misleading outputs. It argues for inherently interpretable models rather than explained black boxes, to improve transparency and accountability in high-stakes decisions such as credit scoring and parole. The chapter also highlights advances in interpretable modeling, focusing on optimal decision trees and the CORELS project, which learns certifiably optimal rule lists to make predictions easier to scrutinize.
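As context for the saliency-map discussion, here is a minimal sketch of a vanilla gradient saliency map in Python, assuming PyTorch and torchvision are available; the model, input, and shapes are placeholder assumptions for illustration, not anything used in the episode.

import torch
import torchvision.models as models

# Stand-in, untrained classifier and a random dummy image (assumptions, not from the episode).
model = models.resnet18(weights=None)
model.eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the top class score to the input pixels.
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency map: largest absolute gradient across the colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)  # torch.Size([224, 224])

A map like this indicates where the input most affects the score, not why the model decided, which is the gap the episode's critique of saliency maps points to.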