

Explaining AI explainability
Jun 8, 2020
Sheldon Fernandez, CEO of DarwinAI, explores generative synthesis and its role in creating compact, explainable AI networks. He highlights how explainability helps surface societal biases and argues for transparency in machine learning models. The discussion covers deploying AI at the edge, the challenges that come with it, and innovations in computer vision and natural language processing. Sheldon also reflects on the importance of human insight in AI and on how fairness and bias considerations are shaping the technology's future.
AI Snips
Explainability for Robustness
- Deep learning infers its own rules from data, making it powerful but opaque.
- Explainability helps understand these rules, identify potential failures, and build more robust models.
The Purple Sky
- An autonomous vehicle turned left more often when the sky was purple.
- Explainability revealed that the model had learned this spurious correlation during training in the Nevada desert.
Copyright Horses
- A horse-detecting model relied on copyright symbols in images, not horse features.
- Explainability exposed this flaw, enabling developers to correct the model's focus (a minimal sketch of this kind of check follows below).
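
The episode doesn't show code, and DarwinAI's own generative-synthesis tooling works differently, but a simple gradient-saliency map illustrates the underlying idea of asking a model which pixels drive its prediction. The model choice (resnet18) and the input path "horse.jpg" below are assumptions for the sketch, not details from the episode.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained classifier and input image are hypothetical placeholders;
# the episode does not name a specific model or dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

img = Image.open("horse.jpg").convert("RGB")   # hypothetical image
x = preprocess(img).unsqueeze(0)
x.requires_grad_(True)

# Backpropagate the winning class score to the input pixels.
scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency map: per-pixel gradient magnitude. If the bright region sits
# on a watermark in the image corner rather than on the animal, the
# model has learned a shortcut, not horse features.
saliency = x.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)   # torch.Size([224, 224])

Plotting the saliency map beside the input makes a watermark shortcut visible at a glance; richer attribution methods pursue the same question with more rigor.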