
How to Practice Responsible AI
Azeem Azhar's Exponential View
Why Explainability Is a Red Herring?
"The tension is between the performance of a learning system and your ability to diagnose what might go wrong. How big a problem is it that these popular systems are not inspectable to a human? Most people who would interact with an algorithm in any meaningful way do not care, nor will this impact anything in their lives as much as the holistic output of the model will. So I think there are ways in which we can induce explainability in a model without answering the literal, narrow technical question of how do we make this model explainable?"