
Transformers On Large-Scale Graphs with Bayan Bruss - #641
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Using Contrastive Explanations to Enhance Interpretability
Mapping a representation dimension to natural language is not enough on its own to produce clear explanations. Contrastive explanations, which use both highly activated and lowly activated images, help refine our understanding of what the dimension encodes. By subtracting the descriptions of the lowly activated images from those of the highly activated images, we remove irrelevant, random dataset elements, improving the quality and interpretability of the descriptions.
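The contrast can be thought of as a set subtraction over descriptive terms. Below is a minimal Python sketch of that idea, assuming a hypothetical describe_image captioner that returns descriptive terms for an image; the function name, the choice of k, and the term-counting details are illustrative assumptions, not the exact method discussed in the episode.

from collections import Counter

def contrastive_explanation(activations, images, describe_image, k=10):
    """Sketch: contrastive description of one representation dimension.

    activations: per-image activation values for the dimension.
    images: the corresponding images.
    describe_image: hypothetical captioner returning descriptive terms.
    """
    # Rank images by how strongly they activate the dimension.
    order = sorted(range(len(images)), key=lambda i: activations[i])
    low_idx, high_idx = order[:k], order[-k:]

    # Collect descriptive terms for highly and lowly activated images.
    high_terms = Counter(t for i in high_idx for t in describe_image(images[i]))
    low_terms = Counter(t for i in low_idx for t in describe_image(images[i]))

    # "Subtract" the low-activation descriptions: terms that also describe
    # lowly activated images are generic dataset content rather than what
    # the dimension encodes, so they are dropped.
    contrastive = {t: c for t, c in high_terms.items() if t not in low_terms}
    return sorted(contrastive, key=contrastive.get, reverse=True)

The returned list is the dimension's description with the dataset's background vocabulary filtered out, which is the "subtraction" step described above.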