Using Contrastive Explanations to Enhance Interpretability
Mapping a representation dimension to natural language is not, on its own, enough to provide clear explanations. Contrastive explanations, which use both highly activated and lowly activated images, refine our understanding of the dimension: by subtracting the descriptions of lowly activated images from those of highly activated ones, we remove irrelevant and dataset-generic elements, improving the quality and interpretability of the descriptions.
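As a minimal sketch of this subtraction idea, assuming we already have natural-language descriptions for the most and least activating images of one dimension (all descriptions and the term-frequency threshold below are hypothetical), one could keep only the descriptive terms that recur for highly activated images but never appear for lowly activated ones:

```python
# Hypothetical sketch of contrastive description subtraction.
# Inputs: descriptions of highly vs. lowly activated images for one dimension.

from collections import Counter

def contrastive_terms(high_descriptions, low_descriptions, min_count=2):
    """Keep terms frequent across highly activated images but absent from
    lowly activated ones, filtering out generic dataset vocabulary."""
    high_counts = Counter(
        word for desc in high_descriptions for word in desc.lower().split()
    )
    low_vocab = {
        word for desc in low_descriptions for word in desc.lower().split()
    }
    # Subtraction step: drop any term that also describes lowly activated
    # images, since it reflects the dataset rather than the dimension.
    return {
        word for word, count in high_counts.items()
        if count >= min_count and word not in low_vocab
    }

high = [
    "a striped cat on grass",
    "a striped tiger in grass",
    "striped fur close-up",
]
low = [
    "a car on grass",
    "a house with a lawn",
]

print(contrastive_terms(high, low))  # generic terms like "grass" drop out
```

In practice the subtraction would operate on richer representations (e.g. caption embeddings) rather than raw word sets, but the contrastive principle is the same: what survives is what distinguishes the highly activated images.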