The Benefits of Contrastive Concept Extraction in Explainability
The contrastive concept extraction is novel in this space. There's a broad set of research on explainability that we've been looking at, and this goes beyond just individual dimensions. It gives you the ability not just to say these dimensions are important, but then to go backwards and say, okay, these are the subset of inputs that correspond to that dimension. Using this methodology, you can actually see what the model thought was in the picture. So you can see, for example, that the model thought this region was an airplane wing or something like that.
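The idea of going backwards from an important dimension to the subset of inputs that drive it can be sketched in miniature. This is an illustrative toy, not the speakers' actual method: the encoder weights `W`, the input `x`, and the linear map are all assumptions standing in for a real model, where gradients would play the role the weights play here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_dims = 8, 4
W = rng.normal(size=(n_dims, n_inputs))   # hypothetical encoder weights
x = rng.normal(size=n_inputs)             # one input example
h = W @ x                                 # hidden "concept" dimensions

# Step 1: find the dimension that fires most strongly on this input.
top_dim = int(np.argmax(np.abs(h)))

# Step 2: go backwards. For a linear map, each input's contribution to
# that dimension is exactly its weight times its value; in a deep model,
# gradient-based attribution would serve the same purpose.
contributions = W[top_dim] * x

# Step 3: keep the subset of inputs that explains most of the activation.
k = 3
top_inputs = np.argsort(-np.abs(contributions))[:k]
print("dimension:", top_dim, "top inputs:", top_inputs.tolist())
```

Because the toy model is linear, the per-input contributions sum exactly to the dimension's activation, which is the sanity check that the "backwards" step is faithful.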