
[14] Been Kim - Interactive and Interpretable Machine Learning Models
The Thesis Review
How to Convince Yourself That TCAV Works
TCAV is best explained by contrasting it with saliency methods, which use low-level features to explain. So instead of pixels, why not give people something that they can understand? Like, oh, here's a bird picture. Let's say the feather is important, or the beak color was important, and so on. That's TCAV. Now, going back to your original question about how do I know this works: the thing that we did to test this was to create a synthetic dataset, where each example consists of a picture and a little caption in the bottom-left corner. And I could train any model such that the model is trained, or encouraged, to look at the…
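As a rough illustration of the concept-level explanation Kim contrasts with saliency maps, here is a minimal sketch of the core TCAV computation (Kim et al., 2018): fit a linear classifier between layer activations of concept examples and random examples, take its decision-boundary normal as the concept activation vector (CAV), and count how often the class logit's directional derivative along that vector is positive. This is an illustrative sketch, not the authors' implementation; `layer_activations` and `grad_logit_wrt_layer` are hypothetical hooks into whatever model is being probed.

```python
# Minimal TCAV sketch, assuming hypothetical model hooks.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Fit a linear classifier separating concept vs. random activations;
    the CAV is the unit-norm normal of its decision boundary."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()
    return cav / np.linalg.norm(cav)

def tcav_score(class_inputs, cav, layer_activations, grad_logit_wrt_layer):
    """Fraction of class examples whose class logit increases when the
    layer activation is nudged in the concept (CAV) direction."""
    positives = 0
    for x in class_inputs:
        grad = grad_logit_wrt_layer(x)   # d(class logit) / d(layer activation)
        if np.dot(grad, cav) > 0:        # directional derivative along the CAV
            positives += 1
    return positives / len(class_inputs)
```

Under this sketch, a TCAV score near 1 for a "feather" concept and a bird class would mean the feather direction consistently pushes the bird logit up, which is exactly the kind of concept-level statement the quote contrasts with pixel-level saliency.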


