TCAV - Concept-Based Explanation
We have some internal work on how you better normalize these often-saturated gradients, but I think that Jessica's paper is probably a good prototype of a direction like this. One thing that seems important to note about TCAV is that, in some ways, it does seem to be limited to concepts you can express visually, right? So if I have a more abstract, non-visual concept, it might be a little bit harder to represent quantitatively, and it might not really admit capture via activations. But no two humans agree perfectly on what a concept means. That connects to Professor Lombrozo's work, and borrowing that idea into concept-based explanation...
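To make the discussion above concrete, here is a minimal sketch of the TCAV idea the speakers are referring to: a concept activation vector (CAV) is the normal to a linear boundary separating activations of concept examples from activations of random examples, and the TCAV score is the fraction of class inputs whose logit gradient points along that direction. Everything here is synthetic and hypothetical (the activation dimensionality, the planted concept direction, and the stand-in gradients are assumptions for illustration, not data from any real model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 16  # hypothetical activation dimensionality at some layer

# Synthetic activations: "concept" examples are shifted along a planted
# direction; "random" counterexamples are not. In real TCAV these would
# be activations from a trained network at a chosen layer.
true_dir = np.zeros(d)
true_dir[0] = 1.0
concept_acts = rng.normal(size=(100, d)) + 3.0 * true_dir
random_acts = rng.normal(size=(100, d))

# The CAV is the (unit-normalized) normal of a linear classifier
# separating concept activations from random activations.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression().fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# TCAV score: fraction of inputs whose gradient of the class logit
# (stand-in random gradients here) has a positive directional
# derivative along the CAV.
grads = rng.normal(size=(50, d)) + 0.5 * true_dir
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")
```

Note how the visual-concept limitation raised in the conversation shows up here: the whole construction hinges on being able to assemble a set of concept examples whose activations are linearly separable from random ones, which is much easier for visual concepts than for abstract ones.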