How to Interpret a Big Neural Net in a Test Set?
The problem is how to train a model for interpretability. How do you represent things in such a way that a human gets meaningful understanding, and it's reasonably efficient? And this, I think, ends up being pretty close to the hard problems of interpretability. I mean, if you write everything out in text, it seems like you end up with a lot of long treatises to deal with. If you imagine representing everything that AlphaFold knows in text, it's just going to be horribly inefficient to try and, like, verbalise all of that. The other option is, like, just trust this big black box, which seems sort of doable, but it…