The Importance of Explainability in Modeling
I think something that's been frustrating to me is that with explainability, you obviously want to understand why the model produces something. But with data, we can also understand the distribution of the things the model produces, so you still get some semblance of understanding from the output itself. And we often don't evaluate that as much as we should. In our own models we actually do, when we're trying to improve them, but publicly, almost no companies are sharing the distributions of who's getting a lot and who isn't.
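To make the idea concrete, here is a minimal sketch of what auditing an output distribution might look like, assuming a table of model predictions tagged with a group label per row. The column names (`group`, `score`, `approved`) and the tiny inline dataset are hypothetical, purely for illustration; the point is that summarizing outcomes per group gives the kind of distributional view the speaker says companies rarely share.

```python
import pandas as pd

# Hypothetical predictions from some model, one row per decision.
# The group labels and values here are made up for illustration.
df = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A"],
    "score":    [0.91, 0.40, 0.55, 0.88, 0.12, 0.73],
    "approved": [1, 0, 1, 1, 0, 1],
})

# Distribution of outcomes per group: counts, approval rate,
# and a simple summary of the underlying model scores.
summary = df.groupby("group").agg(
    n=("approved", "size"),
    approval_rate=("approved", "mean"),
    mean_score=("score", "mean"),
)
print(summary)
```

A real audit would run this over production logs rather than a toy frame, but even this shape of report, outcome rates broken out by group, is what "evaluating the distribution of what the model produces" amounts to in practice.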