I think this has been one of the problems with how explainability in AI has historically been discussed. It started off as being almost entirely for the data scientists to check their own homework and make sure they were doing things correctly. So it's really important that not only do you have those explanations, but that they're at the right level for the person they're targeted at. And there's an awful lot of testing and feedback involved in getting that level right. Whereas if somebody's trained to analyze data correctly, then they can accept more information.
To trust something, you need to understand it. And to understand something, someone often has to explain it. When it comes to AI, explainability can be a real challenge; by definition, a "black box" is unexplainable! With AI getting new levels of press and prominence thanks to the explosion of generative AI platforms, the need for explainability continues to grow. But it's just as important in more conventional situations. Dr. Janet Bastiman, the Chief Data Scientist at Napier, joined Moe and Tim to, well, explain the topic! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.