There's this widely repeated claim that if you want accuracy, you can't have explanations. It just gets quoted, and when you follow the traces, everything leads back to a single DARPA study; but there's no real evidence for it. There's actually evidence from a lot of labs that you can add interpretable layers and outputs, even in complex neural networks, with no loss of accuracy. I'm really glad you called me on that, because that's a really good clarification.
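To give a concrete sense of what "interpretable layers and outputs" can look like, here is a minimal, hypothetical sketch (not from the episode, and not Napier's approach): a concept-bottleneck-style classifier in PyTorch where an intermediate layer exposes named, human-readable concept scores alongside the usual prediction. The class name, layer sizes, and the `n_concepts` parameter are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class InterpretableHeadNet(nn.Module):
    """Illustrative sketch: a standard classifier with an added 'concept'
    layer whose activations are meant to map to named, human-readable
    signals, trained jointly with the usual prediction head."""

    def __init__(self, n_features: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Interpretable bottleneck: each unit corresponds to one named concept.
        self.concepts = nn.Linear(64, n_concepts)
        # The final prediction is a simple, inspectable function of the concepts.
        self.classifier = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        h = self.backbone(x)
        c = torch.sigmoid(self.concepts(h))   # concept scores in [0, 1]
        y = self.classifier(c)                # class logits
        return y, c                           # expose both outputs

if __name__ == "__main__":
    model = InterpretableHeadNet(n_features=20, n_concepts=5, n_classes=2)
    x = torch.randn(8, 20)
    logits, concept_scores = model(x)
    print(logits.shape, concept_scores.shape)  # torch.Size([8, 2]) torch.Size([8, 5])
```

The point of the sketch is the design choice: the final prediction is a simple linear function of named concept scores, so the path from evidence to output stays inspectable without requiring a separate, less accurate model.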
To trust something, you need to understand it. And to understand something, someone often has to explain it. When it comes to AI, explainability can be a real challenge (definitionally, a "black box" is unexplainable)! With AI getting new levels of press and prominence thanks to the explosion of generative AI platforms, the need for explainability continues to grow. But it's just as important in more conventional situations. Dr. Janet Bastiman, Chief Data Scientist at Napier, joined Moe and Tim to, well, explain the topic! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.