I actually find that some of the junior people are usually the best. They're coming to it sometimes from a non-mathematical background. And as part of that, they've had to hone their understanding of what's happening and why it's happening. So they can then talk to end users on a much more natural level. Whereas someone that's only ever had to talk and explain things in a highly technical peer group, they can struggle with that.
To trust something, you need to understand it. And to understand something, someone often has to explain it. When it comes to AI, explainability can be a real challenge (by definition, a "black box" can't be explained)! With AI reaching new levels of press and prominence thanks to the explosion of generative AI platforms, the need for explainability continues to grow. But it's just as important in more conventional situations. Dr. Janet Bastiman, Chief Data Scientist at Napier, joined Moe and Tim to, well, explain the topic! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.