I think sometimes they're like, we want to do some of the sexier work. So I don't know if our actions sometimes egg that on. You get more funding for the more exciting new stuff, and you can get businesses to invest in your department. If you've got a complex problem, then, you know, you look for the more complex techniques. But yeah, there are many situations where a simple statistical technique will give an answer faster, with smaller amounts of data and fewer resources.
To trust something, you need to understand it. And to understand something, someone often has to explain it. When it comes to AI, explainability can be a real challenge (definitionally, a "black box" is unexplainable)! With AI getting new levels of press and prominence thanks to the explosion of generative AI platforms, the need for explainability continues to grow. But it's just as important in more conventional situations. Dr. Janet Bastiman, the Chief Data Scientist at Napier, joined Moe and Tim to, well, explain the topic! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.