There's still very much a need for traditional rules-based analysis, and we'll come on to that a bit later. What we are seeing is great interest in moving beyond it, because the criminals know the rules that are out there. They know what people are trying to detect, and it's very straightforward for them to find ways around those rules. It sounds a lot like the bot detection in the digital marketing space. I'm smirking.
To trust something, you need to understand it. And to understand something, someone often has to explain it. When it comes to AI, explainability can be a real challenge (a "black box" is, by definition, unexplainable)! With AI getting new levels of press and prominence thanks to the explosion of generative AI platforms, the need for explainability continues to grow. But it's just as important in more conventional settings. Dr. Janet Bastiman, the Chief Data Scientist at Napier, joined Moe and Tim to, well, explain the topic! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.