Industry expert and Director of Data Science at Western Digital, Srinimisha Morkonda Gnanasekaran, discusses the importance of explainable AI in AI decision-making. Topics include strategies for implementation, real-world examples of misclassifications, and techniques like decision trees and SHAP for model interpretability. The podcast also explores key considerations for successful AI adoption and essential skills for success in data science and AI.
Explainable AI is crucial for trust and transparency in AI systems, especially in high-stakes domains like healthcare and finance.
Intrinsic techniques improve explainability by using simpler, transparent models; post hoc techniques reveal feature importance and help explain how individual features affect predictions.
Deep dives
The Significance of Explainable AI
Explainable AI has been a topic of interest since the field's early days, with a milestone around 2015-2016 that highlighted the importance of understanding model decisions: a widely cited example in which a model misclassified a husky as a wolf because it keyed on the snow in the background rather than the animal itself. The essence of explainable AI lies in mimicking human decision-making by providing step-by-step insight into how a model arrives at its predictions.
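To make that step-by-step idea concrete, here is a minimal sketch of an intrinsically interpretable model using scikit-learn; the iris dataset and tree depth are illustrative choices, not details from the episode.

```python
# A minimal sketch of an intrinsically interpretable model: a shallow
# decision tree whose learned rules read as step-by-step decisions.
# Dataset and depth are illustrative choices, not from the episode.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(iris.data, iris.target)

# export_text renders the tree as human-readable if/else rules, so
# every prediction can be traced through explicit feature thresholds.
print(export_text(model, feature_names=list(iris.feature_names)))
```

Each root-to-leaf path is a chain of feature thresholds, which is exactly the kind of human-auditable reasoning the husky-versus-wolf example showed was missing.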
Importance of Explainable AI
Explainable AI is essential for building trust and transparency in AI systems. When models produce incorrect outputs, understanding the reasoning behind those decisions becomes crucial. A lack of transparency undermines accountability and trust in AI applications, particularly in high-stakes decision-making domains like healthcare, finance, and autonomous vehicles.
Techniques for Enhancing AI Explainability
Companies can employ intrinsic and post hoc techniques to enhance AI explainability. Intrinsic techniques build transparency into the model itself, through simpler architectures such as decision trees and linear models and by incorporating human knowledge. Post hoc techniques, such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), explain feature importance and model behavior after the model is built. Visualization techniques like partial dependence plots (PDP) and individual conditional expectation (ICE) curves further aid in understanding how individual features affect model predictions.
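As a hedged illustration of the post hoc side, the sketch below applies SHAP to a tree ensemble and then draws a combined PDP/ICE plot with scikit-learn. The diabetes dataset, model settings, and the "bmi" feature are illustrative assumptions; the shap.TreeExplainer and PartialDependenceDisplay calls follow those libraries' public APIs.

```python
# A sketch of post hoc explainability: SHAP values for feature
# importance, plus PDP and ICE curves for a single feature.
# Dataset, model, and feature choices here are illustrative.
import matplotlib.pyplot as plt
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])  # (samples, features)

# Global summary: which features move predictions, and in which direction.
shap.summary_plot(shap_values, X.iloc[:200], show=False)
plt.show()

# PDP (average effect) overlaid with ICE (per-sample curves) for "bmi";
# kind="both" draws the two together.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi"], kind="both")
plt.show()
```

SHAP answers "which features mattered, and by how much," while PDP/ICE answer "how does the prediction change as one feature varies," complementing the intrinsic transparency of simpler models.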
Showing your work isn’t just for math class; it’s also for AI! As AI systems become increasingly complex and integrated into our daily lives, the need for transparency and understanding in AI decision-making processes has never been more critical. We are joined by industry expert and Director of Data Science at Western Digital, Srinimisha Morkonda Gnanasekaran, for a discussion of the why, the how, and the importance of explainable AI.
Panelists:
Srinimisha Morkonda Gnanasekaran, Dir. of Data Science & Advanced Analytics @ Western Digital - LinkedIn
This episode was produced by Megan Bowers, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music and Mike Cusic for our album artwork.