125 - Human-Centered XAI: Moving from Algorithms to Explainable ML UX with Microsoft Researcher Vera Liao
Sep 5, 2023
Vera Liao, Principal Researcher at Microsoft, discusses the importance of a human-centered approach to rendering model explainability within a UI. She shares insights on why example-based explanations tend to outperform feature-based ones and why traditional XAI methods may not be the solution for every explainability problem. Vera advocates for qualitative research in tandem with model work to improve outcomes and highlights the challenges of responsible AI.
Explainability should be prioritized in AI applications to help users understand the system and make informed decisions.
A human-centered approach is crucial in developing effective explanations that align with user needs and enhance their understanding.
Deep dives
Importance of Explainability in AI Applications
Explainability should be at the core of AI applications so that users understand how the system works, what it can do, and what actions they can take. HCI researchers have long studied how to help users understand the systems they use, and explainability is an important component of that work. It is crucial to consider users' mental models and provide explanations that improve their understanding and decision-making.
Clarifying the Concepts of Model Explainability and Interpretability
The terms model explainability and interpretability can be confusing because scholars sometimes use them interchangeably. Rather than debating precise definitions, it is more productive to focus on the broader goal of understanding. Ultimately, the aim is to build explanatory features that help users understand and evaluate AI systems, not to adhere to rigid definitions. Researchers should prioritize improving people's understanding and determining how different explanations can effectively communicate complex AI concepts.
The Importance of a Human-Centered Approach to Explainability
A human-centered approach to explainability in AI applications is paramount. Existing explanation algorithms often prioritize technical aspects and lack comprehensive evaluation of user interactions and needs. Researchers should engage with practitioners, understand specific application requirements, and study how end-users interact with explanations. By incorporating designers, UX researchers, and other user experience professionals from the beginning, the design process can be more holistic and informed. The goal is to choose algorithms and designs that align with users' specific needs and enhance their understanding and decision-making.
Question-Driven XAI Design Process
The question-driven XAI design process involves four key steps. First, identify user questions through interviews and user research. This step helps determine the primary needs and expectations regarding explainability. Next, analyze the questions, clustering them into categories and prioritizing them based on importance and frequency. This analysis is crucial for outlining the key focus areas and requirements. Then, map the questions to technical solutions, exploring different algorithms, techniques, and approaches that suit the identified user needs. Finally, iteratively design and evaluate the chosen solution, incorporating user feedback and refining the design over time.
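To make the third step more concrete, here is a minimal, hypothetical Python sketch of how a team might record a mapping from prioritized user questions to candidate XAI techniques. The question categories loosely echo the kinds of questions discussed in the episode (why, why not, what if, and so on); the `QUESTION_TO_METHODS` dictionary, the `candidate_methods` function, and the specific technique names are illustrative assumptions, not a prescribed implementation from Vera's work.

```python
# Illustrative sketch only: a lightweight way to record the outcome of
# "map user questions to technical solutions." All names and mappings here
# are hypothetical examples, not an official or complete taxonomy.
QUESTION_TO_METHODS = {
    "why":         ["feature attribution (e.g., SHAP, LIME)", "example-based (similar cases)"],
    "why not":     ["counterfactual / contrastive explanations"],
    "what if":     ["interactive what-if probing of inputs"],
    "how":         ["global surrogates", "rule or feature-importance summaries"],
    "performance": ["model performance and uncertainty reporting"],
    "data":        ["training data documentation"],
}

def candidate_methods(prioritized_questions):
    """Return candidate techniques for the questions surfaced in user research."""
    return {q: QUESTION_TO_METHODS.get(q, ["needs follow-up research"])
            for q in prioritized_questions}

# Example: the top questions from user interviews were "why" and "what if".
print(candidate_methods(["why", "what if"]))
```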
Today I’m joined by Vera Liao, Principal Researcher at Microsoft. Vera is a part of the FATE (Fairness, Accountability, Transparency, and Ethics of AI) group, and her research centers around the ethics, explainability, and interpretability of AI products. She is particularly focused on how designers design for explainability. Throughout our conversation, we focus on the importance of taking a human-centered approach to rendering model explainability within a UI, and why incorporating users during the design process informs the data science work and leads to better outcomes. Vera also shares some research on why example-based explanations tend to outperform feature-based explanations, and why traditional XAI methods like LIME and SHAP aren’t the solution to every explainability problem a user may have.
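As a rough illustration of the two explanation styles mentioned above, the sketch below contrasts a feature-based explanation (SHAP attribution scores) with an example-based explanation (retrieving similar training cases). The model, the random placeholder data, and the variable names are all assumptions made for this example; neither approach is presented here as the right answer for any particular user.

```python
# Minimal sketch contrasting feature-based vs. example-based explanations.
# Data, model, and variable names are placeholders for illustration only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.random((200, 4))                 # placeholder features
y_train = (X_train[:, 0] > 0.5).astype(int)    # placeholder labels
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

x = X_train[:1]  # the instance we want to explain

# Feature-based explanation: per-feature attribution scores.
explainer = shap.TreeExplainer(model)
print("Feature attributions:", explainer.shap_values(x))

# Example-based explanation: show similar training cases instead of scores.
nn = NearestNeighbors(n_neighbors=3).fit(X_train)
_, idx = nn.kneighbors(x)
print("Similar training examples:", idx[0], "labels:", y_train[idx[0]])
```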
Highlights / Skip to:
I introduce Vera, who is Principal Researcher at Microsoft and whose research mainly focuses on the ethics, explainability, and interpretability of AI (00:35)
Vera expands on her view that explainability should be at the core of ML applications (02:36)
An example of the non-human-centered approach to explainability that Vera is advocating against (05:35)
Vera shares where practitioners can start the process of responsible AI (09:32)
Why Vera advocates for doing qualitative research in tandem with model work in order to improve outcomes (13:51)
I summarize the slides I saw in Vera’s deck on Human-Centered XAI and Vera expands on my understanding (16:06)
Vera’s success criteria for explainability (19:45)
The various applications of AI explainability that Vera has seen evolve over the years (21:52)
Why Vera is a proponent of example-based explanations over feature-based ones (26:15)
Strategies Vera recommends for getting feedback from users to determine what the right explainability experience might be (32:07)
The research trends Vera would most like to see technical practitioners apply to their work (36:47)
Summary of the four-step process Vera outlines for Question-Driven XAI design (39:14)