Azeem’s Picks: How to Practice Responsible AI with Dr. Rumman Chowdhury
Nov 1, 2023
Dr. Rumman Chowdhury, a pioneer in applied algorithmic ethics, discusses assessing and diagnosing bias in 'black box' algorithms, top-down organizational change necessary for responsible AI, and the emerging field of 'Responsible Machine Learning Operations'.
Responsible AI demands top-down organizational change, new metrics, and systems of redress.
Bias in AI models must be considered in context, highlighting the challenges of explainability.
Deep dives
Importance of Responsible AI
In this podcast episode, the speaker emphasizes the importance of responsible AI implementation. They discuss how AI systems must behave ethically and fairly, and they highlight the need for responsible decision-making, accountability, and attention to power dynamics in AI development. The conversation reflects on how the narrative around responsible AI has evolved: from earlier misconceptions about AI to substantive questions about AI's role in society, regulatory considerations, and the need for public awareness. The speaker also notes growing awareness and advocacy among the general public, including individual action against AI biases and the potential negative effects of AI systems.
Examples of AI Failures
The podcast episode explores specific examples of AI failures that highlight the importance of responsible AI implementation. The speaker mentions instances where facial recognition systems misidentified individuals, algorithmic grading resulted in unfair outcomes, and biased models led to discriminatory practices. These examples demonstrate the potential harm and negative impact of AI when used without proper accountability and consideration of social biases and systemic flaws. The speaker emphasizes the need for thorough investigations, redress mechanisms, and proactive measures to address algorithmic failures and biases.
Understanding Bias and Explainability
The discussion delves into the complex issues of bias and explainability in AI systems. The speaker clarifies that bias in an AI model does not always indicate a flaw, and highlights the importance of context and intentionality when assessing it. They discuss contexts where bias can be deliberate and appropriate, such as models intentionally tailored to a specific target audience. The speaker also acknowledges the challenges of explaining black box models, particularly deep learning systems, but suggests alternative routes to transparency and accountability. Rather than fixating on technical explainability, they propose focusing on the "what" (identifying potential harms) and the "why" (understanding the causes of biases) of AI systems.
Moving Towards Responsible ML Operations
The podcast episode explores the concept of responsible machine learning operations (MLOps) as a way to promote responsible AI. The speaker discusses the need for standardized processes, investigations, and norms in the development and implementation of ML systems. They highlight the shift from reactive behavior to proactive measures, aiming to address potential harms during the development stages rather than after problems arise. The speaker emphasizes the importance of transparency, sharing findings, code, and data to facilitate learning, accountability, and public engagement. They also discuss the role of reputation risk and the potential influence on future legal and regulatory considerations.
Artificial Intelligence (AI) is on every business leader’s agenda. How do you ensure the AI systems you deploy are harmless and trustworthy? This month, Azeem picks some of his favorite conversations with leading AI safety experts to help you break through the noise.
Today’s pick is Azeem’s conversation with Dr. Rumman Chowdhury, a pioneer in the field of applied algorithmic ethics. She runs Parity Consulting, the Parity Responsible Innovation Fund, and she’s a Responsible AI Fellow at the Berkman Klein Center for Internet & Society at Harvard University.
They discuss:
How you can assess and diagnose bias in unexplainable “black box” algorithms.
Why responsible AI demands top-down organizational change, implementing new metrics, and systems of redress.
What the emerging field of "Responsible Machine Learning Operations" looks like in practice.