How AI works is often a mystery — that's a problem
Dec 22, 2023
This podcast explores the challenges of 'black box' AI systems and the concept of Explainable AI. It discusses how subjective questions and a lack of transparency affect parole decisions, as well as concerns about racial bias in health algorithms. The complexity of large language models and the mystery of their decision-making processes are also examined, along with reasons for AI opacity, such as protecting intellectual property, and the importance of understanding the implications of AI before it is widely used.
The use of black box algorithms in high-stakes applications like criminal justice raises concerns about transparency, potential biases, and wrongful accusations.
Developing explainable AI techniques and establishing robust governance are essential in addressing the black box problem and ensuring responsible and accountable use of AI.
Deep dives
Glenn's Transformation in Prison
Glenn Rodriguez, a former inmate, shares his journey of personal growth and transformation while incarcerated. After initially struggling to adapt to prison life, Glenn turned his behavior around and became a model prisoner. Despite his positive record and successful rehabilitation, his parole hearing was influenced by an algorithm called COMPAS, which predicted a high risk of reoffending, and he was denied parole.
Challenging the COMPAS Algorithm
Glenn investigated the COMPAS algorithm used in his parole assessment and discovered that the answer to a single subjective question, 'notable disciplinary issues', carried significant weight in determining his risk score. Determined to challenge the algorithm's influence on his parole decision, Glenn and his legal team hit a dead end: the proprietary, black box nature of the algorithm kept the specific weights out of reach, limiting their ability to question and contest the outcome.
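COMPAS's real inputs and weights have never been published, but a toy sketch shows why a single heavily weighted, subjective answer is so hard to contest from the outside. Every feature name and number below is invented purely for illustration:

```python
# Hypothetical sketch of a weighted risk score. COMPAS's actual inputs and
# weights are proprietary; the features and values here are made up solely
# to show how one subjective answer can dominate the output.

FEATURE_WEIGHTS = {
    "age_at_first_offense": -0.5,        # older first offense lowers risk
    "prior_convictions": 1.0,
    "program_completion": -2.0,          # rehabilitation lowers risk
    "notable_disciplinary_issues": 4.0,  # one subjective yes/no answer
}

def risk_score(answers: dict[str, float]) -> float:
    """Linear risk score: a weighted sum of questionnaire answers."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in answers.items())

# A strong record, except that one assessor answered "yes" to the
# subjective disciplinary-issues question.
record = {
    "age_at_first_offense": 1.0,
    "prior_convictions": 1.0,
    "program_completion": 1.0,
    "notable_disciplinary_issues": 1.0,
}

print(risk_score(record))                                          # 2.5 -> "high risk"
print(risk_score({**record, "notable_disciplinary_issues": 0.0}))  # -1.5 -> "low risk"
```

From the outside, a defendant sees only the final score; without access to the weights, there is no way to tell that a single yes/no answer outweighed years of good behavior.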
The Pitfalls of Black Box Algorithms
The increasing use of algorithms and AI in high-stakes applications, including criminal justice, raises concerns about their reliability and potential biases. Black box algorithms like COMPAS lack transparency, making it difficult to understand how decisions are made. Numerous cases highlight the unintended consequences of relying on these algorithms, including wrongful accusations, racial bias, and discrimination.
Explaining and Governing AI Systems
Researchers propose 'explainable AI' as a solution to the black box problem: developing techniques that reveal how AI models arrive at their decisions. Achieving explainability is complex, however, because different communities have distinct objectives for what an explanation should provide. Alongside it, robust governance and accountability are needed to ensure the responsible use of AI and to protect against unlawful discrimination and harm caused by these systems.
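One common family of post-hoc explanation techniques probes a model from the outside: perturb each input and watch how the output moves. The sketch below applies that idea to a hypothetical stand-in black box; the model and feature names are assumptions, and production tools such as LIME and SHAP are far more sophisticated:

```python
# Minimal sketch of one post-hoc explanation idea: nudge each input of an
# opaque model and measure how much the output shifts. The model and
# feature names here are hypothetical stand-ins for illustration only.

def black_box(x: dict[str, float]) -> float:
    # Pretend we cannot inspect this function's internals.
    return 4.0 * x["disciplinary_issues"] - 2.0 * x["program_completion"] + x["priors"]

def sensitivity(model, inputs: dict[str, float], delta: float = 1.0) -> dict[str, float]:
    """Nudge each feature by `delta`, holding the rest fixed, and report
    how far the model's output moves in response."""
    base = model(inputs)
    return {
        name: model({**inputs, name: inputs[name] + delta}) - base
        for name in inputs
    }

person = {"disciplinary_issues": 1.0, "program_completion": 1.0, "priors": 2.0}
print(sensitivity(black_box, person))
# {'disciplinary_issues': 4.0, 'program_completion': -2.0, 'priors': 1.0}
# The subjective disciplinary flag dominates the score.
```

Probing like this needs only query access, not the weights, which is one reason outside auditors favor it; but different communities still disagree on whether such approximations count as a real explanation, which is part of why consensus on explainability remains elusive.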
Many AIs are 'black box' in nature, meaning that part or all of the underlying structure is obscured, whether intentionally, to protect proprietary information, because of the sheer complexity of the model, or both. This is problematic when people are harmed by decisions made by AI but left without recourse to challenge them.
Many researchers in search of solutions have coalesced around a concept called Explainable AI, but this too has its issues, notably that there is no real consensus on what it is or how it should be achieved. So how do we deal with these black boxes? In this podcast, we try to find out.