

ML Model Fairness: Measuring and Mitigating Algorithmic Disparities; With Guest: Nick Schmidt
Aug 18, 2023
Nick Schmidt, the Chief Technology and Innovation Officer at SolasAI, discusses the role of fairness in AI/ML, drawing on real-life examples of bias and disparity in machine learning algorithms. He emphasizes the importance of model governance, accountability, and ownership in ensuring fairness. The episode explores algorithmic fairness issues, the consequences of biased algorithms, and the need for human involvement. Nick also offers advice for organizations assessing their AI risk and advocates seeking outside help when implementing fairness in ML models.
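
As background for the episode's theme of measuring disparities, below is a minimal sketch of one widely used disparity metric, the adverse impact ratio (the selection rate of a protected group divided by that of a reference group), with the 0.8 threshold coming from the common "four-fifths" rule of thumb. The group labels, data, and function names are illustrative assumptions and are not taken from the episode or from SolasAI's tooling.

```python
# Minimal sketch: adverse impact ratio (AIR) as one way to quantify disparity.
# Group labels, toy data, and the 0.8 threshold are illustrative assumptions.
import numpy as np

def selection_rate(approved: np.ndarray, group: np.ndarray, value: str) -> float:
    """Fraction of members of the given group that the model approves."""
    mask = group == value
    return float(approved[mask].mean())

def adverse_impact_ratio(approved: np.ndarray, group: np.ndarray,
                         protected: str, reference: str) -> float:
    """Selection rate of the protected group divided by that of the reference group."""
    return selection_rate(approved, group, protected) / selection_rate(approved, group, reference)

# Toy example: model decisions (1 = approved) and a demographic attribute.
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

air = adverse_impact_ratio(approved, group, protected="B", reference="A")
print(f"Adverse impact ratio: {air:.2f}")  # ~0.67 here; values below ~0.8 often flag potential disparity
```

A ratio alone does not establish unfairness; as the discussion stresses, metrics like this are inputs to a governance process in which humans own and account for the final decision.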
Chapters
Introduction
00:00 • 3min
Ensuring Fairness and High Predictiveness in ML Models
03:00 • 19min
Flawed Assumptions in Predicting Healthcare Outcomes
21:55 • 2min
Algorithmic Fairness Issues and Consequences
23:57 • 3min
Accountability, Responsibility, and Fairness in Model Development
26:46 • 6min
Implementing Fairness in Machine Learning Models: Seeking Help and Giving Authority
32:46 • 3min