

Towards a Systems-Level Approach to Fair ML with Sarah M. Brown - #456
Feb 15, 2021
In this discussion, Sarah M. Brown, an Assistant Professor at the University of Rhode Island, dives into the crucial need for a systems-level approach to fairness in AI. She introduces Wiggum, a groundbreaking tool for detecting bias that fosters collaboration between technical experts and social scientists. The conversation also touches on how aggregated data can mislead perceptions of fairness, emphasizing the significance of granular analysis. Brown explores the complexities of defining fairness and the importance of innovative educational frameworks in addressing ethical algorithmic challenges.
UC Berkeley Admissions
- In 1973, UC Berkeley faced sex-discrimination accusations because men were admitted at a higher rate overall.
- A department-by-department analysis reversed the picture: most departments admitted women at equal or higher rates; women had simply applied more often to the most selective departments. This is the classic example of Simpson's Paradox, illustrated in the sketch below.
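The reversal is easiest to see with numbers. The sketch below uses hypothetical admissions counts (not the actual 1973 Berkeley figures) to show how each department can admit women at a higher rate while the aggregate rate still favors men:

```python
import pandas as pd

# Illustrative counts (not real Berkeley data) exhibiting Simpson's Paradox:
# women are admitted at an equal-or-higher rate in each department, yet the
# aggregate rate favors men, because women apply mostly to the selective dept.
data = pd.DataFrame([
    # department, gender, applicants, admitted
    ("A", "men",   800, 480),   # 60% admitted
    ("A", "women", 100,  65),   # 65% admitted
    ("B", "men",   200,  20),   # 10% admitted
    ("B", "women", 900, 108),   # 12% admitted
], columns=["dept", "gender", "applicants", "admitted"])

# Per-department rates: women do as well or better in both A and B.
by_dept = data.assign(rate=data.admitted / data.applicants)
print(by_dept[["dept", "gender", "rate"]])

# Aggregate rates: men appear strongly favored overall (50% vs. 17.3%).
agg = data.groupby("gender")[["applicants", "admitted"]].sum()
agg["rate"] = agg.admitted / agg.applicants
print(agg)
```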
Wiggum's Name
- The tool's name, "Wiggum," references Chief Wiggum, the Simpsons' police chief.
- The name is a pun: like a detective, the tool hunts for instances of Simpson's Paradox hidden in aggregated data (see the sketch below).
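As a rough illustration of what such a detector looks for, here is a minimal sketch (in pandas, and not Wiggum's actual API) that flags subgroups whose trend reverses the aggregate trend, the statistical signature of Simpson's Paradox:

```python
import numpy as np
import pandas as pd

def simpsons_candidates(df, x, y, group):
    """Flag subgroups whose x-y trend reverses the aggregate trend.

    A hypothetical sketch of a Simpson's Paradox check, not Wiggum's API:
    a sign flip between the overall correlation and a within-group
    correlation is the classic signature of the paradox.
    """
    overall = df[x].corr(df[y])
    flags = []
    for level, sub in df.groupby(group):
        if len(sub) < 2:  # correlation is undefined for a single row
            continue
        within = sub[x].corr(sub[y])
        if np.sign(within) != np.sign(overall):
            flags.append({"group": level, "overall": overall, "within": within})
    return pd.DataFrame(flags)
```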
Tools for Bias Detection
- Develop tools that translate between domain experts and machine learning engineers.
- Such tools should help experts surface potential biases in the data before any model is trained (a minimal example follows).
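As one concrete form this could take, the sketch below (hypothetical column and function names, assuming a binary 0/1 outcome label) summarizes per-group outcome base rates so a domain expert can spot disparities before any model is trained:

```python
import pandas as pd

def group_base_rates(df, label, group):
    """Per-group sample sizes and outcome base rates, computed pre-training.

    Assumes `label` is a binary 0/1 column; large gaps in positive_rate or
    share_of_data are prompts for a conversation with domain experts,
    not automatic verdicts of bias.
    """
    summary = df.groupby(group).agg(
        n=(label, "size"),
        positive_rate=(label, "mean"),
    )
    summary["share_of_data"] = summary["n"] / len(df)
    return summary

# Usage with hypothetical data:
# print(group_base_rates(applications, label="admitted", group="gender"))
```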