

EP 141: How To Understand and Fix Biased AI
Nov 9, 2023
Nick Schmidt, Founder & CTO of SolasAI & BLDS, LLC, dives into the challenging world of biased AI models. He discusses how biases infiltrate algorithms across various platforms, from chatbots to healthcare applications. Nick shares a practical three-step approach to tackle discrimination and emphasizes the paradox of using AI to fix its own biases. With insights on the impact of user interactions and the need for algorithmic fairness, this conversation highlights the essential balance between innovation and ethics in AI development.
AI Snips
Misused Algorithm
- An algorithm designed to predict healthcare costs was repurposed to predict health outcomes.
- Because historically less money was spent on Black patients' care, the model systematically underestimated how sick they were.
Bias in Image Generators
- AI image generators often show bias in their outputs, like depicting tech CEOs as white men.
- This reflects biases in training data and can perpetuate stereotypes if not addressed.
AI Fairness Framework
- Use the three-step burden-shifting process to assess AI fairness.
- Evaluate the model for discrimination, justify its business purpose, and search for less discriminatory alternatives.
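The first step of the burden-shifting process, testing for discrimination, is often done by comparing group-level outcome rates. The sketch below is a minimal illustration using the adverse impact ratio and the common "four-fifths" rule of thumb; the group names, approval rates, and 0.8 threshold are illustrative assumptions, not anything stated in the episode.

```python
# Hypothetical sketch of a disparity check (adverse impact ratio, AIR).
# Group names and rates below are made up for illustration.

def adverse_impact_ratio(selection_rates, reference_group):
    """Ratio of each group's selection rate to the reference group's rate."""
    ref = selection_rates[reference_group]
    return {group: rate / ref for group, rate in selection_rates.items()}

# Example: model approval rates by (hypothetical) group.
rates = {"group_a": 0.60, "group_b": 0.42}
airs = adverse_impact_ratio(rates, "group_a")

for group, air in airs.items():
    # A common rule of thumb flags an AIR below 0.8 for further review.
    status = "review" if air < 0.8 else "ok"
    print(f"{group}: AIR={air:.2f} ({status})")
```

A flagged ratio is only the start of the analysis: the model owner must then justify the model's business purpose and look for a less discriminatory alternative that serves it equally well.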