Unlocking Us with Brené Brown

Dr. S. Craig Watkins on Why AI’s Potential to Combat or Scale Systemic Injustice Still Comes Down to Humans

Apr 3, 2024
Dr. S. Craig Watkins, an AI expert, talks with Brené Brown about the alignment problem in AI systems and the potential for scaling injustice. They discuss who needs to be involved in building AI systems aligned with democratic values, and how to put ethical principles in place so that injustice is not scaled in high-stakes environments such as healthcare and criminal justice. They emphasize the importance of intentional policies and expert involvement in guiding the future development of AI technologies.
AI Snips
INSIGHT

AI and Bias

  • AI systems in organizations are being touted as bias eliminators in processes like hiring and service delivery.
  • However, there's concern that these systems may not eliminate bias but rather scale existing biases, especially in high-stakes environments.
INSIGHT

Fairness in AI

  • AI developers have different approaches to fairness, such as creating "race-unaware" models by removing racial data.
  • However, studies show that models can still predict race with high accuracy even without explicit markers, suggesting deeper racial signals that are difficult to detect and remove.
ADVICE

Multidisciplinary Expertise in AI

  • Include diverse experts such as social scientists, humanists, and ethicists in AI development, not just engineers.
  • This multidisciplinary approach ensures that the complexities of systemic inequalities are considered and addressed.