

How AI makes policing more racist
Jul 2, 2020
Sigal Samuel, co-host of Vox's Future Perfect podcast, dives into the troubling intersection of AI and policing. She explains how algorithms like facial recognition absorb societal biases from their training data, amplifying racial injustice. Samuel argues for federal regulation of AI in law enforcement, with oversight modeled on the FDA, and explores legislative efforts like the Facial Recognition and Biometric Technology Moratorium Act, which aims to protect marginalized communities from discriminatory practices.
AI Snips
Bias in Facial Recognition Data
- Facial recognition systems are trained on biased data, including police arrest records and images scraped from the web.
- Because those records already reflect the over-policing of Black and brown communities, models trained on them reproduce that bias, feeding a self-reinforcing cycle that disproportionately harms Black and brown individuals (see the sketch below).
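To make that mechanism concrete, here is a minimal, purely illustrative Python sketch with hypothetical numbers, not real benchmark data. Face-recognition evaluations compare "impostor" similarity scores (scores between two different people) against a fixed match threshold; if a model's impostor scores run higher for a group it saw less of during training, the same global threshold produces more false matches for that group. The group labels, score distributions, and threshold below are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
THRESHOLD = 0.75  # one global operating point, applied to everyone

def false_match_rate(impostor_mean, impostor_std=0.10, n_pairs=1_000_000):
    """Monte Carlo estimate of the fraction of different-person
    comparisons whose similarity score crosses the match threshold."""
    scores = rng.normal(impostor_mean, impostor_std, size=n_pairs)
    return (scores >= THRESHOLD).mean()

# Hypothetical impostor-score means; the gap stands in for a model
# trained on far fewer faces from group B.
print(f"group A false-match rate: {false_match_rate(0.30):.1e}")
print(f"group B false-match rate: {false_match_rate(0.50):.1e}")
```

Under these assumed numbers, group B's false-match rate comes out orders of magnitude higher than group A's, even though both groups pass through the exact same matching pipeline and threshold.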
Wrongful Arrest Due to Facial Recognition
- Robert Williams was wrongfully arrested after facial recognition software incorrectly matched his photo to security-camera footage.
- His case illustrates the racial bias built into the technology and its potential to generate wrongful accusations.
Positive Use of AI in Policing
- In Chicago's Englewood neighborhood, AI was used to identify crime hotspots, but instead of deploying more police to those areas, officials collaborated with community leaders on interventions.
- Crime decreased as a result, demonstrating that AI in policing can be put to constructive use.