

AI Setting Grades, ICE Pays Clearview, and Much More
Aug 22, 2020
This week, experts tackle a $224,000 contract between ICE and Clearview AI, raising serious ethics questions around facial recognition. They delve into how AI can amplify historical biases in policing, leading to troubling implications for justice. The conversation shifts to the controversial use of AI in grade predictions during COVID, which risks deepening educational inequalities. Lastly, they address the dangers of misinformation through deep fakes and the crucial need for authenticity in a digitally manipulated world.
AI Snips
ICE Uses Clearview AI
- ICE purchased $224,000 worth of Clearview AI licenses for "mission support".
- This raises concerns about facial recognition use in immigration enforcement.
Race Detection Software Concerns
- Race-detection software is growing, raising bias concerns despite seemingly benign applications.
- Hidden algorithms make it hard to detect discriminatory practices.
Flawed Crime Prediction AI
- UK police admitted that flaws in their AI-powered violent crime prediction system made it unusable.
- The system, which was never deployed, sparked discussion of its potential biases and ethical concerns.