

Mini Episode: ICE/Clearview, Race Detection and Schools
Aug 16, 2020
This week's discussion dives into the controversial partnership between ICE and Clearview AI, spotlighting the ethical dilemmas of facial recognition in immigration enforcement. The podcast also tackles the rise of race detection software and its potential for discrimination, along with the controversy over algorithmically assigned exam grades that reinforced existing inequalities. Finally, it raises urgent questions about privacy and safety as AI technologies are deployed for health monitoring while schools reopen. Tune in for a thought-provoking look at these pressing issues!
AI Snips
ICE and Clearview AI Partnership
- ICE signed a $224,000 contract with Clearview AI for mission support.
- Clearview AI is known for aggressively scraping images from social media and has faced cease-and-desist orders over the practice.
Race Detection Software's Rise and Risks
- Race detection software is growing, with companies using it to analyze customer behavior and preferences.
- This raises concerns about potential discrimination, even in seemingly harmless applications.
AI Grading Controversy
- A-level and GCSE students had their grades determined by an AI that relied on schools' historical data rather than individual performance.
- This approach, which prioritized preserving the statistical spread of grades, reinforced existing inequalities, much like the IBO's earlier AI grading fiasco.