

BLM, Genderify, Jobs, and AI Parody
Aug 7, 2020
Dive into the complexities of facial recognition technology, starting with a tool that safeguards protester identities behind playful emoji masks. Unpack the pitfalls of Genderify, an AI tool whose gender predictions were so badly skewed that it sparked outrage over bias. Explore the ethical maze of AI in fashion and hiring, questioning the fairness of algorithms. Delve into Microsoft's controversial partnership with law enforcement and the implications for civil rights. Finally, grapple with copyright dilemmas posed by AI-generated parodies, spotlighting the future of creativity and legal boundaries.
Anonymizing Protesters
- Sharon Zhou created a tool to anonymize protesters' faces in photos.
- It detects each face and covers it with a BLM fist emoji (a rough sketch of the idea follows this list).
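The snips above name the ingredients but not the plumbing. Here is a minimal sketch of the same idea, not Zhou's actual tool: it assumes OpenCV's bundled Haar frontal-face cascade for detection and a hypothetical transparent PNG, fist_emoji.png, standing in for the BLM fist emoji.

```python
# Minimal sketch: detect faces, then paste an emoji over each one.
# Not Sharon Zhou's implementation; fist_emoji.png is a placeholder asset.
import cv2
from PIL import Image

def anonymize(photo_path: str, emoji_path: str, out_path: str) -> None:
    photo = Image.open(photo_path).convert("RGBA")
    emoji = Image.open(emoji_path).convert("RGBA")  # RGBA so transparency composites

    # OpenCV ships a pretrained frontal-face Haar cascade; run it on grayscale.
    gray = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Cover each detected face with the emoji, scaled to its bounding box.
    for (x, y, w, h) in faces:
        patch = emoji.resize((int(w), int(h)))
        photo.paste(patch, (int(x), int(y)), mask=patch)  # mask keeps transparency

    photo.convert("RGB").save(out_path)

anonymize("protest.jpg", "fist_emoji.png", "protest_anonymized.jpg")
```

A Haar cascade is only a stand-in here; a production anonymizer would likely use a stronger detector, since every missed face is a failed anonymization.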
Genderify's Bias
- Genderify, an AI startup, shut down after public backlash.
- Its gender predictions, made from a name, username, or email address, were heavily biased.
Genderify's Flawed Predictions
- Genderify labeled Anima Anandkumar as only 1.6% female.
- It heavily skewed professional titles like "doctor" and "professor" towards male (a toy probe illustrating this kind of test follows below).
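Genderify's API is now defunct, so as an illustration of how users surfaced this skew, here is a toy probe harness; predict_gender is a hypothetical stand-in whose hard-coded, arbitrary scores only mimic the reported failure mode, not real Genderify output.

```python
# Toy probe harness. predict_gender is a fake stand-in for Genderify's defunct
# API; its numbers are arbitrary and exist only to show the shape of the test.
def predict_gender(text: str) -> dict:
    male_leaning = {"doctor", "professor", "scientist"}
    male = 0.95 if any(word in text.lower() for word in male_leaning) else 0.50
    return {"male": male, "female": round(1.0 - male, 2)}

# Feeding in professional titles makes the skew visible at a glance.
for probe in ["doctor", "professor", "nurse"]:
    print(f"{probe!r} -> {predict_gender(probe)}")
```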