

Facebook Abandons Facial Recognition. Should Everyone Else Follow Suit? With Luke Stark - #534
Nov 8, 2021
Join Luke Stark, an ethics and AI researcher at Western University, as he critiques facial recognition technology, likening it to plutonium. He discusses his paper on physiognomic AI, which highlights the inherent racism of using facial features to make judgments about people. Luke also weighs in on Facebook's recent announcement that it will shut down its facial recognition system, arguing the move is less groundbreaking than it seems. The conversation explores the biases that harm marginalized communities and the urgent need for a regulatory framework to address the ethical concerns surrounding this controversial technology.
Episode notes
Facial Recognition's Fundamental Flaw
- Luke Stark argues that facial recognition is not merely dangerous when misused but fundamentally flawed.
- Its core design, quantifying faces, perpetuates racist categorization, making the technology inherently "toxic."
The Rotten Onion Analogy
- Stark compares facial recognition to a rotten onion: even if the technical biases are peeled away layer by layer, the core remains problematic.
- Quantifying faces, even solely for identification, inherently participates in racist classification.
The Racist Nature of Facial Recognition
- Stark believes all facial recognition is fundamentally racist because quantifying faces creates hierarchies among people.
- This digitization, regardless of intent, perpetuates racist classification by rendering judgments based on physical traits.