

Hate Speech, Applied AI, NYPD, & Grades
Aug 26, 2020
Daniel Bashir, an AI news summarizer, joins Stanford PhDs Andrey Kurenkov and Sharon Zhou to delve into recent AI developments. They tackle controversial topics like NYPD's facial recognition use during protests, raising ethical concerns about surveillance. The discussion shifts to the struggles of social media platforms, especially Facebook, in managing hate speech and misinformation. They also critique the fairness of algorithms used for student grading during the pandemic, highlighting the need for better methods to support educational equity.
Facebook's AI Challenge
- Facebook uses AI to detect and remove hate speech and misinformation.
- However, it struggles with toxic images and mixed-media content like memes.
AI's Ivory Tower
- Many AI researchers prioritize theoretical problems over real-world applications.
- This focus on benchmarks can lead to overlooking biases and neglecting impactful applications.
NYPD's Use of Facial Recognition
- The NYPD used facial recognition technology in a raid on a Black Lives Matter activist's apartment.
- The extent and justification of its use remain unclear, raising concerns about surveillance and free speech.