Speaking of intelligence

Google DeepMind: The Podcast

Is Your Toxicity Classifier Biased?

The implications of overzealous toxicity classifiers were laid bare in a recent paper, which found that tweets containing words used to describe marginalized groups, such as "queer", were one and a half times more likely to be flagged as offensive. The result is that people who are already marginalized are being unfairly policed for their language by an algorithm. And even if you could somehow get around this extremely thorny issue, offensive words aren't the end of the story.
