
Speaking of intelligence

Google DeepMind: The Podcast

CHAPTER

Is Your Toxicity Classifier Biased?

The implications of overzealous toxicity classifiers were laid bare in a recent paper, which found that tweets containing words used to describe marginalized groups, such as "queer", were one and a half times more likely to be flagged as offensive. The result is that people who are already marginalized are being unfairly policed for their language by an algorithm. And even if you could somehow get around this extremely thorny issue, offensive words aren't the end of the story.
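To make that figure concrete, a disparity like the one described is typically measured by comparing flag rates on matched sets of texts with and without an identity term. Below is a minimal sketch of such a comparison; `score_toxicity` is a hypothetical stand-in for whichever classifier is being audited (assumed to return a score in [0, 1]), and none of these names come from the episode or the paper.

```python
from typing import Callable, List


def flag_rate(texts: List[str],
              score_toxicity: Callable[[str], float],
              threshold: float = 0.5) -> float:
    """Fraction of texts whose toxicity score meets the flagging threshold."""
    if not texts:
        return 0.0
    flagged = sum(1 for t in texts if score_toxicity(t) >= threshold)
    return flagged / len(texts)


def flag_rate_ratio(with_term: List[str],
                    without_term: List[str],
                    score_toxicity: Callable[[str], float]) -> float:
    """How many times more often texts containing the identity term are flagged.

    A value around 1.5 would mirror the disparity described above.
    """
    baseline = flag_rate(without_term, score_toxicity)
    if baseline == 0.0:
        return float("inf")
    return flag_rate(with_term, score_toxicity) / baseline
```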

