
156  |  Visualizing Fairness in Machine Learning with Yongsu Ahn and Alex Cabrera


Is There a Difference Between the Output and the Input?

Usually the way we try to define fairness, or quantify it, is in the output. We don't really look at what features are used. So if, for example, in the recidivism prediction case, as an African American male you're more likely to be given a higher risk score even though you're just as likely to recommit a crime. I'm wondering, can you guys maybe describe one or two specific examples where these kinds of problems can arise?
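As an aside: the output-based notion of fairness described here is often measured as a gap in error rates between groups (e.g., whether one group is falsely flagged high-risk more often than another). A minimal sketch of such a measurement, using entirely synthetic data and hypothetical function names not from the episode, might look like:

```python
# Illustrative sketch (not from the episode): measuring an output-based
# fairness gap, in the spirit of the recidivism example discussed.
# All data and function names below are synthetic/hypothetical.

def false_positive_rate(y_true, y_pred):
    """Fraction of true negatives (did not reoffend) that were
    flagged high-risk anyway."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_gap(y_true, y_pred, group):
    """Difference in false positive rates between groups 'a' and 'b'."""
    a = [i for i, g in enumerate(group) if g == "a"]
    b = [i for i, g in enumerate(group) if g == "b"]
    fpr_a = false_positive_rate([y_true[i] for i in a],
                                [y_pred[i] for i in a])
    fpr_b = false_positive_rate([y_true[i] for i in b],
                                [y_pred[i] for i in b])
    return fpr_a - fpr_b

# Synthetic example: group "a" is flagged high-risk more often
# even though both groups have identical true reoffense labels.
y_true = [0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0, 0, 1]
group  = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(fpr_gap(y_true, y_pred, group))  # positive gap: group "a" is disadvantaged
```

A gap of zero between groups would satisfy the "equalized false positive rates" criterion; a positive gap quantifies the disparity the speaker describes, where members of one group receive higher risk scores despite being no more likely to reoffend.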

