
156  |  Visualizing Fairness in Machine Learning with Yongsu Ahn and Alex Cabrera

Data Stories


Is There a Difference Between the Output and the Input?

Usually the way we try to define fairness, or quantify it, is in the output. We don't really look at what features are used. So, for example, in the recidivism prediction case, for African American males you're more likely to be given a higher risk score even though you're just as likely to recommit a crime. I'm wondering if we can... can you guys maybe describe one or two specific examples where this kind of problem can arise?
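The question refers to output-based fairness metrics, like the disparity in false positive rates that the COMPAS recidivism analysis made famous: people who did not reoffend being flagged as high risk at different rates across groups. As a minimal sketch of what such an output-level check might look like (the arrays, group labels, and helper function here are made up for illustration, not from the episode):

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR: fraction of actual negatives (did not reoffend)
    that were predicted positive (flagged high risk)."""
    negatives = (y_true == 0)
    if not negatives.any():
        return float("nan")
    return np.mean(y_pred[negatives] == 1)

# Illustrative data: 1 = reoffended / flagged high risk, 0 otherwise.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Output-based fairness check: compare FPRs across groups.
# A large gap means one group's non-reoffenders are flagged
# as high risk far more often than the other's.
for g in np.unique(group):
    mask = (group == g)
    print(g, false_positive_rate(y_true[mask], y_pred[mask]))
```

Note that this only inspects predictions, which is exactly the limitation raised in the question: it says nothing about which input features drove those scores.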

