

Algorithmic Injustices and Relational Ethics with Abeba Birhane - #348
Feb 13, 2020
In this conversation, Abeba Birhane, a PhD student at University College Dublin and author of a notable paper on algorithmic injustices, dives into the ethics of AI. She discusses the harm of categorization and how traditional fairness metrics overlook marginalized communities. Birhane advocates for relational ethics, arguing for a focus on societal impacts rather than algorithmic fairness alone. The conversation also touches on the complexities of language in machine learning and critiques the notion of "robot rights" in favor of prioritizing human welfare.
Episode notes
Relational Ethics
- Relational ethics prioritizes the welfare of those most negatively impacted by technology.
- It questions whether certain technologies should exist, rather than just mitigating biases.
Facial Recognition Example
- Facial recognition systems are a prime example of a technology with potentially harmful applications.
- Relational ethics suggests questioning whether such systems are necessary at all.
Prioritizing the Disadvantaged
- Traditional fairness approaches, optimized for the majority, often harm minority groups.
- Relational ethics emphasizes prioritizing the welfare of the least privileged instead.