
Me, Myself, and AI

Protecting Society From AI Harms: Amnesty International’s Matt Mahmoudi and Damini Satija (Part One)

Aug 29, 2023
Amnesty International researchers discuss the risks of AI tools for human rights and highlight examples like facial recognition tracking of social activists and automated decisions in public housing. They emphasize that fixing bias and discrimination requires human intervention and changes to public policy.
28:19


Quick takeaways

  • AI tools like facial recognition systems and automated decision-making algorithms can put human rights at risk by enabling surveillance, discrimination, and bias.
  • Addressing bias, discrimination, and inequality in AI technology requires human intervention and changes to public policy, as technology alone cannot solve these problems.

Deep dives

AI and Human Rights: Protecting Marginalized Communities

Amnesty International researchers Matt Mahmoudi and Damini Satija discuss their work protecting human rights when AI tools are deployed. They focus on AI-driven surveillance, such as facial recognition, which often leads to discriminatory outcomes and the erosion of rights. Their research has traced facial recognition deployments in cities such as New York City and Hyderabad, as well as in the occupied Palestinian territories. They investigate the companies developing these technologies and highlight the need for stronger safeguards and regulations to prevent rights violations.
