Decoder with Nilay Patel

Recode Decode: Meredith Whittaker and Kate Crawford

Apr 8, 2019
Meredith Whittaker and Kate Crawford, co-founders of the AI Now Institute, dive deep into the societal implications of artificial intelligence. They discuss the dangers of "dirty data" and biased search results that can skew AI conclusions. They highlight the importance of diversity in tech, along with the ethical concerns raised by technologies like facial recognition. They also critique current AI self-regulation efforts and examine international approaches, notably China's social credit system. Their insights underscore the need for transparency, accountability, and a more inclusive tech landscape.
INSIGHT

AI in Sensitive Sectors

  • AI systems are being integrated into sensitive areas like criminal justice and education.
  • This raises concerns about potential biases and lack of guardrails in their implementation.
ANECDOTE

Dirty Data in Predictive Policing

  • The AI Now Institute studied predictive policing data from 13 U.S. jurisdictions that were under court-ordered oversight for biased or unlawful policing.
  • They found that data generated by corrupt police practices, such as planting evidence, was used to train predictive policing systems, perpetuating that bias.
ANECDOTE

Cat Example

  • Early image recognition systems trained only on images of white cats would misidentify darker cats.
  • This illustrates how limitations in training data lead to biased outcomes in AI systems.