Marketplace Tech

NYC's child welfare agency uses AI to scrutinize marginalized families, recent investigation finds

May 29, 2025
Colin Lecher, a reporter at The Markup, discusses his investigation into the NYC Administration for Children's Services' use of AI to scrutinize families. The conversation reveals how predictive algorithms can perpetuate historical biases against marginalized families, and Lecher shares a mother's harrowing experience with the system, raising questions about the fairness of algorithmic assessments. The episode also examines the ethical dilemmas of using AI in child welfare, the psychological toll on families flagged by biased assessments, and the challenges of addressing these problems.
INSIGHT

How ACS Uses AI for Risk Assessment

  • ACS uses predictive AI to score families for risk by analyzing historical cases of child harm.
  • The system considers 279 variables, including past ACS involvement and socioeconomic factors, to identify high-risk families.
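
As a purely illustrative sketch of how this kind of predictive risk scoring works (not ACS's actual model; its 279 variables and weights are not public, and every feature name and number below is hypothetical), a score can be computed as a weighted sum of family-history features and compared to a threshold:

    # Purely illustrative: the real ACS model, its 279 variables, and its
    # weights are not public. Every feature name and number here is made up.

    # A handful of stand-in features (the real system reportedly uses 279).
    family_features = {
        "prior_acs_involvement": 1,        # 1 = yes, 0 = no
        "num_prior_reports": 2,
        "caregiver_mental_health_flag": 1,
        "neighborhood_poverty_rate": 0.34,
    }

    # Hypothetical weights such a model might learn from historical harm cases.
    weights = {
        "prior_acs_involvement": 1.5,
        "num_prior_reports": 0.8,
        "caregiver_mental_health_flag": 1.2,
        "neighborhood_poverty_rate": 2.0,
    }

    # Risk score = weighted sum; families above a cutoff are flagged for extra scrutiny.
    risk_score = sum(weights[k] * v for k, v in family_features.items())
    flagged_for_review = risk_score > 3.0

    print(f"risk score = {risk_score:.2f}, flagged = {flagged_for_review}")

In a setup like this, historical features such as prior agency involvement can push a family over the threshold even when nothing about the current report has changed, which is the dynamic the episode describes.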
ANECDOTE

Carlina Hamblin's Family Scrutiny Story

  • Carlina Hamblin's family experienced scrutiny partly due to past ACS involvement and mental health challenges.
  • Even new cases can be flagged by AI based on historical data about a family.
INSIGHT

AI Risk of Reinforcing Bias

  • The AI tool uses geography and other proxies that can correlate with race or socioeconomic status.
  • This can unintentionally reinforce historical biases in child welfare scrutiny.
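
To make the proxy point concrete, here is a small synthetic example, with entirely made-up data and no connection to the actual ACS tool: a scorer that never sees a family's demographic group still produces systematically different average scores by group, because a geographic feature it does see is correlated with group.

    import random

    random.seed(0)

    # Entirely synthetic illustration, unrelated to the real ACS tool: the scorer
    # never sees a family's demographic group, but it does see neighborhood,
    # and neighborhood is correlated with group.
    def make_family(group):
        # In this toy world, group "A" families mostly live in neighborhood 1,
        # group "B" families mostly in neighborhood 0.
        in_nbhd_1 = random.random() < (0.8 if group == "A" else 0.2)
        return {"neighborhood": 1 if in_nbhd_1 else 0,
                "prior_reports": random.randint(0, 3)}

    def score(family):
        # Only neighborhood and prior reports feed the score -- never group.
        return 2.0 * family["neighborhood"] + 0.5 * family["prior_reports"]

    families = [("A", make_family("A")) for _ in range(1000)]
    families += [("B", make_family("B")) for _ in range(1000)]

    for g in ("A", "B"):
        scores = [score(f) for grp, f in families if grp == g]
        print(f"group {g}: mean score = {sum(scores) / len(scores):.2f}")

The gap in average scores comes entirely from the correlated feature, which is how historical patterns of scrutiny can be reinforced without any explicit use of race.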