

NYC's child welfare agency uses AI to scrutinize marginalized families, recent investigation finds
May 29, 2025
Colin Lecher, a reporter at The Markup who investigates AI's impact on society, discusses the controversial use of predictive AI by New York City's Administration for Children's Services. He highlights serious concerns about bias, explaining how models trained on historical data can unfairly target marginalized families. Lecher shares a poignant story of a mother caught in this web, illustrating the psychological toll on those flagged by the system. The conversation raises critical ethical questions about the intersection of technology and child welfare, urging listeners to consider whose voices are heard.
AI Snips
AI Flags High-Risk Families
- New York City's ACS uses an AI tool that scores families on 279 variables, including past case history and socioeconomic factors (see the illustrative sketch below).
- The tool aims to predict risky situations by identifying patterns from past cases in which children were seriously harmed.
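The episode does not describe the tool's internals, and ACS has not published them, so the sketch below is purely illustrative: a weighted sum over a few invented variables (standing in for the reported 279) compared against a made-up flagging threshold. All names, weights, and the cutoff are assumptions; the point is only to show the general shape of a predictive risk score and how historical factors can dominate it.

```python
# Hypothetical illustration only: the actual ACS model's variables, weights,
# and method are not public. This sketch shows the general shape of a
# predictive risk score: many family-level variables combined into a single
# number that is then compared against a flagging threshold.

# A handful of made-up variables standing in for the reported 279.
HYPOTHETICAL_WEIGHTS = {
    "prior_acs_cases": 0.9,            # past involvement with the agency
    "child_in_foster_care": 0.8,       # historical placement record
    "caregiver_foster_history": 0.5,   # caregiver's own time in foster care
    "neighborhood_poverty_rate": 0.3,  # socioeconomic proxy
}

FLAG_THRESHOLD = 1.5  # made-up cutoff for routing a case to extra review


def risk_score(family: dict[str, float]) -> float:
    """Weighted sum over whatever scored variables are present for a family."""
    return sum(
        HYPOTHETICAL_WEIGHTS[name] * value
        for name, value in family.items()
        if name in HYPOTHETICAL_WEIGHTS
    )


family = {
    "prior_acs_cases": 2,
    "child_in_foster_care": 1,
    "neighborhood_poverty_rate": 0.4,
}
score = risk_score(family)
print(score, "-> flagged" if score >= FLAG_THRESHOLD else "-> not flagged")
```

Because the inputs in a design like this are historical records, a family whose circumstances have since changed can keep scoring high; that path-dependence is the heart of the bias concern Lecher raises.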
AI Triggers Closer Monitoring
- The AI flags the cases it deems most dangerous, prompting supervisors to apply extra scrutiny.
- That scrutiny can include calls to teachers, relatives, and outside experts as part of a deeper investigation.
Mom's Story Reflects AI Flags
- Carlina Hamblin's history, including her own time in foster care, mental health struggles, and a child previously placed in foster care, matches the factors the AI might flag.
- Even though her life has changed, these historical factors could keep triggering close ACS scrutiny.