CXOTalk

AI Inequality EXPOSED: When Algorithms Fail | CXOTalk #882

Jun 9, 2025
In this engaging conversation, Kevin De Liban, founder of TechTonic Justice and former legal aid attorney, unveils the disturbing effects of AI on 92 million low-income Americans. He shares his pivotal legal victory challenging an algorithm that harmed disabled Medicaid recipients. De Liban explains why these technologies often reflect biases rather than neutrality, and how self-regulation typically falls short. He stresses the urgent need for robust regulation to protect vulnerable communities and offers actionable advice for technology leaders to ensure ethical practices in AI.
ANECDOTE

Arkansas AI Cuts Home Care Hours

  • In Arkansas, an algorithm replaced nurse discretion to allocate home care hours for disabled Medicaid recipients.
  • The cuts caused severe harm, including reduced care and intolerable suffering, and were successfully challenged in court.
INSIGHT

AI Is Not Neutral Technology

  • AI systems are not neutral; they are designed by humans and carry their biases, whether intentional or not.
  • These systems often empower decision-makers while restricting low-income people's access to benefits.
INSIGHT

AI Affects 92 Million Americans

  • Ninety-two million low-income Americans have key life decisions influenced by AI.
  • AI's reach also extends to middle-class professionals whose employers use it for workplace oversight.