On with Kara Swisher

Did a Chatbot Cause Her Son’s Death? Megan Garcia v. Character.AI & Google

Dec 5, 2024
Megan Garcia, the grieving mother of Sewell Setzer III, shares the heartbreaking story of her son's suicide after he engaged with AI chatbots. She discusses her lawsuit against Character.AI and Google, which she blames for failing to protect her son. Joining her is Mitali Jain, an advocate for tech responsibility, who emphasizes the urgent need for regulations to safeguard children from the potential harms of AI. They delve into the emotional toll on families, examine tech companies' ethical responsibilities for child safety, and call for accountability and stronger protective measures.
ANECDOTE

Character.AI Discovery

  • Megan Garcia discovered her son Sewell was using Character.AI, initially assuming it was a harmless game.
  • After his death, she found disturbingly sophisticated, sexual, and dark conversations between Sewell and a Daenerys Targaryen chatbot.
INSIGHT

Deceptive Design

  • The sexual nature of Character.AI's conversations encourages deception, as children hide such interactions from their parents.
  • This secrecy creates a "perfect storm," with the platform fostering concealment of potentially harmful conversations.
INSIGHT

Behavioral Changes

  • Sewell's behavioral changes, including declining grades and increasing isolation, coincided with his use of Character.AI.
  • This raises concerns about a direct link between the chatbot interactions and his deteriorating mental health.