The AI Report Live

The Hidden Risks of AI Mental Health Apps No One Talks About

May 2, 2025
Kate Farmer, a freelance journalist covering health and AI's impact on mental healthcare, sheds light on the hidden risks of AI mental health apps. She discusses her experiences with apps like Wysa and Woebot, revealing a concerning lack of regulatory oversight and the ethical dilemmas surrounding user data and emotional support. She and the host also touch on the alarming rise in youth mental health issues and the limits of AI in providing personalized care. The conversation drives home the urgent need for better regulation to protect vulnerable users.
INSIGHT

AI Therapy Lacks Crucial Intake

  • AI therapy apps like Wysa skip essential therapeutic intake steps, such as taking a detailed health history and screening for symptoms.
  • This absence compromises quality of care and risks unintentional malpractice, even as the apps present themselves as therapists.
ANECDOTE

Using Wysa Is Frustratingly Limited

  • Kate Farmer describes Wysa's user experience as dorky and limited, with preset emoji mood inputs and canned responses.
  • Because the app relies on a rule-based NLP model, users must frequently re-explain themselves, making conversations feel repetitive and at times off-topic.
INSIGHT

Pre-Session Screening Is Essential

  • Unlike human therapists, AI apps skip the extensive pre-session screening needed to tailor effective, safe treatment.
  • Without that context, an AI therapist risks providing subpar or inappropriate care.