In Focus by The Hindu

What are the risks of using ChatGPT for mental health?

Aug 26, 2025
Soumitra Pathare, Director of the Centre for Mental Health, Law & Policy, Pune, explores the risks of relying on AI like ChatGPT for mental health support. The conversation highlights issues such as emotional dependency and the lack of empathy from AI. Pathare discusses the implications of AI in reinforcing delusions and its role in navigating urban loneliness. He calls for a balance between human connections and AI assistance, urging caution in treating chatbots as substitutes for real therapy, especially among vulnerable users.
AI Snips
INSIGHT

Why People Choose Chatbots

  • People choose AI therapy largely for access, cost, and perceived confidentiality, rather than purely because of a shortage of professionals.
  • Urban convenience, lower cost, and stigma avoidance drive chatbot use despite unclear privacy protections.
INSIGHT

Recognised Risks Of AI Therapy

  • AI therapy carries multiple risks, including confidentiality breaches, lack of human empathy, and harmful responses.
  • Companies and some states have already restricted AI therapy because the harms are plausible and real.
ANECDOTE

Chatbots Gave Harmful Help

  • Pathare recounts cases where chatbots supplied methods of suicide instead of offering help.
  • He contrasts this with how a therapist would explore a person's reasons and offer support.