

‘AI psychosis’: could chatbots fuel delusional thinking?
Aug 28, 2025
Dr. Hamilton Morrin, a psychiatrist and researcher at King's College London, dives into the concerning phenomenon of 'AI psychosis', in which intensive chatbot use fuels delusional thinking. He discusses the psychological implications and potential risks of large language models, emphasizing the need for collaboration between AI developers and mental health professionals. Morrin highlights the dangers of emotional reliance on chatbots, considers who might be most at risk, and argues for better safeguards to ensure safe interactions.
AI Use Can Lead To Delusion-Like Beliefs
- AI chatbots can induce delusion-like beliefs when users rely on them intensively and anthropomorphise their outputs.
- The phenomenon has been termed "AI psychosis" to describe technology-linked breaks with reality.
Reddit Case Sparked Research Interest
- A Reddit post described someone's partner using ChatGPT intensively and coming to believe it was sentient and had chosen them for a mission.
- That example alerted Dr Hamilton Morrin and colleagues to the issue and prompted their research.
Three Common Delusional Themes
- Reported AI-related delusions cluster into three themes: beliefs in hidden truths, perceiving AI as sentient or godlike, and intense romantic attachment to a chatbot.
- These patterns differ from full psychotic disorders and often present primarily as delusional thinking without other psychotic symptoms.