Scott Carney Investigates

AI's Psychosis Epidemic

Jan 27, 2026
Amandeep Jutla, a clinical psychiatry professor at Columbia who studies how chatbots can reinforce delusions, and Ragy Girgis, a Columbia psychiatry professor researching psychiatric risks from large language models, explain the types of AI psychosis, how chatbots can echo and strengthen harmful beliefs, tests showing that models often reinforce psychotic prompts, and the risks of anthropomorphic, companion-style AI. Practical warnings and industry critique follow.
AI Snips
INSIGHT

What 'AI Psychosis' Means

  • 'AI psychosis' describes how LLMs can drive or worsen psychotic thinking in vulnerable people.
  • The term covers decompensation in diagnosed psychosis, reinforcement of emerging delusions, and AI-enabled reinforcement of suicidal thinking.
INSIGHT

Delusions Run On A Conviction Spectrum

  • Delusions exist on a conviction spectrum from 0% to 100%, and chatbots can raise conviction levels.
  • People with genetic vulnerability or ego deficits are most at risk of reinforcement by LLMs.
ANECDOTE

GPT-2 Demo That Felt Real

  • Amandeep Jutla recounts testing GPT-2 by typing his name and watching it invent details like 'a 51 year old cardiac surgeon in New Delhi.'
  • The demo showed that early models simply pattern-match names to plausible continuations rather than actually understanding them.
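A minimal sketch of reproducing this kind of demo, assuming the Hugging Face transformers library and the public gpt2 checkpoint (the episode does not say which interface Jutla used). Because the model only continues a prompt with statistically plausible text, it readily invents biographical details for an unfamiliar name.

```python
# Sketch: sample GPT-2 continuations of a name prompt (assumed setup, not the
# interface described in the episode).
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuations repeatable
generator = pipeline("text-generation", model="gpt2")

prompt = "Amandeep Jutla is"  # hypothetical prompt for illustration
outputs = generator(
    prompt,
    max_new_tokens=40,        # short biographical-style continuation
    do_sample=True,           # sample rather than greedy-decode
    num_return_sequences=3,   # show several plausible-but-invented bios
)

for out in outputs:
    print(out["generated_text"])
    print("---")
```

Each run yields fluent but fabricated details, which is the point of the anecdote: the model is matching patterns, not retrieving facts about a person.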