

AI – Microsoft boss troubled by rise in reports of 'AI psychosis'
Aug 21, 2025
Barry O'Sullivan, Director at the Insight Centre for Data Analytics, and Elaine Burke, tech journalist and host of 'For Tech's Sake,' dig into the unsettling issue of 'AI psychosis.' They discuss how AI chatbots can mislead vulnerable users, how chatbots differ from traditional search engines, and the pressing need for regulation. O'Sullivan stresses the danger of users mistaking AI for a conscious entity and the need for education on what AI can and cannot do to counter these misconceptions.
AI Snips
Perceived Consciousness Feels Real
- Mustafa Suleyman warns of 'AI psychosis', where users come to believe chatbots are sentient and form harmful attachments to them.
- Perceived consciousness can become reality for users, even when the systems have no true awareness.
Companion Bots Can Create False Bonds
- Companies build companion AIs and sometimes design them to display vulnerability, deepening user bonds and creating the feeling of a real relationship.
- People have even travelled to meet chatbots in person after coming to believe they were real companions.
Chatbots Predict Text, They Don't Understand
- Large language models generate statistically likely next words rather than understanding or reasoning as humans do (see the sketch after this list).
- Treating chatbots as thinking agents misreads how they produce text and inflates user expectations.
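
This is the mechanism O'Sullivan is pointing at: a language model picks the next word by sampling from a learned probability distribution, not by reasoning about meaning. The toy Python sketch below uses a hand-written bigram table with made-up probabilities as a stand-in for a real model; it is only an illustration of next-word sampling, not how any production chatbot is implemented.

```python
# Minimal sketch (illustrative only): text generation as repeated next-word sampling.
# The "model" here is a hypothetical, hand-made bigram table, not a trained LLM.
import random

# Hypothetical probabilities: P(next word | current word).
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "sat": {"quietly": 1.0},
}

def next_word(current: str) -> str:
    """Sample a statistically likely next word; no understanding involved."""
    dist = BIGRAMS.get(current, {"<end>": 1.0})
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs, k=1)[0]

def generate(start: str, max_words: int = 8) -> str:
    words = [start]
    while len(words) < max_words:
        w = next_word(words[-1])
        if w == "<end>":
            break
        words.append(w)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly"
```

The output can read as fluent, but every word is chosen only because it was probable given the words before it; that gap between fluency and comprehension is what inflates user expectations.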