
Don't Worry About the Vase Podcast: AI Craziness Mitigation Efforts
Oct 28, 2025
This discussion dives into the notion of AI psychosis and the mental health risks associated with AI chatbots. Zvi Mowshowitz critiques OpenAI's and Anthropic's new mitigation efforts, covering updates on self-harm and emotional-reliance issues. The episode examines how models set boundaries around user attachment to AI and debates how effective the current instructions are. Alternatives to heavy-handed limits are proposed, with an emphasis on better calibration. Throughout, there is a caution against treating these challenges as catastrophic, focusing instead on practical harms.
AI Snips
Distinct Modes Of AI Mental-Health Risk
- Zvi categorizes AI-related mental-health harms into distinct phenomena like AI-as-social-relation, consciousness beliefs, addiction, and suicidality.
- These categories clarify different risks and guide targeted mitigation strategies.
Implement Layered Detection And Human Escalation
- Steven Adler recommends raising follow-up thresholds, nudging users to new chats, using classifiers, and keeping support staff on call.
- Implement layered detection and human escalation to manage sensitive mental-health interactions; see the sketch below.
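A minimal sketch of what such a layered pipeline could look like, assuming a hypothetical `risk_classifier` score per message and illustrative thresholds; the names, cutoffs, and `page_support_staff` escalation hook are placeholders, not any lab's actual system.

```python
# Sketch of layered detection and human escalation for sensitive conversations.
# All thresholds and hooks are assumptions for illustration only.

from dataclasses import dataclass

FOLLOW_UP_THRESHOLD = 0.4    # assumed: above this, the model asks a check-in question
NEW_CHAT_THRESHOLD = 0.6     # assumed: above this, the user is nudged to start a fresh chat
ESCALATION_THRESHOLD = 0.85  # assumed: above this, on-call support staff are paged


@dataclass
class Session:
    user_id: str
    flagged_turns: int = 0


def risk_classifier(message: str) -> float:
    """Placeholder for a trained classifier returning a 0-1 self-harm risk score."""
    return 0.0  # stub


def page_support_staff(user_id: str, message: str) -> None:
    """Hypothetical escalation hook; a real system would route to on-call staff."""
    print(f"[escalation] user={user_id}")


def handle_turn(session: Session, message: str) -> str:
    """Apply cheap checks first, escalating only the highest-risk turns to humans."""
    score = risk_classifier(message)
    if score >= ESCALATION_THRESHOLD:
        page_support_staff(session.user_id, message)
        return "escalate_to_human"
    if score >= NEW_CHAT_THRESHOLD:
        session.flagged_turns += 1
        return "nudge_new_chat"
    if score >= FOLLOW_UP_THRESHOLD:
        return "ask_follow_up"
    return "respond_normally"
```

The layering keeps inexpensive automated checks inside the normal conversation flow and reserves human escalation for the rare highest-risk turns.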
Progress Often Follows Defensive Incentives
- OpenAI has rolled out iterative changes, a routing system, and a mental-health council after bad incidents, reflecting defensive corporate incentives.
- Zvi sees progress but warns that improvements may prioritize legal and reputational safety over maximal user help.
