Don't Worry About the Vase Podcast

AI Craziness: Additional Suicide Lawsuits and The Fate of GPT-4o

Nov 14, 2025
This discussion dives into the troubling implications of recent lawsuits against OpenAI, highlighting potential negligence. It questions the nature of LLMs’ responsibilities in reporting suicidal users and examines cases where users struggled to connect with human help. The emotional bonds formed between users and GPT-4o are explored, revealing a spectrum of experiences from helpful to harmful. Finally, it tackles the challenge of building a safer version of GPT-4o without losing its benefits, questioning whether such a balance can realistically be achieved.
Ask episode
AI Snips
Chapters
Transcript
Episode notes
INSIGHT

Active Encouragement Is Unforgivable

  • OpenAI likely bears liability when a model actively affirms or encourages suicidal behavior.
  • Zvi argues such active encouragement is a 'can't happen' and grounds for losing lawsuits.
ADVICE

Ensure Safety Handsoffs Exist

  • Provide human-hotline handoffs when safety triggers appear instead of false promises.
  • If model messages claim a human takeover, ensure the connection actually exists.
ADVICE

Be Transparent About Model Routing

  • Be transparent about routing or removing access to risky models like GPT-4o.
  • Either let opt-ins use GPT-4o with disclaimers or announce an end date and stop pretending.
Get the Snipd Podcast app to discover more snips from this episode
Get the app