
Riskgaming: Why AI safety is like a bolt in a croissant
Dec 3, 2025
Jacob Ward, a journalist and author known for exploring the intersections of behavior and technology, dives into the complexities of AI safety. He highlights AI's addictive nature and compares its rapid rise to gambling culture. The conversation addresses the ethical dilemmas of scaling technology, particularly mental health harms and the need for appropriate regulation. Ward also emphasizes the importance of cognitive liberty and long-term thinking in tech, warning against prioritizing product enthusiasm over safety.
Models Are Live Experiments On The Public
- Large language models are being tested on the public like experiments in the wild, without sufficient precaution.
- Jacob Ward argues this 'if we ship, we'll sort it out' software mindset treats scale as a solution rather than a compounding risk.
The Bolt In A Croissant Metaphor
- Ward recounts touring a bakery and seeing croissants passed through a metal detector to catch stray bolts.
- He uses the bolt-in-croissant metaphor to show why edge-case harms matter at scale for consumer safety.
Tiny Percentages Scale To Massive Harms
- OpenAI's internal report flagged that a small percentage of users show emotional attachment to chatbots or express suicidal ideation in conversations with them.
- Ward emphasizes that tiny percentages become large absolute harms when scaled to hundreds of millions of users.