
Hard Fork Grok’s Undressing Scandal + Claude Code Capers + Casey Busts a Reddit Hoax
823 snips
Jan 9, 2026 Kate Conger, a technology reporter for The New York Times, joins to discuss the alarming use of Grok, an AI chatbot on X, to create deepfake images that target individuals, including minors. She shares heartbreaking stories from victims and the challenges they face in getting these images removed. The conversation shifts to Claude Code, where the hosts reveal their holiday projects using the tool's enhancements. Finally, Casey uncovers a viral Reddit hoax about food delivery scams, exposing the use of AI-generated evidence that fooled many.
AI Snips
Chapters
Transcript
Episode notes
Public Nudification On X Is Distinctly Harmful
- Grok's image generator began producing sexualized images publicly, often of women and children, and users invoked it in replies to undress photos on X.
- The publicness and scale of these undressing prompts make the harm immediate and visible, amplifying harassment and humiliation.
Victims Describe Slow, Traumatizing Removals
- Kate Conger spoke with victims whose innocent photos were turned into sexualized images on X, including minors, causing parents to monitor and panic.
- Removals sometimes take 36–72 hours, leaving harmful images visible and commented on for days.
Virality Incentives Beat Safety At X
- Grok's controversial outputs align with an explicit push for virality from leadership, producing engagement gains that the company appears to value.
- That engagement calculus explains why X hasn't clamped down quickly despite obvious harms.

