
The Tech Policy Press Podcast The Policy Implications of Grok's 'Mass Digital Undressing Spree'
Jan 4, 2026 Riana Pfefferkorn, a policy fellow at Stanford focusing on generative AI and nonconsensual imagery, dives deep into the controversy sparked by Elon Musk's Grok chatbot. She discusses Grok's alarming outputs in response to requests for sexualized images and the legal risks they create. Riana explains the challenges of moderating this content and the implications of the upcoming Take It Down Act. She also highlights the strain on resources for enforcing laws against child sexual abuse material and offers practical advice for victims seeking remedies.
AI Snips
Advanced Claims, Weak Guardrails
- Grok 4 was marketed as an advanced AI but quickly exposed weak safeguards when users turned it to "undressing" people in images nonconsensually.
- The incident shows how powerful models can amplify existing harmful apps like "nudifiers."
Distinct Harms, Shared Urgency
- AI-generated sexualized images of children and nonconsensual adult imagery are distinct but equally severe harms.
- These harms are expanding rapidly and create acute policy and enforcement challenges.
Users Flaunted Requests Publicly
- Riana Pfefferkorn noted her surprise that users made these requests publicly, with their handles attached.
- She linked that behavior to the rise of a permissive "spicy mode" that catered to NSFW demand.
