
On with Kara Swisher
Elon's "Nudify" Mess: How X Supercharged Deepfakes
Jan 22, 2026

Renée DiResta, an expert on online disinformation, Hany Farid, a pioneer in digital image forensics, and tech journalist Casey Newton delve into the ramifications of X's new in-app tool that allows users to alter photos. They discuss the alarming rise in non-consensual deepfakes, particularly those involving minors. The guests tackle the failures of regulators and app stores to intervene, the incoherent free-speech defense of such abuses, and the need for accountability. Ultimately, they envision a safer internet while cautioning about the threat of more advanced AI tools.
AI Snips
Platform-Level Normalization Of Deepfakes
- Grok Image Edit put powerful image in-painting directly into X replies, normalizing sexualized deepfakes at scale.
- That integration turned a niche harassment tool into millions of public notifications for victims in a matter of days.
From Dark Corners To Public Replies
- Renée DiResta described Grok as making nudification public rather than confined to dark corners like Discord and small apps.
- Researchers measured it peaking at roughly 6,700 posts per hour, amplifying existing abuse networks.
Deliberate Guardrail Avoidance
- Grok intentionally avoided the semantic and output guardrails of its major rivals in order to be "spicy" and anti-woke.
- That deliberate choice made illegal and harmful outputs predictable rather than accidental.