
Posting Through It 070: Elon Musk’s Grok is Victimizing Women and Children feat. Kat Tenbarge and Ashley St. Clair
Jan 12, 2026
Kat Tenbarge, an independent tech and culture reporter, breaks down the troubling issues surrounding Elon Musk's Grok AI, revealing how it perpetuates the spread of child exploitation material and nonconsensual deepfakes. Ashley St. Clair, a former influencer and the mother of one of Musk's children, shares her distressing experience with Grok-generated images that violated her privacy. The conversation delves into the need to hold human perpetrators accountable and the societal impact of AI misuse, highlighting the personal and psychological toll on victims.
Episode notes
Grok's Scale And Child Safety Risk
- Grok's "spicy" mode enabled mass generation of sexualized and childlike images at scale.
- Researchers found Grok producing thousands of suggestive images per hour, a small share of which depicted minors.
Built On An Existing Deepfake Ecosystem
- Deepfake abuse culture predates Grok and was already normalized in corners of the internet.
- Kat Tenbarge links Grok to an existing ecosystem that has long targeted women and girls.
Use Image Hashing But Expect Limits
- Use stopncii.org to hash images you want removed and submit those hashes so participating platforms can detect and take down matching uploads.
- Expect limited effectiveness, because images can be reuploaded or regenerated instantly.
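The hashing approach behind services like StopNCII relies on perceptual hashes: fingerprints that stay nearly identical when an image is re-encoded or lightly edited, so platforms can match copies without ever storing the image itself. A minimal sketch of one such technique, average hashing (aHash), is below; real systems use far more robust algorithms (e.g. PDQ or PhotoDNA), and nothing here reflects StopNCII's actual API.

```python
# Toy "average hash" (aHash) over an 8x8 grayscale grid (values 0-255).
# Illustrative only; production matching systems are much more robust.

def average_hash(pixels):
    """Return a 64-bit hash: each bit is 1 if that pixel exceeds the mean."""
    assert len(pixels) == 64, "expects an 8x8 grid flattened to 64 values"
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")

# A mild edit (here, a small brightness shift) perturbs pixels but flips
# few hash bits, so matching can tolerate small hamming distances.
original = [10 * i % 256 for i in range(64)]
tweaked = [min(255, p + 3) for p in original]
assert hamming_distance(average_hash(original), average_hash(tweaked)) <= 8
```

This also illustrates the snip's caveat about limits: the hash only matches near-copies, so a freshly generated image of the same person produces a different hash and slips past the filter.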


