In The News

Elon Musk’s AI app creates abusive images. Can it be stopped?

Jan 12, 2026
Ellen Coyne, an Irish Times political correspondent, dives into the dark side of AI with a focus on Elon Musk’s Grok app. She discusses how it allows users to create non-consensual intimate images, raising alarming legal and moral questions. Ellen highlights the platform's popularity and Musk's mixed responses to criticism. She addresses gaps in current laws protecting victims and how proposed legislation aims to tackle harmful deepfakes. The conversation reveals the urgent need for effective safeguards against AI-driven abuse.
INSIGHT

Grok Is Mass-Market AI

  • Grok is a mainstream, heavily promoted AI embedded in X and available as a top free app on the Apple App Store.
  • Ellen Coyne says Grok is the public face of Elon Musk's large AI venture and is widely accessible to ordinary users.
INSIGHT

Scale Makes Deepfakes More Dangerous

  • The novel harm is how easily and widely non-consensual sexual deepfakes can be generated and shared at scale.
  • Ellen Coyne warns this mass-market nudification is what makes Grok especially worrying for sexual-violence advocates.
ANECDOTE

Users Report Rapid, Widespread 'Nudification'

  • Users generated images at an astronomical rate, with one analysis claiming at least one per minute.
  • Ellen Coyne observed women saying strangers used their public photos to 'nudify' them without consent.