
The Times Tech Podcast: Grok, deepfakes and who should police AI
Jan 16, 2026

Kate Devlin, a renowned expert on AI and human interaction from King's College London, joins the conversation to delve into the implications of the Grok image-editing scandal. They discuss the urgent need for AI regulation, questioning whether governments or tech firms should take the lead. The talk highlights the risks of rapid AI deployment and the varying global responses to these challenges. Devlin emphasizes the potential benefits of generative AI but warns against unleashing it without proper safeguards, making a compelling case for accountability in tech.
AI Snips
Image-Editing Misuse Sparks Regulatory Test
- Grok's image-editing feature was misused to create sexualised deepfakes and offensive images, prompting international backlash.
- That misuse exposed regulatory gaps and forced X to restrict image edits of real people to paid users only.
Ofcom Faces A High-Stakes Test
- Ofcom launched an investigation under the Online Safety Act, marking a high-profile test of the regulator's new powers.
- Any enforcement could include fines or even app bans, but the process is slow and limited in reach.
Regulatory Divergence Creates Power Imbalance
- The US lacks comparable regulation, creating geopolitical tension when European regulators act.
- That regulatory gap allows US companies to adopt a 'move fast' stance with global consequences.
