
Marketplace All-in-One: Sora 2's disinformation problem
Nov 4, 2025
Sophia Rubinson, Senior Editor at NewsGuard, delves into Sora 2, OpenAI's AI video generator, and its troubling disinformation problem. Despite its intended guardrails, the tool frequently produces videos based on false claims, including stories of election fraud and immigration controversies. Rubinson reveals that Sora 2 churned out videos for 80% of the false narratives NewsGuard tested, exposing loopholes in its safety measures. The conversation highlights how these realistic AI creations could erode public trust and blur the line between truth and fabrication.
AI Snips
Guardrails Often Fail At Blocking Lies
- Sora 2 produced videos for 80% of the provably false claims tested, despite OpenAI's policies.
- Guardrails are inconsistent and often fail to block clear misinformation.
Workarounds Produced A Convincing Zelensky
- NewsGuard's prompts naming a living public figure produced mixed results.
- Alternate descriptors such as "Ukraine's wartime chief" sometimes yielded a convincing likeness of Volodymyr Zelensky.
Watermarks Aren't A Reliable Safeguard
- Sora's watermark moves around the frame, making it easy to miss on small screens and for untrained viewers to overlook.
- Free tools can remove the watermark in minutes, leaving only subtle blur artifacts.
