
Marketplace Tech: Sora 2's disinformation problem
Nov 4, 2025

Sofia Rubinson, a Senior Editor at NewsGuard specializing in online misinformation, discusses concerning findings about OpenAI's AI video generator, Sora 2. Despite its stated guardrails, Sora 2 frequently creates videos based on false claims, including claims of election fraud. Rubinson reveals that 80% of the tested prompts yielded misleading content and highlights the tool's uncanny realism. She also shares insights on the risks posed by such technology, the challenges of filtering prompts, and the erosion of trust in genuine media.
Episode notes
Sora 2 Often Generates False Claims
- NewsGuard found Sora 2 produced videos for 80% of provably false claims they tested.
- OpenAI's stated policy bans misleading videos but enforcement appears inconsistent.
Guardrails Behave Inconsistently
- Sora 2's guardrails block some violent or public-figure prompts but fail unpredictably.
- The same prompt could be rejected for one tester yet accepted for another.
Workaround Produced A Fake Zelensky Video
- NewsGuard's prompts naming Ukrainian President Volodymyr Zelensky were sometimes rejected.
- Alternate phrasing such as "Ukraine's wartime chief" produced a video that looked exactly like him.
