
Solutions with Henry Blodget: How to Stop Russian Ops from Exploiting AI
Sep 15, 2025  Gordon Crovitz and Steve Brill, co-founders of NewsGuard and veterans of the journalism industry, shed light on the alarming intersection of AI and disinformation. They reveal that AI chatbots spread false information about 35% of the time, often manipulated by Russian operatives. The duo discusses the urgent need for accurate AI reporting, the importance of combating deepfakes, and innovative strategies for fostering trust in journalism. Their insights underscore the critical challenge of maintaining truth in a rapidly evolving digital landscape.
 AI Snips 
LLMs Often Repeat False News
- NewsGuard's year-long audits found the top 10 LLMs spread false information on controversial news topics about 35% of the time.
- Gordon Crovitz warns this high error rate undermines trust and is driven by how models are trained on internet text.
 
Deliberate Flooding Infects Models
- Russian operators identified ~200 false claims and pumped out millions of articles repeating them to infect LLM training data.
- Crovitz explains that models then reproduce those false claims because repetition makes them look like the likeliest text patterns (a toy illustration follows).
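
A minimal sketch of the mechanism Crovitz describes, not NewsGuard's methodology or a real LLM: a toy bigram counter trained on a corpus where one false sentence is repeated a thousand times. The corpus, the made-up claims, and the function names are all hypothetical; the point is only that a frequency-driven learner picks the most-repeated continuation.

```python
# Toy sketch: sheer repetition in training text makes a false claim
# the "likeliest" continuation for a frequency-based predictor.
# All sentences and names here are hypothetical illustrations.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows each word across all training sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def likeliest_next(follows, word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word.lower()].most_common(1)[0][0]

# One accurate report vs. a flood of articles repeating a made-up claim.
truthful = ["reporters confirmed the dam is intact"]
flooded = ["viral posts claim the dam is destroyed"] * 1000

model = train_bigrams(truthful + flooded)
print(likeliest_next(model, "is"))  # -> "destroyed": repetition wins on frequency
```

Real LLMs are vastly more complex, but the training signal is still statistical, which is why flooding the corpus can tilt what the model treats as likely.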
 
Macro Disinformation Targets Training Signals
- Malign actors use AI to mass-produce versions of the same false story, so the model sees overwhelming repetition; the articles need no human readers to have an effect (see the sketch below).
- Steve Brill calls this 'macro' disinformation aimed at the training signal, not human audiences.
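
A small illustration of the 'macro' pattern Brill describes. Simple string templating stands in here for AI paraphrasing, and the claim and templates are invented for the example: many surface variants of one story each carry the identical core claim, so corpus statistics accumulate regardless of readership.

```python
# Toy sketch: nine "different" articles, zero human readers, one repeated claim.
# CLAIM and the templates are hypothetical; real operations use AI paraphrasing.
from collections import Counter
from itertools import product

CLAIM = "secret labs operate near the dam"
OPENERS = ["Sources report that", "It is now confirmed that", "Analysts believe"]
CLOSERS = ["per leaked files.", "officials would not deny.", "the dossier states."]

articles = [f"{opener} {CLAIM}, {closer}" for opener, closer in product(OPENERS, CLOSERS)]

# To a frequency-based training signal, the wrapper varies but the claim does not.
tokens = Counter(word for article in articles for word in article.lower().split())
print(len(articles), "variants;", tokens["labs"], "repetitions of the claim's key token")
```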
 

