
The Sunday Show
Assessing the Problem of Disinformation
Sep 10, 2023
Dr. Shelby Grossman discusses AI's ability to write persuasive propaganda. Dr. Kirsty Park and Steph Amunke highlight the shortcomings of the initial disinformation code of practice and the new strengthened code. An assessment of reporting requirements reveals that most platforms scored below adequate. The episode also covers the transition to a code of conduct under the Digital Services Act, along with broader disinformation challenges and solutions, including the importance of regulating procedures and intervening at the economic and ecosystem level.
33:03
Podcast summary created with Snipd AI
Quick takeaways
- AI-generated propaganda can be convincing and customized for specific audiences, posing challenges for identifying and combating disinformation campaigns.
- Major technology platforms score below adequate in reporting compliance, requiring further improvement and accountability in combating disinformation.
Deep dives
AI-generated Propaganda: Research Findings
A research study conducted by a team at the Stanford Internet Observatory explored the use of large language models, such as OpenAI's GPT-3, in generating persuasive propaganda. The team fed six existing human-written propaganda articles into GPT-3 and found that it could generate convincing propaganda, with close to a 44% agreement rate among readers, only slightly lower than that of the original articles. The research raises concerns about the potential misuse of AI-generated text by disinformation actors. It also highlights that even without exposure to propaganda, a significant percentage of people are susceptible to believing false claims when asked directly, underscoring the complexity of tackling misinformation.