Media reporter Will Sommer discusses the mystery of AI-generated articles and online reviews. He explores the impact of AI on writers, newsrooms, and the news industry, highlighting concerns about authenticity, quality, and the lack of regulation. The podcast raises questions about AI's growing role in publishing and its potential to replace human staff.
The discovery of suspicious AI-generated content on Reviewed raises questions about media companies' adoption of AI and its impact on content quality and reader trust.
AI-generated product reviews lack real-world experiences and expertise, making them unreliable for consumers and potentially damaging the reputation of media organizations.
Deep dives
Suspicions of AI-Generated Reviews
The podcast episode explores the discovery of suspicious online product reviews that appeared on the website Reviewed, raising questions about whether they were generated by AI. The reviews for scuba masks and drink tumblers sounded remarkably similar, lacking any personal or experiential context. The clues, including similar writing styles, blurred profile pictures, and the absence of any trace of the supposed authors outside of the reviews, suggest that these reviews were not written by real people.
Gannett's Denial and AI Experiments
The episode delves into Gannett's response to the allegations of AI-generated content on Reviewed. Gannett denied using AI and claimed that a third-party marketing service was responsible. However, further investigation revealed that the marketing service, AdVon Commerce, openly advertised its specialization in AI. The episode also highlights Gannett's previous AI experiments, such as automating high school sports recaps, which produced phrases no human sports enthusiast would use. It raises the question of whether media companies are adopting AI to cut costs without considering the negative impact on content quality and reader trust.
The Implications for Consumers and Journalists
The podcast episode examines the potential harm of AI-generated content on product review websites like Reviewed. Because the reviews lack real-world experience and expertise, they are unreliable for consumers seeking expert opinions. The decline in content quality caused by AI-generated articles can erode reader trust and damage the reputation of media organizations. The episode suggests that while no legal regulations currently prevent the publishing of unmarked AI copy, contract negotiations with unions and public backlash may shape how media organizations use AI content.
Employees at Reviewed were surprised when they saw mysterious bylines behind poorly worded articles on the site. But information on their new contributors was hard to find—were they people at all, or was this the first clumsy incursion of A.I. into their newsroom?
Guest: Will Sommer, Washington Post media reporter
If you enjoy this show, please consider signing up for Slate Plus. Slate Plus members get benefits like zero ads on any Slate podcast, bonus episodes of shows like Slow Burn and Dear Prudence—and you’ll be supporting the work we do here on What Next TBD. Sign up now at slate.com/whatnextplus to help support our work.