
Marketplace Tech: AI-generated "letters to the editor" are flooding academic publications
Nov 24, 2025 Dr. Carlos Chaccour, a physician-scientist from the University of Navarra, uncovers a troubling trend in academic publishing. After spotting errors in a suspicious letter regarding his malaria research, he investigated further. Chaccour reveals a surge in AI-generated letters from new authors, often designed to enhance academic reputations. He discusses the implications of this phenomenon, warning that it could inflate research metrics and lead to a 'science bubble.' The conversation dives into the broader misuse of AI in the publishing world.
Episode notes
Suspicious Letter Sparked An Investigation
- Dr. Carlos Chaccour found a suspicious letter to the New England Journal of Medicine that misattributed critiques of his malaria research to his own work.
- He and Matthew Rudd investigated and discovered a surge in new authors publishing dozens of letters after ChatGPT's launch.
AI-Linked Spike In Prolific New Authors
- Chaccour and Rudd identified "prolific debutantes": new authors who suddenly published many letters starting around ChatGPT's release.
- This pattern suggests AI may be enabling the mass production of short academic comments that inflate publication counts.
Require AI Disclosure In Submissions
- Authors should declare AI use in papers and comments to preserve transparency and trust.
- Journals should require such disclosure because letters are short and easily mass-produced with AI.
