A fake robocall impersonating Joe Biden ahead of the New Hampshire primary has raised concerns about AI-generated disinformation during the upcoming election. This episode covers the challenge of telling real information from fake, the state of AI regulation in politics, and whether watermarks can help monitor AI-generated content.
AI-generated content, such as audio and video, is becoming a major player in politics, raising concerns about the authenticity of information.
Regulating AI in politics poses challenges, as current regulation is limited, and companies often rely on voluntary commitments rather than enforceable rules.
Deep dives
AI-generated robocalls impersonating political candidates pose a threat to elections
In New Hampshire, a fake robocall impersonating Joe Biden urged voters not to participate in the primary, claiming that doing so would only enable Republicans to re-elect Donald Trump. The call was likely an unlawful attempt at voter suppression using an artificially generated imitation of Biden's voice. Detecting AI-generated fakes is difficult even for AI companies themselves, and the polarization of politics leaves people susceptible to believing and spreading misinformation regardless of its authenticity.
The use of AI in political campaigns raises concerns about misinformation and manipulation
AI is being used not only to create deepfake audio and video but also to power programs like Dean Bot, a chatbot that mimicked Democratic candidate Dean Phillips and generated responses in his voice. AI companies have begun restricting or banning the use of their products in political campaigns out of concern about liability and misuse, but these voluntary commitments are not enforceable rules. The absence of robust regulation, combined with the gutting of disinformation-monitoring teams at platforms like Twitter and Facebook, further exacerbates the risk of AI-driven manipulation in elections.
The difficulty of detecting AI-generated content highlights the need for watermarking and regulation
Detecting AI-generated content such as images and audio is challenging; even AI companies struggle to reliably identify fakes. Watermarking has been proposed as a solution, either as visible marks on the content itself or as embedded metadata, but watermarking methods are still in development and can often be removed, sometimes using other AI tools. Despite some legislative efforts to require disclosure of AI-generated content in political ads, regulation remains limited and largely voluntary. Policymakers, tech companies, and regulatory agencies face the challenge of keeping pace with rapidly advancing AI capabilities while mitigating the risks they pose to election integrity.
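To make the fragility of metadata-based watermarks concrete, here is a minimal sketch in Python using the Pillow imaging library. It is a hypothetical illustration, not any vendor's actual scheme: the "ai_generated" tag name is invented for the example. The script tags a PNG with a provenance text chunk, reads it back, and then shows that simply re-encoding the image silently drops the tag.

```python
# Minimal sketch of a metadata-based watermark (hypothetical; real
# provenance systems such as C2PA use cryptographically signed manifests).
from PIL import Image, PngImagePlugin

# Create a placeholder "AI-generated" image and tag it with a text chunk.
img = Image.new("RGB", (64, 64), "gray")
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_generated", "true")  # invented tag name for this example
img.save("tagged.png", pnginfo=meta)

# A verifier that knows where to look can read the tag back.
print(Image.open("tagged.png").text)   # {'ai_generated': 'true'}

# But re-saving the pixels without the metadata strips the watermark;
# no AI is even needed for this kind of removal.
Image.open("tagged.png").save("stripped.png")
print(Image.open("stripped.png").text)  # {}
```

The same limitation applies even when the metadata is cryptographically signed: anything stored alongside the pixels disappears as soon as the content is re-encoded, which is why researchers are also exploring watermarks embedded directly in the pixel or waveform data.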
In the days leading up to the New Hampshire primary, voters received a robocall purportedly from Joe Biden. Authorities have now determined the call was likely AI-generated.
In the era of AI, how can voters tell what's real and what's not? And will the general election be thrown into chaos by AI-created disinformation?
Guest: Makena Kelly, senior writer at Wired covering politics and technology
If you enjoy this show, please consider signing up for Slate Plus. Slate Plus members get benefits like zero ads on any Slate podcast, bonus episodes of shows like Slow Burn and Dear Prudence—and you’ll be supporting the work we do here on What Next TBD. Sign up now at slate.com/whatnextplus to help support our work.
Podcast production by Evan Campbell, Paige Osburn and Anna Phillips.