WIRED Politics Lab host Leah Feiger is joined by WIRED reporters Vittoria Elliott, who covers AI's evolving landscape, and Will Knight, who focuses on AI's political impacts. They dive deep into the ongoing threats posed by political deepfakes and non-consensual content, especially in the context of the upcoming election. The trio discusses how generative AI can blur realities and manipulate public perception. They also tackle the legislative challenges and ethical dilemmas surrounding these technologies, emphasizing the urgent need for effective regulation.
Legislative efforts like the Defiance Act and Take It Down Act aim to combat non-consensual deepfakes, yet federal movement remains stalled.
The rise of AI-generated political deepfakes complicates public perception and trust, posing a nuanced threat to modern campaign strategies.
Deep dives
Legislative Landscape on AI-Generated Content
Various states are actively addressing the issue of AI-generated pornography, specifically non-consensual deepfakes, through legislation. For instance, Congresswoman Alexandria Ocasio-Cortez has proposed the Defiance Act, allowing victims to sue creators of non-consensual content, while Senator Ted Cruz introduced the Take It Down Act to facilitate the removal of such material from platforms. States like Michigan are focusing on protecting minors from explicit deepfakes, reinforcing both civil and criminal penalties against offenders. Despite significant attention, legislative movement has stalled at the federal level, leaving a patchwork of laws that complicate enforcement across state lines.
The Rise of AI in Political Propaganda
The accessibility of AI tools has enabled the widespread creation of politically themed deepfakes and propaganda-style images. Users have taken advantage of open-source technologies to create deepfakes with minimal effort, resulting in a surge of content aimed at mocking political figures like Kamala Harris and fueling propaganda narratives. While some believe these manipulations are easily identifiable, they can still resonate with audiences and influence public perception. This trend highlights a shift toward using deepfakes not only for misrepresentation but also as tools for political mockery and manipulation.
Challenges in AI Detection Technologies
Deepfake detection technologies are struggling to keep pace with the increasing sophistication of AI-generated content. Despite the emergence of various detection tools, effectiveness varies significantly due to training biases and the quality of content they analyze. Many detection systems are based on narrow datasets that often overlook diverse demographics and contexts, leading to false positives and negatives. As the technology evolves, it creates an ongoing arms race between creators of deepfakes and those trying to develop effective detection methods.
AI's Role in Shaping Political Discourse
The ongoing integration of AI into political campaigns raises concerns about its potential to influence voter perception and behavior subtly. While the initial panic over AI in elections has diminished, experts recognize the lingering threat of convincing deepfakes appearing late in campaigns, which could disrupt public trust. Furthermore, AI tools are already being used for voter outreach and campaign strategy, often in ways that do not draw immediate scrutiny. This evolving landscape indicates that AI's role in politics is complex, encompassing both overtly deceptive tactics and more nuanced applications that could reshape voter engagement.
A few months ago, everyone seemed worried about how AI would impact the 2024 election. Lately, it seems like some of the panic has dissipated, but political deepfakes — including pornographic images and video — are still everywhere. Today on the show, WIRED reporters Vittoria Elliott and Will Knight on what has changed with AI and what we should worry about.
Leah Feiger is @LeahFeiger. Vittoria Elliott is @telliotter. Will Knight is @willknight. Write to us at politicslab@WIRED.com. Be sure to subscribe to the WIRED Politics Lab newsletter here.