Dean Jackson and Jon Bateman, experts on deepfakes and disinformation, dive into the alarming implications of deepfake technology for the 2024 election. They discuss California's new legislation targeting online deepfakes and emphasize the need for media literacy and systemic solutions. The conversation touches on the challenges of managing disinformation in a polarized political landscape, the decline of local journalism, and the importance of trust in information sources. Get ready for a thought-provoking discussion on navigating our digital age!
California's new laws aim to combat deepfakes by requiring social media platforms to remove deceptive AI-generated content and requiring political ads to disclose their use of AI.
The effectiveness of disinformation campaigns relies more on psychological predispositions than technology, highlighting the importance of understanding human behavior.
Addressing disinformation requires both short-term tactics like watermarks and long-term efforts such as enhancing media literacy and supporting local journalism.
Deep dives
California's Legislative Response to AI and Disinformation
California Governor Gavin Newsom has signed three laws aimed at addressing the implications of AI-generated content and deepfakes. These laws require social media companies to remove deceptive AI-generated material and require political campaigns to disclose their use of AI in advertisements. This legislative effort highlights the growing concern over disinformation, especially in the context of upcoming elections. As disinformation proliferates on social media, these measures represent a significant attempt to combat the challenges posed by modern technology.
The Nature of Disinformation and Its Challenges
Disinformation has become a significant concern, particularly with the rise of generative AI, which makes it easier to produce misleading content. Research shows that while AI-generated content can contribute to misinformation, the effectiveness of disinformation campaigns often depends more on people's existing psychological predispositions than on the tools used. Historical instances, such as the claims surrounding the 2020 election, demonstrate that mass belief can spread without highly realistic evidence. Thus, countering disinformation depends less on technology than on understanding human behavior and societal dynamics.
Control Over Information: Power Dynamics in Society
A critical debate surrounds who should govern society's information environment, balancing the roles of government, tech companies, and individual citizens. Government control risks propaganda and the suppression of free speech, while tech companies tend to prioritize profitability over truth. One alternative shifts responsibility toward citizens who collaboratively evaluate content, yet this approach is hampered by widespread disinformation and eroding skills of public discourse. Finding an ideal framework for managing information is therefore complex and requires collective effort and deliberation.
Long-Term Solutions to Counter Disinformation
Addressing disinformation effectively involves a combination of short-term tactical responses and long-term structural changes in society. Immediate measures like watermarking content or curbing virality have limited effectiveness on their own, so broader cultural improvements are needed to strengthen media literacy and critical thinking. Supporting local journalism and educating the public about how media works can create a healthier information ecosystem, reducing the allure and impact of disinformation. This multifaceted approach underscores the importance of cultivating a society that values accurate information over sensationalism.
The Complexity of Persuasion and the Role of AI
Persuading individuals to change long-held beliefs is difficult, and even personalization through micro-targeted ads may not significantly influence political behavior. Studies suggest that while AI can make information dissemination more efficient, this does not necessarily translate into greater persuasion; AI-generated content often falls short of human-created narratives. Ultimately, addressing the disinformation challenge requires recognizing the limits of both technology and human psychology in shaping public opinion and belief.
California's governor just signed legislation aimed at drastically reducing deepfakes on social media. The worry, of course, is that they are already being used to unjustifiably sway voters. In this episode, one of the best from Season 1, I talk to Dean Jackson and Jon Bateman, experts on the role of deepfakes in disinformation campaigns. The bottom line? Deepfakes aren't great, but they're not half the problem.