This chapter traces the evolution of disinformation in U.S. elections from 2016 to 2024, focusing on deepfakes, the role of social platforms, and the challenges posed by AI. It examines the slow progress and limited effectiveness of Facebook's oversight board in content moderation decisions, along with the aftermath of major trust and safety failures at tech giants. It also explores the debates around regulating deepfakes, self-regulation efforts, and the difficulty of preventing their creation through various proposed strategies.
Our new Thursday episodes of Decoder are all about deep dives into big topics in the news, and this week we’re continuing our mini-series on one of the biggest topics of all: generative AI. Last week, we took a look at the wave of copyright lawsuits that might eventually grind this whole industry to a halt. Those are basically a coin flip — and the outcomes are off in the distance, as those cases wind their way through the legal system.
A bigger problem right now is that AI systems are really good at making just-believable-enough fake images and audio — and with tools like OpenAI’s new Sora, maybe video soon, too. And of course, it’s once again a presidential election year here in the US. So today, Verge policy editor Adi Robertson joins the show to discuss how AI might supercharge disinformation and lies in an election that’s already as contentious as any in our lifetimes — and what might be done about it.
Credits:
Decoder is a production of The Verge and part of the Vox Media Podcast Network.
Today’s episode was produced by Kate Cox and Nick Statt and was edited by Callie Wright.
The Decoder music is by Breakmaster Cylinder.
Learn more about your ad choices. Visit podcastchoices.com/adchoices