The rise of voice deepfakes presents challenges similar to those of AI-generated images, but fake audio may be harder to detect and regulate, since most detection tools are optimized for images. Beyond the technical challenge, the legal implications of voice impersonation are significant, and the long tradition of high-quality celebrity voice impersonators blurs the lines even further. Those legal boundaries get especially murky in cases like deepfake non-consensual pornography. All of this demands a thoughtful approach, because many people struggle to discern fake information regardless of whether it's technically detectable.
Our new Thursday episodes of Decoder are all about deep dives into big topics in the news, and this week we’re continuing our mini-series on one of the biggest topics of all: generative AI. Last week, we took a look at the wave of copyright lawsuits that might eventually grind this whole industry to a halt. Those are basically a coin flip — and the outcomes are off in the distance, as those cases wind their way through the legal system.
A bigger problem right now is that AI systems are really good at making just-believable-enough fake images and audio — and with tools like OpenAI’s new Sora, maybe video soon, too. And of course, it’s once again a presidential election year here in the US. So today, Verge policy editor Adi Robertson joins the show to discuss how AI might supercharge disinformation and lies in an election that’s already as contentious as any in our lifetimes — and what might be done about it.
Credits:
Decoder is a production of The Verge and part of the Vox Media Podcast Network.
Today’s episode was produced by Kate Cox and Nick Statt and was edited by Callie Wright.
The Decoder music is by Breakmaster Cylinder.