Decoder is off this week for a short end-of-summer break. We’ll be back with both our interview and explainer episodes after the Labor Day holiday. In the meantime we thought we’d re-share an explainer that’s taken on a whole new relevance in the last couple weeks, about deepfakes and misinformation.
In February, I talked with Verge policy editor Adi Robertson about how the generative AI boom might start fueling a wave of election-related misinformation, especially deepfakes and manipulated media. It hasn't been quite the apocalyptic AI free-for-all some predicted. But the election itself took some really unexpected turns in these last couple of months. Now we're heading into the big, noisy home stretch, and the use of AI is starting to get really weird — and much more troublesome.
Links:
- The AI-generated hell of the 2024 election | The Verge
- AI deepfakes are cheap, easy, and coming for the 2024 election | Decoder
- Elon Musk posts deepfake of Kamala Harris that violates X policy | The Verge
- Donald Trump posts a fake AI-generated Taylor Swift endorsement | The Verge
- X’s Grok now points to government site after misinformation warnings | The Verge
- Political ads could require AI-generated content disclosures soon | The Verge
- The Copyright Office calls for a new federal law regulating deepfakes | The Verge
- How AI companies are reckoning with elections | The Verge
- The lame AI meme election | Axios
- Deepfakes' parody loophole | Axios
Credits:
Decoder is a production of The Verge and part of the Vox Media Podcast Network.
Our producers are Kate Cox and Nick Statt. Our editor is Callie Wright. Our supervising producer is Liam James.
The Decoder music is by Breakmaster Cylinder.
Learn more about your ad choices. Visit podcastchoices.com/adchoices