The idea that individuals should have absolute control over images of themselves has steadily gained prominence. Likeness law supports this notion, but there are complexities to consider, especially for non-synthetic content like documentaries and news reports, where the public interest is at play. The dilemma intensifies with AI-generated images, which raise questions about where the lines fall between AI-generated images, Photoshopped images, and drawings. At its core, the debate is whether individuals should have the right to prevent depictions they disapprove of, even when a depiction is protected by the First Amendment, a question that requires balancing personal autonomy against societal benefit.
Our new Thursday episodes of Decoder are all about deep dives into big topics in the news, and this week we’re continuing our mini-series on one of the biggest topics of all: generative AI. Last week, we took a look at the wave of copyright lawsuits that might eventually grind this whole industry to a halt. Those are basically a coin flip — and the outcomes are off in the distance, as those cases wind their way through the legal system.
A bigger problem right now is that AI systems are really good at making just believable enough fake images and audio — and with tools like OpenAI’s new Sora, maybe video soon, too. And of course, it’s once again a presidential election year here in the US. So today, Verge policy editor Adi Robertson joins the show to discuss how AI might supercharge disinformation and lies in an election that’s already as contentious as any in our lifetimes — and what might be done about it.
Credits:
Decoder is a production of The Verge and part of the Vox Media Podcast Network.
Today’s episode was produced by Kate Cox and Nick Statt and was edited by Callie Wright.
The Decoder music is by Breakmaster Cylinder.