Sam Gregory, Executive Director of WITNESS, is an expert on deepfakes and AI's impact on trust. He discusses the alarming ease of creating deepfakes and their potential to mislead the public. The episode explores why audio deepfakes pose unique threats and why deepfakes had only a limited impact on the 2024 US Presidential Election. Gregory emphasizes the mantra 'Prepare, Don’t Panic' for addressing AI challenges and offers practical steps for software makers to enhance transparency and ethical use, all while empowering human rights defenders.
Deepfakes' Impact on Journalistic Trust
Deepfakes and manipulated media deepen distrust in information and harm frontline journalists and human rights defenders.
Sam Gregory's organization, WITNESS, focuses on helping these frontline users adopt new technology safely and effectively.
AI Usage in the 2024 US Election
Deepfakes did not massively influence the 2024 US election, but cheapfakes and audio manipulations were notable.
AI is also used for satire and participation, and its mere existence is sometimes invoked to dismiss real compromising footage as fake.
Prepare, Don't Panic with AI
Prepare for AI's growing realism and personalization rather than panic.
Build resilience among users and enable transparency to detect deceptive AI content.
Deepfakes are getting easier and easier to make. So, how will we be able to believe that what we see and what we hear is real? And what can software makers do to help?
Sam Gregory is an expert on deepfakes, AI, and trust. He advises governments and tech companies on how they can protect human rights and how we can preserve our shared reality.
Sam is the executive director of WITNESS, an organization that helps citizens use video to foster social change. WITNESS has trained and supported citizen-journalists since the days of the camcorder through the smartphone era and now into the world of AI.
We discuss:
How deepfakes are being used to spread disinformation and erode trust in media
How to detect that a piece of media was manipulated and to what degree
Why audio deepfakes are so pernicious
How deepfakes mostly did not affect the 2024 US Presidential Election, while cheapfakes were very common
The surprising ways AI is both helping and harming human rights defenders and journalists
Why “Prepare, Don’t Panic” is WITNESS’s mantra for addressing AI threats
Practical steps software makers can take to design tools that prioritize transparency and ethical use, such as including transparency features in AI-generated content, red teaming to simulate misuse scenarios, thinking beyond Western contexts, and more… (a rough sketch of the transparency idea follows this list)
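The episode does not prescribe a specific implementation, but as a rough sketch of the "transparency features" point above, here is a hypothetical Python example of attaching a signed provenance manifest to AI-generated media, loosely in the spirit of C2PA-style content credentials. Everything in it is an illustrative assumption: the function names, the manifest fields, and the HMAC-based signing stand in for whatever a real tool and key infrastructure would use.

import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key; a real tool would use an asymmetric key
# managed by the vendor, not a shared secret baked into the code.
SIGNING_KEY = b"demo-key-not-for-production"

def build_provenance_manifest(media_bytes: bytes, tool: str, model: str) -> dict:
    """Build and sign a minimal provenance record for generated media."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": {"tool": tool, "model": model},
        "ai_generated": True,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the media hash and signature against the manifest."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    if unsigned.get("content_sha256") != hashlib.sha256(media_bytes).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(manifest.get("signature", ""), expected)

# Example: a generator tags its output, a viewer later verifies it.
media = b"...generated image bytes..."  # placeholder for real media bytes
record = build_provenance_manifest(media, tool="ExampleGen", model="example-v1")
assert verify_manifest(media, record)
assert not verify_manifest(b"tampered bytes", record)

In practice a real system would embed the manifest in the media file's metadata and sign it with vendor-managed keys, but even this minimal shape shows how a tool can let downstream viewers check what generated a piece of content and whether it has been altered since.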
Chapters:
(00:55) - Deepfakes and the threat they pose to human rights and journalism
(03:16) - The 2024 US election and how deepfakes, cheapfakes, and audio clones were used
(07:35) - Why WITNESS says “Prepare, Don’t Panic” about AI
(11:16) - Recommendations for software builders to prevent — and detect — misuse
(13:45) - How to identify that a piece of media was manipulated by AI
(17:31) - Red Teaming: The scary questions builders should ask as they deploy new products
(22:20) - WITNESS’s work beyond AI
(26:00) - Good news: we’ve been preparing for AI and deepfakes for a long time, and governments and technologists are working together