When AI can fake reality, who can you trust? | Sam Gregory
Dec 20, 2023
Sam Gregory, technologist and human rights advocate, discusses the challenge of distinguishing real from fake in a world of hyper-realistic deepfakes. He emphasizes the importance of accessible deepfake detection tools and of securing trust in AI-generated media through methods like invisible watermarking and cryptographically signed metadata.
Developing detection tools and skills to fortify the credibility of critical voices and images is crucial in combating deepfakes.
Adding invisible watermarking and cryptographically signed metadata to AI-generated media reinforces trust in information and ensures authenticity without compromising privacy or anonymity.
Deep dives
The Growing Threat of Deepfakes
The speaker discusses the increasing difficulty of distinguishing real from fake content as generative AI and deepfakes advance. These technologies can harm individuals, especially women and girls, through falsified sexual images and misleading audio-visual content. Audio deepfakes in electoral contexts and synthetic avatars impersonating news anchors further exacerbate the problem. The speaker emphasizes developing detection tools and skills to fortify the credibility of critical voices and images, and providing access to these tools for journalists, community leaders, and election officials as the first line of defense against deepfakes.
The Need for Content Provenance and Disclosure
To navigate AI-infused communication, audiences need a better understanding of the "recipe" of the content they consume. The speaker highlights the significance of content provenance and disclosure: adding invisible watermarking and cryptographically signed metadata to AI-generated media. This approach reinforces trust in information by documenting how AI, humans, and other technologies were involved in creating and distributing a piece of content. The speaker advocates for an infrastructure that ensures authenticity without compromising privacy or anonymity.
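To make the signed-metadata idea concrete, here is a minimal, hypothetical Python sketch using the widely available `cryptography` package. It is not the C2PA standard that real provenance efforts build on; the `sign_provenance` and `verify_provenance` helpers and the "recipe" fields are illustrative assumptions. It shows only the core pattern: bind a description of how a piece of media was made to a hash of that media, then sign the bundle.

```python
# Minimal sketch of cryptographically signed provenance metadata.
# Illustrative only: not the C2PA spec; function names and recipe
# fields are assumptions made for this example.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_provenance(media_bytes: bytes, recipe: dict,
                    key: Ed25519PrivateKey) -> dict:
    """Bind a 'recipe' (how the media was made) to the media's hash and sign it."""
    claim = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "recipe": recipe,  # e.g. AI involvement, tools used, edits applied
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}


def verify_provenance(media_bytes: bytes, record: dict,
                      pub: Ed25519PublicKey) -> bool:
    """Check the signature AND that the media still matches its signed hash."""
    claim = record["claim"]
    if hashlib.sha256(media_bytes).hexdigest() != claim["media_sha256"]:
        return False  # media was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # claim was altered or signed by a different key


# Usage: sign at creation time, verify at consumption time.
key = Ed25519PrivateKey.generate()
media = b"...image or video bytes..."
record = sign_provenance(media, {"ai_generated": True, "tool": "example-model"}, key)
assert verify_provenance(media, record, key.public_key())
assert not verify_provenance(media + b"tampered", record, key.public_key())
```

A real deployment would also carry the signer's credentials, handle key distribution, and embed the record in the media file itself; the point of the sketch is that any post-signing edit to the media or its recipe makes verification fail, which is what lets consumers trust the disclosure without it revealing anything about who they are.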
The Pipeline of Responsibility for AI
The final step in addressing the threat of deepfakes is establishing a pipeline of responsibility. This pipeline should prioritize transparency, accountability, and liability throughout the development and deployment of AI models, from foundation models and open-source projects to the platforms and applications where media is consumed and communicated. Without these measures, distinguishing real from fake content becomes increasingly difficult, risking a world where it is easier both to fabricate reality and to dismiss it.
We're fast approaching a world where widespread, hyper-realistic deepfakes lead us to dismiss reality, says technologist and human rights advocate Sam Gregory. What happens to democracy when we can't trust what we see? Learn three key steps to protecting our ability to distinguish human from synthetic — and why fortifying our perception of truth is crucial to our AI-infused future.