Explore the evolving role of video proof with smartphones and deepfakes. Uncover biases in surveillance systems and the impact on privacy. Dive into discussions on algorithmic auditing and evaluation. Learn about the risks of AI-generated images and the importance of critical evaluation in a surveillance-heavy world.
Podcast summary created with Snipd AI
Quick takeaways
Increased documentation of our lives through technology reveals biases and privacy concerns.
Advancements in AI and machine learning pose risks of distorting reality and spreading misinformation.
Deep dives
The Influence of Wearable Technology on Behavior and Self-Surveillance
Wearable technology, such as smartwatches, prompts users to stand or move, provoking conflicting feelings of appreciation and judgment. Individuals who monitor their daily activities and behaviors through these devices engage in a form of self-surveillance: the constant tracking can foster a sense of accountability, but it can also feel intrusive, blurring the line between self-care and surveillance.
Implications of Surveillance Technologies and Anonymity Concerns
Surveillance technologies, both physical and online, raise privacy concerns and carry broad societal implications. The use of facial recognition and AI in online surveillance poses risks, especially around bias and accuracy. Concerns about privacy laws and the collection of biometric data highlight the need for comprehensive regulations and accountability measures to safeguard individuals' privacy.
Risks of Deepfakes and Shallow Fakes in Manipulating Media
The proliferation of deepfakes and shallow fakes, enabled by advances in AI and machine learning, raises concerns about distorting reality and spreading misinformation. Examples of misuse, such as deepfake pornography and altered audio, underscore the technology's potential for abuse. Strategies for detecting fake content, including watermarking and personal vigilance, are crucial to combating the spread of manipulated media.
With smartphones in our pockets and doorbell cameras cheaply available, our relationship with video as a form of proof is evolving. We often say “pics or it didn’t happen!”—but meanwhile, there’s been a rise in problematic imaging including deepfakes and surveillance systems, which often reinforce embedded gender and racial biases. So what is really being revealed with increased documentation of our lives? And what’s lost when privacy is diminished?
In this episode of How to Know What’s Real, staff writer Megan Garber speaks with Deborah Raji, a Mozilla fellow whose work focuses on algorithmic auditing and evaluation. In the past, Raji worked closely with the Algorithmic Justice League initiative to highlight bias in deployed AI products.
Write to us at howtopodcast@theatlantic.com.
Music by Forever Sunset (“Spring Dance”), baegel (“Cyber Wham”), Etienne Roussel (“Twilight”), Dip Diet (“Sidelined”), Ben Elson (“Darkwave”), and Rob Smierciak (“Whistle Jazz”).