The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks with Nataniel Ruiz - #375

May 14, 2020
Nataniel Ruiz, a PhD student at Boston University specializing in image and video computing, dives into the intricate world of deepfakes. He discusses the importance of adversarial attacks in combating manipulative technology while navigating the ethical implications of image translation networks. The conversation addresses the complexities of protecting digital images and explores potential applications of blockchain in image security. Ruiz highlights the delicate balance needed in executing effective attacks and developing defenses, all while reflecting on the challenges of research amid uncertainty.
ANECDOTE

Cassava Plant Disease Detection

  • Nataniel Ruiz's first AI project involved detecting diseases in cassava plants in Uganda.
  • This experience sparked his interest in deep neural networks and image processing.
ANECDOTE

Obama Deepfakes

  • Nataniel Ruiz was fascinated by deepfakes of Obama created using computer graphics and deep neural networks.
  • This, combined with privacy concerns, led him to explore disrupting deepfakes.
INSIGHT

Disrupting Deepfakes

  • Injecting noise into an image can disrupt a generative model's ability to manipulate it.
  • This can protect images from being used in deepfakes.