

#95 – Dawn Song: Adversarial Machine Learning and Computer Security
May 12, 2020
Dawn Song, a UC Berkeley professor specializing in security and machine learning, discusses why software vulnerabilities are so hard to eliminate and the risks posed by human error. She explains adversarial machine learning, its implications for autonomous vehicles, and the need for stronger defenses. Privacy and data ownership are also discussed, along with emerging mitigations such as differential privacy. The conversation also touches on program synthesis and her journey from physics to computer science, reflecting on the beauty of both fields.
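
For context on the adversarial machine learning discussed in the episode: an adversarial example is an input perturbed just enough to change a model's prediction. Below is a minimal, generic sketch of the fast gradient sign method (FGSM) against a toy logistic regression; the model, data, and step size epsilon are invented for illustration and are not taken from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": logistic regression with fixed (illustrative) weights and bias.
w = rng.normal(size=20)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability the model assigns to class 1.
    return sigmoid(w @ x + b)

def loss_grad_wrt_x(x, y):
    # Cross-entropy loss L = -[y*log(p) + (1-y)*log(1-p)] has dL/dx = (p - y) * w
    # for logistic regression, so the gradient w.r.t. the *input* is cheap to get.
    return (predict(x) - y) * w

# An input we treat as belonging to class 1.
x_clean = rng.normal(size=20)
y_true = 1.0

# FGSM: take one step of size epsilon in the sign of the input gradient of the loss.
epsilon = 0.5
x_adv = x_clean + epsilon * np.sign(loss_grad_wrt_x(x_clean, y_true))

print("model score on clean input:      ", predict(x_clean))
print("model score on adversarial input:", predict(x_adv))
```

The same one-step idea scales to image classifiers, which is the kind of perturbation that matters for perception systems in autonomous vehicles.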
AI Snips
Software Vulnerabilities
- Software vulnerabilities are difficult to avoid because the space of possible attacks is broad and writing bug-free code is hard.
- Formal verification methods are advancing, but systems can still be vulnerable to attacks that were not anticipated.
Humans as the Weakest Link
- Attacks increasingly target humans, the weakest link, through social engineering and manipulation.
- AI could offer solutions, but human vulnerabilities are harder to patch than software.
AI Chatbots for Security
- Develop AI chatbots to detect social engineering attacks by observing conversations and posing challenges.
- This could help protect users from phishing and other manipulative tactics (a rough sketch of the idea follows below).
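
The snip stops at the idea, so the following is only a rough sketch, assuming a keyword-and-heuristic screen plus a canned challenge stands in for the AI chatbot; the pattern list, threshold, and challenge wording are invented for illustration and are not from the episode.

```python
import re

# Crude red-flag patterns for social-engineering attempts (illustrative only).
SUSPICIOUS_PATTERNS = [
    r"\b(password|one[- ]time code|otp|ssn|social security)\b",
    r"\b(urgent|immediately|right away|account will be (closed|suspended))\b",
    r"\b(gift card|wire transfer|bitcoin)\b",
]

def social_engineering_score(message: str) -> int:
    """Count how many suspicious patterns appear in a message."""
    text = message.lower()
    return sum(bool(re.search(p, text)) for p in SUSPICIOUS_PATTERNS)

def guard_conversation(message: str, threshold: int = 2) -> str:
    """Observe an incoming message and, if it looks manipulative,
    pose a challenge instead of letting the request through."""
    if social_engineering_score(message) >= threshold:
        return ("This request looks like possible social engineering. "
                "Challenge: verify the requester's identity through the "
                "official company portal before sharing any credentials.")
    return "No obvious red flags detected; proceed with normal caution."

if __name__ == "__main__":
    demo = ("URGENT: your account will be closed today. "
            "Reply with your password and a gift card number to keep access.")
    print(guard_conversation(demo))
```

A real system would replace the regex screen with a model that reads the whole conversation, but the observe-then-challenge flow is the same.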