

#48 Machine Learning Security - Andy Smith
Mar 16, 2021
Andy Smith, a cybersecurity expert and YouTube content creator, dives into the often-overlooked realm of security in ML DevOps. He highlights the importance of threat modeling and the complexities posed by adversarial examples. The conversation sheds light on trust boundaries in machine learning systems and the need for collaboration between ML and security teams. Andy also discusses the unpredictably large state space of ML systems and the essential role of human oversight, advocating for a pragmatic focus on risk management to protect data integrity.
Threat Modeling for ML Security
- Use threat modeling to manage security risks in complex ML systems.
- Start with a high-level view and define trust boundaries between system components (sketched below).
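To make that concrete, here is a minimal sketch of a high-level threat model: components are assigned to trust zones, and any data flow that crosses zones is flagged for scrutiny. The components, zones, and flows are hypothetical illustrations, not taken from the episode.

```python
# Hypothetical sketch: enumerate ML system components, mark trust zones,
# and flag every data flow that crosses a trust boundary for review.
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    zone: str  # trust zone the component lives in, e.g. "internet", "internal"

@dataclass(frozen=True)
class DataFlow:
    source: Component
    dest: Component
    payload: str

# Hypothetical ML pipeline components and their trust zones
user = Component("end user", "internet")
api = Component("inference API", "dmz")
model = Component("model server", "internal")
trainer = Component("training pipeline", "internal")
datalake = Component("raw training data", "external-vendor")

flows = [
    DataFlow(user, api, "prediction request"),
    DataFlow(api, model, "feature vector"),
    DataFlow(datalake, trainer, "training examples"),
    DataFlow(trainer, model, "model weights"),
]

# A flow crosses a trust boundary whenever source and destination
# sit in different zones; each crossing is a candidate threat.
for f in flows:
    if f.source.zone != f.dest.zone:
        print(f"BOUNDARY CROSSING: {f.payload!r} "
              f"from {f.source.name} ({f.source.zone}) "
              f"to {f.dest.name} ({f.dest.zone})")
```

Even a toy map like this surfaces the right questions, such as what happens when untrusted vendor data crosses into the internal training zone.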
Threat Modeling and Risk Management
- Threat modeling helps identify threats, which then feed into risk management.
- Sometimes accepting low-risk threats is better than chasing unlikely high-risk ones (see the scoring sketch below).
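One common way to feed a threat list into risk management is a simple likelihood × impact score with an acceptance threshold. The threats, scores, and threshold below are illustrative assumptions, not figures from the episode.

```python
# Illustrative sketch: rank threats by likelihood x impact and accept
# anything under a risk threshold instead of mitigating every threat.
threats = [
    # (threat, likelihood 1-5, impact 1-5) -- hypothetical scores
    ("training-data poisoning via vendor feed", 2, 5),
    ("stolen API key used for model extraction", 4, 3),
    ("adversarial perturbation of live inputs", 1, 4),
    ("misconfigured storage bucket exposes dataset", 4, 4),
]

ACCEPT_BELOW = 8  # risk scores under this are accepted, not mitigated

for name, likelihood, impact in sorted(
        threats, key=lambda t: t[1] * t[2], reverse=True):
    risk = likelihood * impact
    action = "mitigate" if risk >= ACCEPT_BELOW else "accept"
    print(f"{risk:>2}  {action:<8}  {name}")
```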
Adversarial Examples vs. Real-World Threats
- Adversarial examples are a largely theoretical threat, rarely seen in real-world attacks (illustrated below).
- Practical threats more often come down to basics like missing multi-factor authentication or misconfigured services.
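For context on what an adversarial example actually is, here is a minimal numpy sketch of the fast gradient sign method (FGSM) idea against a toy logistic model. The weights and epsilon are illustrative assumptions; real attacks target deep networks, but the gradient-sign mechanics are the same.

```python
# Minimal FGSM-style sketch on a toy logistic "model" (numpy only):
# nudge the input by epsilon in the direction that increases the loss.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)      # toy model weights (hypothetical)
b = 0.1
x = rng.normal(size=8)      # a "clean" input the model classifies

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)   # P(class = 1)

y = 1.0                     # assume the true label is 1
p = predict(x)

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w;
# FGSM perturbs the input by epsilon times the sign of that gradient.
grad_x = (p - y) * w
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean input:       P(y=1) = {predict(x):.3f}")
print(f"adversarial input: P(y=1) = {predict(x_adv):.3f}")
```

The perturbation is small per feature yet reliably pushes the model's confidence in the true class down, which is the core of the threat, even if, as Andy notes, attackers in the wild rarely need anything this sophisticated.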