

Evaluating Real-World Adversarial ML Attack Risks and Effective Management: Robustness vs Non-ML Mitigations
Nov 28, 2023
The speakers discuss customers' and clients' concerns about the security of AI applications and machine learning systems, and explore the distinction between robustness and security when reasoning about adversarial attacks on ML models. They cover mitigations in robust ML, including data encryption and secure backups, examine cryptographic signatures for data and supply-chain validation as protections against data poisoning, explain the adversarial ML techniques of model inversion and differential privacy, and close by emphasizing building effective machine learning models with clear goals.
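To make the supply-chain mitigation concrete, here is a minimal Python sketch of what signing and verifying a training-data artifact could look like. The episode does not prescribe an implementation; the `cryptography` package, the in-memory dataset bytes, and the key handling below are illustrative assumptions.

```python
# A minimal sketch, assuming the third-party `cryptography` package;
# the dataset bytes and keys below are illustrative, not from the episode.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Producer side: hash the published training data and sign the digest.
dataset_bytes = b"id,label\n1,cat\n2,dog\n"   # stand-in for a real training file
digest = hashlib.sha256(dataset_bytes).digest()
signing_key = Ed25519PrivateKey.generate()    # in practice, a managed key pair
signature = signing_key.sign(digest)

# Consumer side: re-hash the copy you received and verify before training.
received = dataset_bytes                      # swap in the downloaded copy
public_key = signing_key.public_key()         # distributed out of band
try:
    public_key.verify(signature, hashlib.sha256(received).digest())
    print("dataset matches the signed digest; safe to ingest")
except InvalidSignature:
    print("dataset changed after signing; possible poisoning or tampering")
```

Ed25519 is used here only because it is a compact, widely available signature scheme; any signature scheme with sound key distribution serves the same purpose of detecting tampering between the data producer and the training pipeline.

The differential privacy discussion can likewise be illustrated with the basic Laplace mechanism. This is a query-level sketch using NumPy, not the speakers' recommendation and not DP-SGD-style private training; the data, query, and epsilon are made up for illustration.

```python
# A minimal sketch of the Laplace mechanism for differential privacy,
# assuming NumPy; the records, query, and epsilon are illustrative only.
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Counting query with Laplace noise; a count has sensitivity 1,
    so the noise scale is 1 / epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 61, 33]           # toy "training set"
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))
```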
Chapters
Introduction (00:00, 2 min)
The Concerns of Customers and Clients Regarding the Security of AI Applications (01:37, 5 min)
Balancing Robustness and Security (06:28, 12 min)
Mitigations in Robust ML and Effective Management (18:13, 2 min)
Cryptographic Signature, Large Data Sets, and Encryption Trade-offs (20:24, 2 min)
Supply Chain Validation for Data Poisoning Protection (21:59, 2 min)
Adversarial ML Techniques: Model Inversion and Differential Privacy (24:09, 15 min)
Building Effective Machine Learning Models with Clear Goals (38:58, 2 min)