
Cloud Security Podcast by Google
EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw
Guest:
- Dr. Gary McGraw, founder of the Berryville Institute of Machine Learning
Topics:
- Gary, you’ve been doing software security for many decades, so tell us: are we really behind on securing ML and AI systems?
- If not SBOM for data or “DBOM,” then what? Can data supply chain tools or just better data governance practices help?
- How would you threat model a system with ML in it, or a new ML system you are building?
- What are the key differences and similarities between securing AI and securing a traditional, complex enterprise system?
- What are the key differences between securing the AI you built and the AI you buy or subscribe to?
- Which security tools and frameworks will solve all of these problems for us?
Resources:
- EP135 AI and Security: The Good, the Bad, and the Magical
- “An Architectural Risk Analysis of Machine Learning Systems: Toward More Secure Machine Learning” paper
- “What to think about when you’re thinking about securing AI”
- “Microsoft AI researchers accidentally leak 38TB of company data”
- Introducing Google’s Secure AI Framework