Coffee Sessions #51 with Sahbi Chaieb, ML security: Why should you care?
// Abstract
Sahbi, a senior data scientist at SAS, joined us to discuss security challenges in MLOps. We went deep into the research on various threats that he surveyed in a recent paper. We also discussed tooling for this problem that is emerging from companies like Microsoft and Google.
// Bio
Sahbi Chaieb is a Senior Data Scientist at SAS, where he has spent the past 5 years designing, implementing, and deploying Machine Learning solutions across various industries. Sahbi graduated with an Engineering degree from Supélec, France, and holds an MS in Computer Science with a specialization in Machine Learning from Georgia Tech.
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/
Connect with Sahbi on LinkedIn: https://www.linkedin.com/in/sahbichaieb/
Timestamps:
[00:00] Introduction to Sahbi Chaieb
[01:25] Sahbi's background in tech
[02:57] Inspiration for the article
[09:40] Why should you care about keeping your models secure?
[12:53] Model stealing
[14:16] Development practices
[17:24] Other tools in the toolbox covered in the article
[21:29] Stories of data leaks
[24:45] EU Regulations on robustness
[26:49] Dangers of federated learning
[31:50] Tooling status on model security
[33:58] AI Red Teams
[36:42] ML Security best practices
[38:26] AI + Cyber Security
[39:26] Synthetic Data
[42:51] Prescriptions for ML security in 5-10 years
[46:37] Pain points encountered