Coffee Sessions #45 with Diego Oppenheimer of Algorithmia // Enterprise Security and Governance in MLOps
// Abstract
MLOps in the enterprise is difficult because of security and compliance requirements. In this MLOps Coffee Session, Diego Oppenheimer, CEO of Algorithmia, talks to us about how to better approach MLOps within the enterprise. This is an introduction to the essential principles of security in MLOps and why awareness of security best practices is crucial for ML professionals.
// Bio
Diego Oppenheimer is co-founder and CEO of Algorithmia. Previously, he designed, managed, and shipped some of Microsoft’s most used data analysis products including Excel, Power Pivot, SQL Server, and Power BI. He holds a Bachelor’s degree in Information Systems and a Master’s degree in Business Intelligence and Data Analytics from Carnegie Mellon University.
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/
Connect with Diego on LinkedIn: https://www.linkedin.com/in/diego/
Timestamps:
[00:00] Thank you Diego and Algorithmia for sponsoring this session!
[01:04] Introduction to Diego Oppenheimer
[02:55] Security
[04:42] "The level of scrutiny for apps and development and that of the operational software is much higher."
[07:40] "We take the Ops part of MLOps very, very seriously and it's really about the operational side of the equation."
[09:22] MLSecOps
[11:42] "The code doesn't change, but things change cause the data changed."
[15:23] Maturity of security
[18:45] "To a certain degree, we have general parameters of software DevOps In software engineering and DevOps, and we're adapting it to this new world of ML."
[19:03] Development workflow
[20:58] "In the ideal world, you're just sitting in your data science platform, your auto ML platform, whatever it is that you're working with, you can push a model."
[22:50] Security, responsibility and authentication
[23:38] "What you don't want to learn is how to do automation every single time there's a new use case. That's just not a good use of your time." [24:30] Hurdles needed to be cleared
[24:47] "I would argue that there's no such thing as Bulletproof in software. That doesn't exist. It never has and never will."
[26:25] Machine Learning security risks
1. Operational risk
2. Brand risk
3. Strategic risk
[28:23] Machine Learning security risk standards
[31:11] "There's a world where you can reverse engineer a model by essentially feeding a whole bunch of data and understanding where that comes back."
[33:55] How to change the mindset of companies that are relaxed about security
[35:19] "It takes time and money to figure out security."
[37:52] Being conscientious when building systems
[39:44] "Look at the end result of the workflow and understand the value of that workflow, which you should know at that point because if you're going into an ML workflow without understanding what the end value is going to be, it's not a good sign."
[40:19] Root cause analysis
[41:00] Threat modeling
[41:14] "There's a natural next step where there's threat modeling for ML systems and it's a task that gets built and understood, and nobody's going to enjoy doing it."
[43:07] Security as code (a small illustrative sketch follows the timestamps)
[45:29] MLRE
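To make the "security as code" idea from [43:07] a bit more concrete, here is a minimal sketch of encoding security requirements for an ML model deployment as an automated check that could run in CI. All configuration fields, policy names, and values here are hypothetical illustrations, not from the episode or Algorithmia's product.

```python
# Minimal "security as code" sketch: security requirements for an ML model
# deployment expressed as automated checks instead of a manual review step.
# The config fields and policy rules are hypothetical examples.

deployment_config = {
    "model_name": "churn-classifier",
    "endpoint_auth": "api_key",    # how callers authenticate to the model endpoint
    "tls_required": True,          # encrypt inference traffic in transit
    "audit_logging": True,         # record every inference request for later review
    "allowed_callers": ["billing-service", "crm-service"],
}

REQUIRED_POLICIES = {
    "endpoint_auth": lambda v: v in {"api_key", "oauth2", "mtls"},
    "tls_required": lambda v: v is True,
    "audit_logging": lambda v: v is True,
}

def check_deployment(config: dict) -> list[str]:
    """Return a list of policy violations; an empty list means the deployment passes."""
    violations = []
    for field, rule in REQUIRED_POLICIES.items():
        if not rule(config.get(field)):
            violations.append(f"policy violated: {field}={config.get(field)!r}")
    return violations

if __name__ == "__main__":
    problems = check_deployment(deployment_config)
    if problems:
        raise SystemExit("\n".join(problems))  # non-zero exit fails the CI pipeline
    print("deployment config passes security policies")
```

The point is only that the security requirements become versioned, testable artifacts living in the same pipeline that ships the model, rather than a checklist someone reviews by hand.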