

The MLSecOps Podcast
MLSecOps.com
Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a., MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today.Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.
Episodes
Mentioned books

Sep 5, 2023 • 29min
A Holistic Approach to Understanding the AI Lifecycle and Securing ML Systems: Protecting AI Through People, Processes & Technology; With Guest: Rob van der Veer
Rob van der Veer, a pioneering Senior Director of AI Security and Privacy, discusses the pressing need for security in AI and machine learning systems. He highlights the unique challenges posed by AI compared to traditional software, including practical threats and ethical compliance. The conversation also covers the upcoming ISO 5338 standard, which aims to enhance AI security practices, and the importance of integrating AI specialists into businesses to bolster collaboration. Insights into maturing AI security strategies make this dialogue a must-listen!

Aug 18, 2023 • 36min
ML Model Fairness: Measuring and Mitigating Algorithmic Disparities; With Guest: Nick Schmidt
Nick Schmidt, the Chief Technology and Innovation Officer at SolasAI, discusses the role of fairness in AI/ML, highlighting real-life examples of bias and disparity in machine learning algorithms. He emphasizes the importance of model governance, accountability, and ownership in ensuring fairness. The podcast explores algorithmic fairness issues, consequences of biased algorithms, and the need for human involvement. Nick also offers advice for organizations assessing their AI security risk and advocates for seeking outside help when implementing fairness in ML models.

Aug 17, 2023 • 35min
Exploring AI/ML Security Risks: At Black Hat USA 2023 with Protect AI
AI security experts from Protect AI discuss the state of AI/ML security, open-source risks, unique threats in AI/ML system deployment, lack of understanding and need for more data, and securing AI and ML systems through threat modeling and proactive measures.

Aug 3, 2023 • 39min
Everything You Need to Know About Hacker Summer Camp 2023
Send us a textWelcome back to The MLSecOps Podcast for this week's episode, “Everything You Need to Know About Hacker Summer Camp 2023.” This week, our show is hosted by Protect AI's Chief Information Security Officer, Diana Kelley, and Diana talks with two more longtime security experts, Chloé Messdaghi and Dan McInerney, about all things related to what the security research community fondly refers to as Hacker Summer Camp. The group discusses various events held throughout the course of this exciting week in Las Vegas, including what to expect at Black Hat [USA 2023] and DEF CON [31]. (1:21) What is Hacker Summer Camp Week and what are the various events and Cons that take place during that time? (3:58) It’s my first time attending Black Hat, DEF CON, Hacker Summer Camp Week, etc.: where can I find groups to attend with or to help me navigate where to go and what events to attend? (9:53) Advice: if it’s my first time attending Black Hat, what other advice is there for me? What should I be thinking about? (13:25) If I attend Black Hat, does that mean I’m automatically able to attend DEF CON or how does that work? (TL;DR separate passes are needed for each event)(14:14) Are certain personas more welcomed at specific conferences? (15:49) What are some interesting panel talks we should know about? (20:53) There are a couple of other conferences going on during “Summer Camp Week” - BSides Las Vegas, Squadcon, Diana Initiative. What are those? When are they taking place? Can I go to all of them? How does that work?(23:26) What AI/ML security trends are happening? What should I be looking for at Black Hat and DEF CON this year in terms of talks and research?(28:55) How can I determine if a particular talk is going to be worth my time? (32:54) Any other tips on how to stay healthy and safe (both physical and electronic safety) throughout the week?Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out:Protect AI Guardian: Zero Trust for ML Models Recon: Automated Red Teaming for GenAI Protect AI’s ML Security-Focused Open Source Tools LLM Guard Open Source Security Toolkit for LLM Interactions Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Jul 12, 2023 • 47min
Privacy Engineering: Safeguarding AI & ML Systems in a Data-Driven Era; With Guest Katharine Jarmul
In this episode, renowned data scientist Katharine Jarmul discusses the risks of data privacy and security in ML models. They touch on topics such as OpenAI's ChatGPT, GDPR, challenges faced by organizations, privacy by design, and reputational risk. They emphasize the need for auditability, consent questions, and population selection, as well as promoting a culture of privacy champions. Building models in a secure and private way is crucial, and listeners have a chance to win Katharine's book on practical data privacy.

Jun 21, 2023 • 35min
The Intersection of MLSecOps and DataPrepOps; With Guest: Jennifer Prendki, PhD
Send us a textOn this week’s episode from The MLSecOps Podcast, we have the pleasure of hearing from Dr. Jennifer Prendki, founder and CEO of Alectio - The DataPrepOps Company. Alectio’s name comes from a blend of the acronym “AL,” standing for Active Learning, and the Latin term for the word “selection,” which is “lectio.”In this episode, Dr. Prendki defines DataPrepOps for us and describes its contrasts to MLOps, along with how DataPrepOps intersects with MLSecOps best practices. She also discusses data quality, security risks in data science, and the role that data curation plays in helping to mitigate security risks in ML models.Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out:Protect AI Guardian: Zero Trust for ML Models Recon: Automated Red Teaming for GenAI Protect AI’s ML Security-Focused Open Source Tools LLM Guard Open Source Security Toolkit for LLM Interactions Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Jun 14, 2023 • 31min
The Evolved Adversarial ML Landscape; With Guest: Apostol Vassilev, NIST
Send us a textIn this episode, we explore the National Institute of Standards and Technology (NIST) white paper, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. The report is co-authored by our guest for this conversation; Apostol Vassilev, NIST Research Team Supervisor. Apostol provides insights into the motivations behind this initiative and the collaborative research methodology employed by the NIST team.Apostol shares with us that this taxonomy and terminology report is part of the Trustworthy & Responsible AI Resource Center that NIST is developing. Additional tools in the resource center include NIST’s AI Risk Management Framework (RMF), the OECD-NIST Catalogue of AI Tools and Metrics, and another crucial publication that Apostol co-authored called Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.The conversation then focuses on the evolution of adversarial ML (AdvML) attacks, including prominent techniques like prompt injection attacks, as well as other emerging threats amidst the rise of large language model applications. Apostol discusses the changing AI and computing infrastructure and the scale of defenses required as a result of these changes. Concluding the episode, Apostol shares thoughts on enhancing ML security practices and invites stakeholders to contribute to the ongoing development of the AdvML taxonomy and terminology white paper. Join us now for a thought-provoking discussion that sheds light on NIST's efforts to further define the terminology of adversarial ML and develop a comprehensive taxonomy of concepts that will aid industry leaders in creating additional standards and guides.Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out:Protect AI Guardian: Zero Trust for ML Models Recon: Automated Red Teaming for GenAI Protect AI’s ML Security-Focused Open Source Tools LLM Guard Open Source Security Toolkit for LLM Interactions Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Jun 7, 2023 • 39min
Navigating the Challenges of LLMs: Guardrails AI to the Rescue; With Guest: Shreya Rajpal
Send us a textIn “Navigating the Challenges of LLMs: Guardrails to the Rescue,” Protect AI Co-Founders, Daryan Dehghanpisheh and Badar Ahmed, interview the creator of Guardrails AI, Shreya Rajpal.Guardrails AI is an open source package that allows users to add structure, type, and quality guarantees to the outputs of large language models (LLMs). In this highly technical discussion, the group digs into Shreya’s inspiration for starting the Guardrails project, the challenges of building a deterministic “guardrail” system on top of probabilistic large language models, and the challenges in general (both technical and otherwise) that developers face when building applications for LLMs. If you’re an engineer or developer in this space looking to integrate large language models into the applications you’re building, this episode is a must-listen and highlights important security considerations.Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out:Protect AI Guardian: Zero Trust for ML Models Recon: Automated Red Teaming for GenAI Protect AI’s ML Security-Focused Open Source Tools LLM Guard Open Source Security Toolkit for LLM Interactions Huntr - The World's First AI/Machine Learning Bug Bounty Platform

May 24, 2023 • 36min
Indirect Prompt Injections and Threat Modeling of LLM Applications; With Guest: Kai Greshake
Send us a textThis talk makes it increasingly clear. The time for machine learning security operations - MLSecOps - is now. In “Indirect Prompt Injections and Threat Modeling of LLM Applications,” (transcript here -> https://bit.ly/45DYMAG) we dive deep into the world of large language model (LLM) attacks and security. Our conversation with esteemed cyber security engineer and researcher, Kai Greshake, centers around the concept of indirect prompt injections, a novel adversarial attack and vulnerability in LLM-integrated applications, which Kai has explored extensively. Our host, Daryan Dehghanpisheh, is joined by special guest-host (Red Team Director and prior show guest) Johann Rehberger to discuss Kai’s research, including the potential real-world implications of these security breaches. They also examine contrasts to traditional security injection vulnerabilities like SQL injections. The group also discusses the role of LLM applications in everyday workflows and the increased security risks posed by their integration into various industry systems, including military applications. The discussion then shifts to potential mitigation strategies and the future of AI red teaming and ML security. Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out:Protect AI Guardian: Zero Trust for ML Models Recon: Automated Red Teaming for GenAI Protect AI’s ML Security-Focused Open Source Tools LLM Guard Open Source Security Toolkit for LLM Interactions Huntr - The World's First AI/Machine Learning Bug Bounty Platform

May 17, 2023 • 33min
Responsible AI: Defining, Implementing, and Navigating the Future; With Guest: Diya Wynn
Send us a textIn this episode of The MLSecOps Podcast, Diya Wynn, Sr. Practice Manager in Responsible AI in the Machine Learning Solutions Lab at Amazon Web Services shares her background and the motivations that led her to pursue a career in Responsible AI. Diya shares her passion for work related to diversity, equity, and inclusion (DEI), and how Responsible AI offers a unique opportunity to merge her passion for DEI with what her core focus has always been: technology. She explores the definition of Responsible AI as an operating approach focused on minimizing unintended impact and maximizing benefits. The group also spends some time in this episode discussing Generative AI and its potential to perpetuate biases and raise ethical concerns. Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out:Protect AI Guardian: Zero Trust for ML Models Recon: Automated Red Teaming for GenAI Protect AI’s ML Security-Focused Open Source Tools LLM Guard Open Source Security Toolkit for LLM Interactions Huntr - The World's First AI/Machine Learning Bug Bounty Platform