

The MLSecOps Podcast
MLSecOps.com
Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a. MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today. Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.
Episodes

May 10, 2023 • 39min
ML Security: AI Incident Response Plans and Enterprise Risk Culture; With Guest: Patrick Hall
In this episode of The MLSecOps Podcast, Patrick Hall, co-founder of BNH.AI and author of "Machine Learning for High-Risk Applications," discusses the importance of responsible AI implementation and risk management. He shares real-world examples of incidents that resulted from inadequate AI and machine learning risk management, underscoring the need for governance, security, and auditability from an MLSecOps perspective. The episode also covers the cultural elements and capabilities organizations need to build for a more responsible AI implementation, the key technical components of AI risk management, and the challenges enterprises face when trying to implement responsible AI practices, including improvements to a data science culture that some might argue lacks authentic "science" and scientific practices. Also discussed are the unique challenges posed by large language models in terms of data privacy, bias management, and other incidents. Finally, Hall offers practical advice on using the NIST AI Risk Management Framework to improve an organization's AI security posture, and explains how BNH.AI can help those in risk management, compliance, general counsel, and other roles. Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out: Protect AI Guardian (zero trust for ML models), Recon (automated red teaming for GenAI), Protect AI's ML security-focused open source tools, LLM Guard (an open source security toolkit for LLM interactions), and Huntr (the world's first AI/machine learning bug bounty platform).

May 3, 2023 • 41min
AI Audits: Uncovering Risks in ML Systems; With Guest: Shea Brown, PhD
Shea Brown, PhD, explores with us the "W's" and security practices related to AI and algorithm audits. What is included in an AI audit? Who is requesting AI audits and, conversely, who isn't requesting them but should be? When should organizations request a third-party audit of their AI/ML systems and machine learning algorithms, and why should they do so? What are some organizational risks and potential public harms that could result from not auditing AI/ML systems? What are some next steps to take if the results of your audit are unsatisfactory or noncompliant? Shea Brown, PhD, is the Founder and CEO of BABL AI and a faculty member in the Department of Physics & Astronomy at the University of Iowa.

Apr 26, 2023 • 40min
MLSecOps: Red Teaming, Threat Modeling, and Attack Methods of AI Apps; With Guest: Johann Rehberger
Johann Rehberger is an entrepreneur and Red Team Director at Electronic Arts. His career experience includes time with Microsoft and Uber, and he is the author of "Cybersecurity Attacks – Red Team Strategies: A practical guide to building a penetration testing program having homefield advantage" and the popular blog EmbraceTheRed.com. In this episode, Johann offers insights about how to apply a traditional security engineering mindset and red team approach to analyzing the AI/ML attack surface. We also discuss ways that organizations can adapt their traditional security postures to address the unique challenges of ML security.

Apr 18, 2023 • 40min
MITRE ATLAS: Defining the ML System Attack Chain and Need for MLSecOps; With Guest: Christina Liaghati, PhD
Dr. Christina Liaghati, AI Strategy Execution & Operations Manager at MITRE, dives into AI security challenges, spotlighting the MITRE ATLAS framework and its evolution from traditional cybersecurity. She discusses real-world case studies, including a notorious theft, demonstrating the complexities of adversarial machine learning. The conversation emphasizes tailored strategies for safeguarding machine learning systems, advocating for collaborative efforts in the community and addressing regulatory challenges to ensure robust security in an evolving landscape.

Apr 11, 2023 • 39min
Unpacking AI Bias: Impact, Detection, Prevention, and Policy; With Guest: Dr. Cari Miller, MBA, FHCA
What is AI bias, and how does it impact both organizations and individual members of society? How can someone detect whether they've been affected by AI bias? What can be done to prevent or mitigate it? Can AI/ML systems be audited for bias and, if so, how? The MLSecOps Podcast explores these questions and more with guest Cari Miller, Founder of the Center for Inclusive Change and member of the For Humanity Board of Directors. This week's episode delves into the often contentious topics of Trusted and Ethical AI within the realm of MLSecOps, offering insightful discussion and thoughtful perspectives. It also highlights the importance of continuing the conversation around AI bias and working toward more ethical and fair AI/ML systems.

Mar 28, 2023 • 39min
A Closer Look at "Adversarial Robustness for Machine Learning" With Guest: Pin-Yu Chen
In this episode of The MLSecOps Podcast, the co-hosts interview Pin-Yu Chen, Principal Research Scientist at IBM Research, about "Adversarial Robustness for Machine Learning," the book he co-authored with Cho-Jui Hsieh. Chen explores the vulnerabilities of machine learning (ML) models to adversarial attacks and provides examples of how to enhance their robustness. The discussion delves into the difference between Trustworthy AI and Trustworthy ML, as well as the concept of practical attacks on LLMs, which take into account the practical constraints of an attacker. Chen also discusses security measures that can be taken to protect ML systems and emphasizes the importance of considering security across the entire model lifecycle. Finally, the conversation concludes with a discussion of how businesses can justify the cost and value of implementing adversarial defense methods in their ML systems.

Mar 28, 2023 • 48min
Just How Practical Are Data Poisoning Attacks? With Guest: Dr. Florian Tramèr
Dr. Florian Tramèr, Assistant Professor of Computer Science at ETH Zürich, joins us to talk about data poisoning attacks and the intersection of adversarial ML and MLSecOps (machine learning security operations).

Mar 28, 2023 • 31min
A Closer Look at "Securing AIML Systems in the Age of Information Warfare" With Guest: Disesdi Susanna Cox
Disesdi Susanna Cox, a security researcher and AI/ML architect with a background in politics, dives into the intersection of AI and information warfare. She shares her experiences with AI security challenges and the need for robust defenses. Highlighting the importance of auditing in AI systems, she discusses vulnerabilities and the danger of prioritizing innovation over security. The conversation wraps up with a call for stronger collaboration between security and AI experts to enhance resilience in ML models, urging listeners to engage in this vital dialogue.