

The MLSecOps Podcast
MLSecOps.com
Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a. MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today. Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.
Episodes

Jul 21, 2025 • 24min
Season 3 Finale: Top Insights, Hacks, and Lessons from the Frontlines of AI Security
To close out Season 3, we're revisiting the standout insights, wildest vulnerabilities, and most practical lessons shared by 20+ AI practitioners, researchers, and industry leaders shaping the future of AI security. If you're building, breaking, or defending AI/ML systems, this is your must-listen roundup. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/season-3-finale-top-insights-hacks-and-lessons-from-the-frontlines-of-ai-security. Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out: Protect AI Guardian, Zero Trust for ML Models; Recon, Automated Red Teaming for GenAI; Protect AI's ML Security-Focused Open Source Tools; LLM Guard, Open Source Security Toolkit for LLM Interactions; Huntr, the World's First AI/Machine Learning Bug Bounty Platform.

Jul 16, 2025 • 54min
Breaking and Securing Real-World LLM Apps
Rico Komenda, an AI security specialist at Adesso SE, and Javan Rasokat from Sage share their expertise on securing LLM-integrated systems. They dive into prompt injection attacks, explaining their seriousness and potential risks. The duo discusses how vulnerabilities extend beyond models to data pipelines and APIs, highlighting the need for robust security measures. They also tackle the concept of AI firewalls and innovative strategies to enhance application security. Their insights on the evolving landscape of AI security are both timely and crucial.

Jul 9, 2025 • 42min
How Red Teamers Are Exposing Flaws in AI Pipelines
Robbe Van Roey, known as PinkDraconian, serves as the Offensive Security Lead at Toreon and is a renowned bug bounty hunter focused on AI frameworks. He shares his journey from hobby hacking to discovering critical vulnerabilities in AI systems such as BentoML and LangChain. Robbe discusses the dangers of Python pickling for model serialization, exposing risks like remote code execution. He emphasizes the importance of safe alternatives and how red teaming can uncover hidden bugs. His insights also include strategies for improving AI security and the significance of public CVEs in career growth.

Jun 25, 2025 • 34min
Securing AI for Government: Inside the Leidos + Protect AI Partnership
On this episode of the MLSecOps Podcast, Rob Linger, Information Advantage Practice Lead at Leidos, joins hosts Jessica Souder, Director of Government and Defense at Protect AI, and Charlie McCarthy to explore what it takes to deploy secure AI/ML systems in government environments. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/securing-ai-for-government-inside-the-leidos-protect-ai-partnership.

Jun 13, 2025 • 49min
Holistic AI Pentesting Playbook
Jason Haddix, CEO of Arcanum Information Security, joins the MLSecOps Podcast to share his methods for assessing and defending AI systems. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/holistic-ai-pentesting-playbook.

May 21, 2025 • 32min
AI Agent Security: Threats & Defenses for Modern Deployments
Yifeng (Ethan) He, a PhD candidate at UC Davis specializing in software and AI security, and Peter Rong, a researcher focused on vulnerabilities in AI agents, discuss the critical threats facing AI agents. They break down issues like session hijacks and tool-based jailbreaks, highlighting the shortcomings of current defenses. The duo also advocates for effective sandboxing and agent-to-agent protocols, sharing practical strategies for securing AI deployments and emphasizing the importance of a zero-trust approach in agent security.

May 14, 2025 • 24min
Autonomous Agents Beyond the Hype
Part 2 with Gavin Klondike dives into autonomous AI agents: how they really work, the attack paths they open, and practical defenses like least-privilege APIs and out-of-band auth. A must-listen roadmap for anyone building or defending the next generation of AI applications. Full transcript with links to resources available at https://mlsecops.com/podcast/autonomous-agents-beyond-the-hype

Apr 30, 2025 • 26min
Beyond Prompt Injection: AI’s Real Security Gaps
In Part 1 of this two-part MLSecOps Podcast, Principal Security Consultant Gavin Klondike joins Dan and Marcello to break down the real threats facing AI systems today. From prompt injection misconceptions to indirect exfiltration via markdown and the failures of MLOps security practices, Gavin unpacks what the industry gets wrong and how to fix it. Full transcript with links to resources available at https://mlsecops.com/podcast/beyond-prompt-injection-ais-real-security-gaps

Apr 21, 2025 • 24min
What’s Hot in AI Security at RSA Conference 2025?
What's really hot at RSA Conference 2025? MLSecOps Community Manager Madi Vorbrich sits down with Protect AI Co-Founder Daryan "D" Dehghanpisheh for a rapid rundown of must-see sessions, booth events, and emerging AI security trends, from GenAI agents to zero-trust AI and million-model scans. Use this episode to build a bulletproof RSA agenda before you land in San Francisco. Full transcript with links to resources available at https://mlsecops.com/podcast/whats-hot-in-ai-security-at-rsa-conference-2025

Apr 16, 2025 • 36min
Unpacking the Cloud Security Alliance AI Controls Matrix
In this episode of the MLSecOps Podcast, we sit down with three expert contributors from the Cloud Security Alliance's AI Controls Matrix working group. They reveal how this newly released framework addresses emerging AI threats, like model poisoning and adversarial manipulation, through robust technical controls, detailed implementation guidelines, and clear auditing strategies. Full transcript with links to resources available at https://mlsecops.com/podcast/unpacking-the-cloud-security-alliance-ai-controls-matrix