

The MLSecOps Podcast
MLSecOps.com
Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a. MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today. Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.
Episodes

Jul 21, 2025 • 24min
Season 3 Finale: Top Insights, Hacks, and Lessons from the Frontlines of AI Security
To close out Season 3, we're revisiting the standout insights, wildest vulnerabilities, and most practical lessons shared by 20+ AI practitioners, researchers, and industry leaders shaping the future of AI security. If you're building, breaking, or defending AI/ML systems, this is your must-listen roundup. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/season-3-finale-top-insights-hacks-and-lessons-from-the-frontlines-of-ai-security

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI's ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Jul 16, 2025 • 54min
Breaking and Securing Real-World LLM Apps
Fresh off their OWASP AppSec EU talk, Rico Komenda and Javan Rasokat join Charlie McCarthy to share real-world insights on breaking and securing LLM-integrated systems. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/breaking-and-securing-real-world-llm-apps

Jul 9, 2025 • 42min
How Red Teamers Are Exposing Flaws in AI Pipelines
Prolific bug bounty hunter and Offensive Security Lead at Toreon, Robbe Van Roey (PinkDraconian), joins the MLSecOps Podcast to break down how he discovered RCEs in BentoML and LangChain, the risks of unsafe model serialization, and his approach to red teaming AI systems. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/how-red-teamers-are-exposing-flaws-in-ai-pipelines

Jun 25, 2025 • 34min
Securing AI for Government: Inside the Leidos + Protect AI Partnership
On this episode of the MLSecOps Podcast, Rob Linger, Information Advantage Practice Lead at Leidos, joins hosts Jessica Souder, Director of Government and Defense at Protect AI, and Charlie McCarthy to explore what it takes to deploy secure AI/ML systems in government environments. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/securing-ai-for-government-inside-the-leidos-protect-ai-partnership

Jun 13, 2025 • 49min
Holistic AI Pentesting Playbook
Jason Haddix, CEO of Arcanum Information Security, joins the MLSecOps Podcast to share his methods for assessing and defending AI systems. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/holistic-ai-pentesting-playbook

May 21, 2025 • 32min
AI Agent Security: Threats & Defenses for Modern Deployments
Researchers Yifeng (Ethan) He and Peter Rong join host Madi Vorbrich to break down their paper "Security of AI Agents." They explore real-world AI agent threats, like session hijacks and tool-based jailbreaks, and share practical defenses, from sandboxing to agent-to-agent protocols. Full transcript with links to resources available at https://mlsecops.com/podcast/ai-agent-security-threats-defenses-for-modern-deployments

May 14, 2025 • 24min
Autonomous Agents Beyond the Hype
Part 2 with Gavin Klondike dives into autonomous AI agents: how they really work, the attack paths they open, and practical defenses like least-privilege APIs and out-of-band auth. A must-listen roadmap for anyone building, or defending, the next generation of AI applications. Full transcript with links to resources available at https://mlsecops.com/podcast/autonomous-agents-beyond-the-hype

Apr 30, 2025 • 26min
Beyond Prompt Injection: AI’s Real Security Gaps
In Part 1 of this two-part MLSecOps Podcast, Principal Security Consultant Gavin Klondike joins Dan and Marcello to break down the real threats facing AI systems today. From prompt injection misconceptions to indirect exfiltration via markdown and the failures of MLOps security practices, Gavin unpacks what the industry gets wrong, and how to fix it. Full transcript with links to resources available at https://mlsecops.com/podcast/beyond-prompt-injection-ais-real-security-gaps

Apr 21, 2025 • 24min
What’s Hot in AI Security at RSA Conference 2025?
What's really hot at RSA Conference 2025? MLSecOps Community Manager Madi Vorbrich sits down with Protect AI Co-Founder Daryan "D" Dehghanpisheh for a rapid rundown of must-see sessions, booth events, and emerging AI-security trends, from GenAI agents to zero-trust AI and million-model scans. Use this episode to build a bullet-proof RSA agenda before you land in San Francisco. Full transcript with links to resources available at https://mlsecops.com/podcast/whats-hot-in-ai-security-at-rsa-conference-2025

Apr 16, 2025 • 36min
Unpacking the Cloud Security Alliance AI Controls Matrix
In this episode of the MLSecOps Podcast, we sit down with three expert contributors from the Cloud Security Alliance's AI Controls Matrix working group. They reveal how this newly released framework addresses emerging AI threats, like model poisoning and adversarial manipulation, through robust technical controls, detailed implementation guidelines, and clear auditing strategies. Full transcript with links to resources available at https://mlsecops.com/podcast/unpacking-the-cloud-security-alliance-ai-controls-matrix