
The MLSecOps Podcast

Latest episodes

Jul 9, 2025 • 42min

How Red Teamers Are Exposing Flaws in AI Pipelines

Prolific bug bounty hunter and Offensive Security Lead at Toreon, Robbe Van Roey (PinkDraconian), joins the MLSecOps Podcast to break down how he discovered RCEs in BentoML and LangChain, the risks of unsafe model serialization, and his approach to red teaming AI systems.

Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/how-red-teamers-are-exposing-flaws-in-ai-pipelines

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr: The World's First AI/Machine Learning Bug Bounty Platform
Jun 25, 2025 • 34min

Securing AI for Government: Inside the Leidos + Protect AI Partnership

On this episode of the MLSecOps Podcast, Rob Linger, Information Advantage Practice Lead at Leidos, joins hosts Jessica Souder, Director of Government and Defense at Protect AI, and Charlie McCarthy to explore what it takes to deploy secure AI/ML systems in government environments.

Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/securing-ai-for-government-inside-the-leidos-protect-ai-partnership
Jun 13, 2025 • 49min

Holistic AI Pentesting Playbook

Jason Haddix, CEO of Arcanum Information Security, joins the MLSecOps Podcast to share his methods for assessing and defending AI systems.

Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/holistic-ai-pentesting-playbook
May 21, 2025 • 32min

AI Agent Security: Threats & Defenses for Modern Deployments

Researchers Yifeng (Ethan) He and Peter Rong join host Madi Vorbrich to break down their paper "Security of AI Agents." They explore real-world AI agent threats, like session hijacks and tool-based jailbreaks, and share practical defenses, from sandboxing to agent-to-agent protocols.

Full transcript with links to resources available at https://mlsecops.com/podcast/ai-agent-security-threats-defenses-for-modern-deployments
May 14, 2025 • 24min

Autonomous Agents Beyond the Hype

Part 2 with Gavin Klondike dives into autonomous AI agents: how they really work, the attack paths they open, and practical defenses like least-privilege APIs and out-of-band authentication. A must-listen roadmap for anyone building or defending the next generation of AI applications.

Full transcript with links to resources available at https://mlsecops.com/podcast/autonomous-agents-beyond-the-hype
Apr 30, 2025 • 26min

Beyond Prompt Injection: AI’s Real Security Gaps

In Part 1 of this two-part MLSecOps Podcast, Principal Security Consultant Gavin Klondike joins Dan and Marcello to break down the real threats facing AI systems today. From prompt injection misconceptions to indirect exfiltration via markdown and the failures of MLOps security practices, Gavin unpacks what the industry gets wrong, and how to fix it.

Full transcript with links to resources available at https://mlsecops.com/podcast/beyond-prompt-injection-ais-real-security-gaps
Apr 21, 2025 • 24min

What’s Hot in AI Security at RSA Conference 2025?

What’s really hot at RSA Conference 2025? MLSecOps Community Manager Madi Vorbrich sits down with Protect AI Co-Founder Daryan “D” Dehghanpisheh for a rapid rundown of must-see sessions, booth events, and emerging AI security trends, from GenAI agents to zero-trust AI and million-model scans. Use this episode to build a bullet-proof RSA agenda before you land in San Francisco.

Full transcript with links to resources available at https://mlsecops.com/podcast/whats-hot-in-ai-security-at-rsa-conference-2025
Apr 16, 2025 • 36min

Unpacking the Cloud Security Alliance AI Controls Matrix

In this episode of the MLSecOps Podcast, we sit down with three expert contributors from the Cloud Security Alliance’s AI Controls Matrix working group. They reveal how this newly released framework addresses emerging AI threats, like model poisoning and adversarial manipulation, through robust technical controls, detailed implementation guidelines, and clear auditing strategies.

Full transcript with links to resources available at https://mlsecops.com/podcast/unpacking-the-cloud-security-alliance-ai-controls-matrix
Apr 2, 2025 • 41min

From Pickle Files to Polyglots: Hidden Risks in AI Supply Chains

Join Keith Hoodlet from Trail of Bits as he dives into AI/ML security, discussing everything from prompt injection and fuzzing techniques to bias testing and compliance challenges.

Full transcript with links to resources available at https://mlsecops.com/podcast/from-pickle-files-to-polyglots-hidden-risks-in-ai-supply-chains
Mar 19, 2025 • 37min

Rethinking AI Red Teaming: Lessons in Zero Trust and Model Protection

This episode is a follow-up to Part 1 of our conversation with returning guest Brian Pendleton, as he challenges the way we think about red teaming and security for AI. Continuing from last week’s exploration of enterprise AI adoption and high-level security considerations, the conversation now shifts to how red teaming, zero trust, and privacy concerns intertwine with AI’s unique risks.

Full transcript with links to resources available at https://mlsecops.com/podcast/rethinking-ai-red-teaming-lessons-in-zero-trust-and-model-protection
