
The MLSecOps Podcast

Latest episodes

Apr 2, 2025 • 41min

From Pickle Files to Polyglots: Hidden Risks in AI Supply Chains

Join Keith Hoodlet from Trail of Bits as he dives into AI/ML security, discussing everything from prompt injection and fuzzing techniques to bias testing and compliance challenges.

Full transcript with links to resources available at https://mlsecops.com/podcast/from-pickle-files-to-polyglots-hidden-risks-in-ai-supply-chains

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr: The World's First AI/Machine Learning Bug Bounty Platform
Mar 19, 2025 • 37min

Rethinking AI Red Teaming: Lessons in Zero Trust and Model Protection

This episode is a follow-up to Part 1 of our conversation with returning guest Brian Pendleton, as he challenges the way we think about red teaming and security for AI. Continuing from last week's exploration of enterprise AI adoption and high-level security considerations, the conversation now shifts to how red teaming, zero trust, and privacy concerns intertwine with AI's unique risks.

Full transcript with links to resources available at https://mlsecops.com/podcast/rethinking-ai-red-teaming-lessons-in-zero-trust-and-model-protection
Mar 13, 2025 • 41min

AI Security: Map It, Manage It, Master It

In part one of our two-part MLSecOps Podcast episode, security veteran Brian Pendleton takes us from his early hacker days to the forefront of AI security. Brian explains why mapping every AI integration is essential for uncovering vulnerabilities. He also dives into the benefits of using SBOMs over model cards for risk management and stresses the need to bridge the gap between ML and security teams to protect your enterprise AI ecosystem.

Full transcript with links to resources available at https://mlsecops.com/podcast/ai-security-map-it-manage-it-master-it
Mar 5, 2025 • 23min

Agentic AI: Tackling Data, Security, and Compliance Risks

Join host Diana Kelley and CTO Dr. Gina Guillaume-Joseph as they explore how agentic AI, robust data practices, and zero trust principles drive secure, real-time video analytics at Camio. They discuss why clean data is essential, how continuous model validation can thwart adversarial threats, and the critical balance between autonomous AI and human oversight. Dive into the world of multimodal modeling, ethical safeguards, and what it takes to ensure AI remains both innovative and risk-aware.

Full transcript with links to resources available at https://mlsecops.com/podcast/agentic-ai-tackling-data-security-and-compliance-risks
Feb 24, 2025 • 24min

AI Vulnerabilities: ML Supply Chains to LLM and Agent Exploits

Join host Dan McInerney and AI security expert Sierra Haex as they explore the evolving challenges of AI security. They discuss vulnerabilities in ML supply chains, the risks in tools like Ray and untested AI model files, and how traditional security measures intersect with emerging AI threats. The conversation also covers the rise of open-source models like DeepSeek and the security implications of deploying autonomous AI agents, offering critical insights for anyone looking to secure distributed AI systems.

Full transcript with links to resources available at https://mlsecops.com/podcast/ai-vulnerabilities-ml-supply-chains-to-llm-and-agent-exploits
Feb 14, 2025 • 39min

Implementing Enterprise AI Governance: Balancing Ethics, Innovation & Risk for Business Success

In this episode of the MLSecOps Podcast, host Charlie McCarthy sits down with Chris McClean, Global Lead for Digital Ethics at Avanade, to explore the world of responsible AI governance. They discuss how ethical principles, risk management, and robust security practices can be integrated throughout the AI lifecycle, from design and development to deployment and oversight. Learn practical strategies for building resilient AI frameworks, understanding regulatory impacts, and driving innovation safely.

Full transcript with links to resources available at https://mlsecops.com/podcast/implementing-a-robust-ai-governance-framework-for-business-success
Feb 5, 2025 • 52min

Unpacking Generative AI Red Teaming and Practical Security Solutions

In this episode, we explore LLM red teaming beyond simple "jailbreak" prompts with special guest Donato Capitella from WithSecure Consulting. You'll learn why vulnerabilities live in context, in how LLMs interact with users, tools, and documents, and discover best practices for mitigating attacks like prompt injection. Our guest also previews an open-source tool for automating security tests on LLM applications.

Full transcript with links to resources available at https://mlsecops.com/podcast/unpacking-generative-ai-red-teaming-and-practical-security-solutions
Dec 9, 2024 • 38min

AI Security: Vulnerability Detection and Hidden Model File Risks

In this episode of the MLSecOps Podcast, the team dives into the transformative potential of Vulnhuntr: zero-shot vulnerability discovery using LLMs. Madison Vorbrich hosts Dan McInerney and Marcello Salvati to discuss Vulnhuntr's ability to autonomously identify vulnerabilities, including zero-days, using large language models (LLMs) like Claude. They explore the evolution of AI tools for security, the gap between traditional and AI-based static code analysis, and how Vulnhuntr enables both developers and security teams to proactively safeguard their projects. The conversation also highlights Protect AI's bug bounty platform, huntr.com, and its expansion into model file vulnerabilities (MFVs), emphasizing the critical need to secure AI supply chains and systems.
Nov 7, 2024 • 38min

AI Governance Essentials: Empowering Procurement Teams to Navigate AI Risk

Dr. Cari Miller, an expert in AI governance and procurement, shares her insights from the AI Procurement Lab and ForHumanity. She discusses essential frameworks for mitigating AI acquisition risks, highlighting the OMB Memo M-24-18's role in guiding government AI procurements. The conversation emphasizes the importance of cross-functional collaboration and AI literacy in organizations. Dr. Miller also addresses the need for trained procurement professionals to navigate AI risks effectively, ensuring responsible and accountable AI deployment.
Oct 29, 2024 • 33min

Crossroads: AI, Cybersecurity, and How to Prepare for What's Next

Nicole Nichols, a Distinguished Engineer at Palo Alto Networks, specializes in AI security and bridging complex systems. In this discussion, she recounts her journey from mechanical engineering to AI. Topics include the critical importance of clear AI vocabularies and the intertwined concepts of fairness and safety in AI. Nicole also explores emerging threats like LLM backdoors and emphasizes the need for collaboration and a growth mindset among tech professionals to tackle evolving cybersecurity challenges in an AI-driven landscape.