
The MLSecOps Podcast

Latest episodes

Dec 9, 2024 • 38min

AI Security: Vulnerability Detection and Hidden Model File Risks

In this episode of the MLSecOps Podcast, the team dives into the transformative potential of Vulnhuntr: zero-shot vulnerability discovery using LLMs. Madison Vorbrich hosts Dan McInerney and Marcello Salvati to discuss Vulnhuntr's ability to autonomously identify vulnerabilities, including zero-days, using large language models (LLMs) like Claude. They explore the evolution of AI tools for security, the gap between traditional and AI-based static code analysis, and how Vulnhuntr enables both developers and security teams to proactively safeguard their projects. The conversation also highlights Protect AI's bug bounty platform, huntr.com, and its expansion into model file vulnerabilities (MFVs), emphasizing the critical need to secure AI supply chains and systems.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
- Protect AI Guardian: Zero Trust for ML Models
- Recon: Automated Red Teaming for GenAI
- Protect AI's ML Security-Focused Open Source Tools
- LLM Guard: Open Source Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform
Nov 7, 2024 • 38min

AI Governance Essentials: Empowering Procurement Teams to Navigate AI Risk

Full transcript with links to resources available at https://mlsecops.com/podcast/ai-governance-essentials-empowering-procurement-teams-to-navigate-ai-risk.

In this episode of the MLSecOps Podcast, Charlie McCarthy from Protect AI sits down with Dr. Cari Miller to discuss the evolving landscape of AI procurement and governance. Dr. Miller shares insights from her work with the AI Procurement Lab and ForHumanity, delving into the essential frameworks and strategies needed to mitigate risks in AI acquisitions. They cover the AI Procurement Risk Management Framework, practical ways to ensure transparency and accountability, and how the September 2024 OMB Memo M-24-18 is guiding AI acquisition in government. Dr. Miller also emphasizes the importance of cross-functional collaboration and AI literacy in supporting responsible AI procurement and deployment in organizations of all types.
Oct 29, 2024 • 33min

Crossroads: AI, Cybersecurity, and How to Prepare for What's Next

In this episode of the MLSecOps Podcast, Distinguished Engineer Nicole Nichols from Palo Alto Networks joins host and Machine Learning Scientist Mehrin Kiani to explore critical challenges in AI and cybersecurity. Nicole shares her unique journey from mechanical engineering to AI security, her thoughts on the importance of clear AI vocabularies, and the significance of bridging disciplines in securing complex systems. They dive into the nuanced definitions of AI fairness and safety, examine emerging threats like LLM backdoors, and discuss the rapidly evolving impact of autonomous AI agents on cybersecurity defense. Nicole's insights offer a fresh perspective on the future of AI-driven security, teamwork, and the growth mindset essential for professionals in this field.
Oct 1, 2024 • 41min

AI Beyond the Hype: Lessons from Cloud on Risk and Security

On this episode of the MLSecOps Podcast, we're bringing together two cybersecurity legends. Our guest is the inimitable Caleb Sima, who joins us to discuss security considerations for building and using AI, drawing on his 25+ years of cybersecurity experience. Caleb's impressive journey includes co-founding two security startups acquired by HP and Lookout, serving as Chief Security Officer at Robinhood, and currently leading cybersecurity venture studio WhiteRabbit and chairing the Cloud Security Alliance AI Safety Initiative.

Hosting this episode is Diana Kelley (CISO, Protect AI), an industry powerhouse with a long career dedicated to cybersecurity and a longtime host of this show. Together, Caleb and Diana share a thoughtful discussion full of unique insights for the MLSecOps Community of learners.
Sep 19, 2024 • 32min

Generative AI Prompt Hacking and Its Impact on AI Security & Safety

Welcome to Season 3 of the MLSecOps Podcast, brought to you by Protect AI!

In this episode, MLSecOps Community Manager Charlie McCarthy speaks with Sander Schulhoff, co-founder and CEO of Learn Prompting. Sander discusses his background in AI research, focusing on the rise of prompt engineering and its critical role in generative AI. He also shares insights into prompt security, the creation of LearnPrompting.org, and its mission to democratize prompt engineering knowledge. The episode also explores the intricacies of prompting techniques, "prompt hacking," and the impact of competitions like HackAPrompt on improving AI safety and security.
Sep 7, 2024 • 41min

The MLSecOps Podcast Season 2 Finale

This compilation contains highlights from episodes throughout Season 2 of the MLSecOps Podcast, making it a great starting point for community members who are new to the show. If a clip from this highlights reel especially interests you, note the name of the original episode it came from and check out that full-length episode for a deeper dive.

Enormous thanks to everyone who has supported this show, including our audience, Protect AI hosts, and stellar expert guests. Stay tuned for Season 3!
Jul 26, 2024 • 38min

Exploring Generative AI Risk Assessment and Regulatory Compliance

David Rosenthal, a Partner at VISCHER, shares his expertise in data and technology law with over 25 years of experience. He dives into the intricacies of the EU AI Act, discussing the challenges organizations face in compliance and how it could stifle innovation. The conversation also introduces a generative AI risk assessment tool aimed at helping organizations mitigate potential risks. Finally, they reflect on the future of AI integration into daily life and the need for adaptation amid evolving regulations.
Jul 3, 2024 • 39min

MLSecOps Culture: Considerations for AI Development and Security Teams

In this episode, we had the pleasure of welcoming Chris Van Pelt, Co-Founder and CISO of Weights & Biases, to the MLSecOps Podcast. Chris discusses a range of topics with hosts Badar Ahmed and Diana Kelley, including the history of how W&B was formed, building a culture of security and knowledge sharing across teams in an organization, real-world ML and GenAI security concerns, data lineage and tracking, and upcoming features in the Weights & Biases platform for enhancing security.

More about our guest speaker: Chris Van Pelt is a co-founder of Weights & Biases, a developer MLOps platform. In 2009, Chris founded Figure Eight/CrowdFlower. Over the past 12 years, Chris has dedicated his career to optimizing ML workflows and teaching ML practitioners, making machine learning more accessible to all. Chris has worked as a studio artist, computer scientist, and web engineer. He studied both art and computer science at Hope College.
Jun 17, 2024 • 35min

Practical Offensive and Adversarial ML for Red Teams

Next on the MLSecOps Podcast, we have the honor of highlighting one of our MLSecOps Community members and Dropbox™ Red Teamers, Adrian Wood. Adrian joined Protect AI threat researchers Dan McInerney and Marcello Salvati in the studio to share an array of insights, including what inspired him to create the Offensive ML (aka OffSec ML) Playbook, diving into categories like adversarial machine learning (ML), offensive/defensive ML, and supply chain attacks.

The group also discusses dual uses for "traditional" ML and LLMs in the realm of security, the rise of agentic LLMs, and the potential for crown-jewel data leakage via model malware (i.e., highly valuable and sensitive data being leaked out of an organization due to malicious software embedded within machine learning models or AI systems).
May 20, 2024 • 26min

Expert Talk from RSA Conference: Securing Generative AI

In this episode, host Neal Swaelens (EMEA Director of Business Development, Protect AI) catches up with Ken Huang, CISSP, at RSAC 2024 to talk about security for generative AI.
