
The MLSecOps Podcast

Latest episodes

Jul 3, 2024 • 39min

MLSecOps Culture: Considerations for AI Development and Security Teams

In this episode, we had the pleasure of welcoming Co-Founder and CISO of Weights & Biases, Chris Van Pelt, to the MLSecOps Podcast. Chris discusses a range of topics with hosts Badar Ahmed and Diana Kelley, including the history of how W&B was formed, building a culture of security & knowledge sharing across teams in an organization, real-world ML and GenAI security concerns, data lineage and tracking, and upcoming features in the Weights & Biases platform for enhancing security.

More about our guest speaker: Chris Van Pelt is a co-founder of Weights & Biases, a developer MLOps platform. In 2009, Chris founded Figure Eight/CrowdFlower. Over the past 12 years, Chris has dedicated his career to optimizing ML workflows and teaching ML practitioners, making machine learning more accessible to all. Chris has worked as a studio artist, computer scientist, and web engineer. He studied both art and computer science at Hope College.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
- Protect AI Guardian: Zero Trust for ML Models
- Recon: Automated Red Teaming for GenAI
- Protect AI's ML Security-Focused Open Source Tools
- LLM Guard: Open Source Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform
Jun 17, 2024 • 35min

Practical Offensive and Adversarial ML for Red Teams

Next on the MLSecOps Podcast, we have the honor of highlighting one of our MLSecOps Community members and Dropbox™ Red Teamers, Adrian Wood. Adrian joined Protect AI threat researchers Dan McInerney and Marcello Salvati in the studio to share an array of insights, including what inspired him to create the Offensive ML (aka OffSec ML) Playbook, and to dive into categories like adversarial machine learning (ML), offensive/defensive ML, and supply chain attacks.

The group also discusses dual uses for "traditional" ML and LLMs in the realm of security, the rise of agentic LLMs, and the potential for crown jewel data leakage via model malware (i.e., highly valuable and sensitive data being leaked out of an organization due to malicious software embedded within machine learning models or AI systems).
May 20, 2024 • 26min

Expert Talk from RSA Conference: Securing Generative AI

In this episode, host Neal Swaelens (EMEA Director of Business Development, Protect AI) catches up with Ken Huang, CISSP, at RSAC 2024 to talk about security for generative AI.
May 13, 2024 • 38min

Practical Foundations for Securing AI

In this episode of the MLSecOps Podcast, we delve into the critical world of security for AI and machine learning with our guest Ron F. Del Rosario, Chief Security Architect and AI/ML Security Lead at SAP ISBN. The discussion highlights the contextual knowledge gap between ML practitioners and cybersecurity professionals, emphasizing the importance of cross-collaboration and foundational security practices. We contrast security for AI with security for traditional software, along with the risk profiles of first-party vs. third-party ML models.

Ron sheds light on the significance of understanding your AI system's provenance, having the necessary controls in place, and maintaining audit trails for robust security. He also discusses the "Secure AI/ML Development Framework" initiative that he launched internally within his organization, featuring a lean security checklist to streamline processes. We hope you enjoy this thoughtful conversation!
Apr 23, 2024 • 31min

Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex

Simon Suo, Co-founder of LlamaIndex, discusses RAG evolution, LLM security concerns, and the importance of data orchestration. He highlights the need for input/output evaluation, robust security measures, and ongoing efforts in the LLM community to address security challenges. Simon also introduces LlamaCloud, an enterprise data platform for streamlined data processing.
Mar 13, 2024 • 32min

AI Threat Research: Spotlight on the Huntr Community

Learn about the world's first bug bounty platform for AI & machine learning, huntr, including how to get involved!

This week's featured guests are leaders from the huntr community (brought to you by Protect AI):
- Dan McInerney, Lead AI Threat Researcher
- Marcello Salvati, Sr. Engineer & Researcher
- Madison Vorbrich, Community Manager
Feb 29, 2024 • 37min

Securing AI: The Role of People, Processes & Tools in MLSecOps

In this episode of The MLSecOps Podcast, hosted by Daryan Dehghanpisheh (Protect AI) and special guest host Martin Stanley, CISSP (Cybersecurity and Infrastructure Security Agency), we delve into critical aspects of AI security and operations. This episode features esteemed guests Gary Givental (IBM) and Kaleb Walton (FICO).

The group's discussion unfolds with insights into the evolving field of Machine Learning Security Operations, aka MLSecOps. A recap of CISA's most recent Secure by Design and Secure AI initiatives sets the stage for a dialogue that explores the parallels between MLSecOps and DevSecOps. The episode goes on to illuminate the challenges of securing AI systems, including data integrity and third-party dependencies. The conversation also travels to the socio-technical facets of AI security, explores MLSecOps and AI security posture roles within an organization, and examines the interplay between the people, processes, and tools essential to successful MLSecOps implementation.
Feb 27, 2024 • 36min

ReDoS Vulnerability Reports: Security Relevance vs. Noisy Nuisance

In this episode, we delve into a hot topic in the bug bounty world: ReDoS (Regular Expression Denial of Service) reports. Inspired by reports submitted by the huntr AI/ML bug bounty community and an insightful blog piece by open source expert William Woodruff (Engineering Director, Trail of Bits), this conversation explores:
- Are any ReDoS vulnerabilities worth fixing?
- Triaging and the impact of ReDoS reports on software maintainers.
- The challenges of addressing ReDoS vulnerabilities amidst developer fatigue and resource constraints.
- The evolving trends and incentives shaping the rise of ReDoS reports in bug bounty programs, and their implications for severity assessment.
- Can LLMs be used to help with code analysis?

Tune in as we dissect the dynamics of ReDoS, offering insights into its implications for the bug hunting community and the evolving landscape of security for AI/ML.
Feb 15, 2024 • 42min

Finding a Balance: LLMs, Innovation, and Security

Explore the challenges of managing large language models and balancing innovation with security in the dynamic world of AI. Learn about the risks and rewards of AI integration, addressing bias in AI systems, navigating security risks in open source models, trust issues with AI tools, and the evolving threats in machine learning models.
Feb 13, 2024 • 39min

Secure AI Implementation and Governance

Nick James, CEO of WhitegloveAI, discusses AI governance, ISO standards, and continuous improvement for AI security with host Chris King. They explore the importance of ethical AI development, risks of AI implementation, and the role of AI in enhancing cybersecurity. They emphasize the need for continuous risk assessments and adherence to technical standards for successful AI implementation and governance.