
The MLSecOps Podcast

Latest episodes

May 13, 2024 • 38min

Practical Foundations for Securing AI

In this episode of the MLSecOps Podcast, we delve into the critical world of security for AI and machine learning with our guest Ron F. Del Rosario, Chief Security Architect and AI/ML Security Lead at SAP ISBN. The discussion highlights the contextual knowledge gap between ML practitioners and cybersecurity professionals, emphasizing the importance of cross-collaboration and foundational security practices. We explore how security for AI contrasts with security for traditional software, along with the risk profiles of first-party vs. third-party ML models. Ron sheds light on the significance of understanding your AI system's provenance, having the necessary controls, and keeping audit trails for robust security. He also discusses the "Secure AI/ML Development Framework" initiative that he launched internally within his organization, featuring a lean security checklist to streamline processes. We hope you enjoy this thoughtful conversation!

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI's ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
huntr: The World's First AI/Machine Learning Bug Bounty Platform
Apr 23, 2024 • 31min

Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex

Simon Suo, Co-founder of LlamaIndex, discusses RAG evolution, LLM security concerns, and the importance of data orchestration. He highlights the need for input/output evaluation, robust security measures, and ongoing efforts in the LLM community to address security challenges. Simon also introduces LlamaCloud, an enterprise data platform for streamlined data processing.
Mar 13, 2024 • 32min

AI Threat Research: Spotlight on the Huntr Community

Learn about huntr, the world's first bug bounty platform for AI and machine learning, including how to get involved!

This week's featured guests are leaders from the huntr community (brought to you by Protect AI):
Dan McInerney, Lead AI Threat Researcher
Marcello Salvati, Sr. Engineer & Researcher
Madison Vorbrich, Community Manager
Feb 29, 2024 • 37min

Securing AI: The Role of People, Processes & Tools in MLSecOps

In this episode of The MLSecOps Podcast, hosted by Daryan Dehghanpisheh (Protect AI) and special guest-host Martin Stanley, CISSP (Cybersecurity and Infrastructure Security Agency), we delve into critical aspects of AI security and operations. This episode features esteemed guests Gary Givental (IBM) and Kaleb Walton (FICO).

The group's discussion unfolds with insights into the evolving field of Machine Learning Security Operations, aka MLSecOps. A recap of CISA's most recent Secure by Design and Secure AI initiatives sets the stage for a dialogue that explores the parallels between MLSecOps and DevSecOps. The episode goes on to illuminate the challenges of securing AI systems, including data integrity and third-party dependencies. The conversation also covers the socio-technical facets of AI security, MLSecOps and AI security posture roles within an organization, and the interplay between people, processes, and tools essential to successful MLSecOps implementation.
Feb 27, 2024 • 36min

ReDoS Vulnerability Reports: Security Relevance vs. Noisy Nuisance

In this episode, we delve into a hot topic in the bug bounty world: ReDoS (Regular Expression Denial of Service) reports. Inspired by reports submitted by the huntr AI/ML bug bounty community and an insightful blog piece by open source expert William Woodruff (Engineering Director, Trail of Bits), this conversation explores:
Are any ReDoS vulnerabilities worth fixing?
Triaging and the impact of ReDoS reports on software maintainers.
The challenges of addressing ReDoS vulnerabilities amidst developer fatigue and resource constraints.
The evolving trends and incentives shaping the rise of ReDoS reports in bug bounty programs, and their implications for severity assessment.
Can LLMs be used to help with code analysis?

Tune in as we dissect the dynamics of ReDoS, offering insights into its implications for the bug hunting community and the evolving landscape of security for AI/ML.
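For readers unfamiliar with the class of bug this episode debates, the catastrophic backtracking behind a ReDoS report can be demonstrated in a few lines. The following is a minimal Python sketch; the pattern, input, and timings are illustrative assumptions, not drawn from any actual huntr report:

```python
import re
import time

# A classic ReDoS-prone pattern: the nested quantifiers let the engine
# split a run of 'a's between the inner and outer groups in exponentially
# many ways, all of which it tries when the overall match fails.
VULNERABLE = re.compile(r"^(a+)+$")

# An equivalent pattern without nested quantifiers: there is only one way
# to match, so a failing input is rejected in linear time.
SAFE = re.compile(r"^a+$")

def time_match(pattern, text):
    """Return (matched, elapsed_seconds) for a single match attempt."""
    start = time.perf_counter()
    matched = pattern.match(text) is not None
    return matched, time.perf_counter() - start

# A run of 'a's ending in 'b' can never match; that guaranteed failure is
# exactly what triggers catastrophic backtracking in the nested pattern.
evil_input = "a" * 22 + "b"

ok_v, t_v = time_match(VULNERABLE, evil_input)
ok_s, t_s = time_match(SAFE, evil_input)
print(f"vulnerable: matched={ok_v} in {t_v:.4f}s")
print(f"safe:       matched={ok_s} in {t_s:.6f}s")
```

Both patterns accept exactly the same strings, which is part of why these reports are contentious: the fix is often a trivial rewrite, but the practical severity depends on whether attacker-controlled input ever reaches the pattern.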
Feb 15, 2024 • 42min

Finding a Balance: LLMs, Innovation, and Security

Explore the challenges of managing large language models and balancing innovation with security in the dynamic world of AI. Learn about the risks and rewards of AI integration, addressing bias in AI systems, navigating security risks in open source models, trust issues with AI tools, and the evolving threats in machine learning models.
Feb 13, 2024 • 39min

Secure AI Implementation and Governance

Nick James, CEO of WhitegloveAI, discusses AI governance, ISO standards, and continuous improvement for AI security with host Chris King. They explore the importance of ethical AI development, risks of AI implementation, and the role of AI in enhancing cybersecurity. They emphasize the need for continuous risk assessments and adherence to technical standards for successful AI implementation and governance.
Feb 6, 2024 • 38min

Risk Management and Enhanced Security Practices for AI Systems

In this episode, Omar Khawaja and Diana Kelley discuss a new framework for understanding AI risks, building a security-minded culture around AI, and challenges faced by CISOs in assessing risk. They explore supply chain security in AI systems, emphasize the importance of data provenance tracking, and highlight the challenges in securing the software supply chain for AI and ML systems.
Nov 28, 2023 • 41min

Evaluating Real-World Adversarial ML Attack Risks and Effective Management: Robustness vs Non-ML Mitigations

The speakers discuss customers' and clients' concerns about the security of AI applications and machine learning systems, and draw a distinction between robustness and security in the context of adversarial attacks on ML models. They cover mitigations for robust ML, including data encryption and secure backups; the use of cryptographic signatures and supply chain validation to protect against data poisoning; and techniques such as model inversion and differential privacy in adversarial ML. They close by emphasizing the importance of building effective machine learning models with clear goals.
Oct 24, 2023 • 43min

From Risk to Responsibility: Violet Teaming in AI; With Guest: Alexander Titus

Guest Alexander Titus, Founder & CEO of The In Vivo Group, discusses risks in AI and biotech and the need to balance innovation with caution. The conversation explores the AI model lifecycle, regulation vs. profitability, and the challenges of ensuring safety in AI and biotech, with an emphasis on responsible AI in healthcare and the life sciences.
