The MLSecOps Podcast

MLSecOps.com
Feb 29, 2024 • 37min

Securing AI: The Role of People, Processes & Tools in MLSecOps

In this episode of The MLSecOps Podcast, hosted by Daryan Dehghanpisheh (Protect AI) and special guest-host Martin Stanley, CISSP (Cybersecurity and Infrastructure Security Agency), we delve into critical aspects of AI security and operations. This episode features esteemed guests Gary Givental (IBM) and Kaleb Walton (FICO).

The group's discussion unfolds with insights into the evolving field of Machine Learning Security Operations (MLSecOps). A recap of CISA's most recent Secure by Design and Secure AI initiatives sets the stage for a dialogue that explores the parallels between MLSecOps and DevSecOps. The episode goes on to illuminate the challenges of securing AI systems, including data integrity and third-party dependencies. The conversation also covers the socio-technical facets of AI security, MLSecOps and AI security posture roles within an organization, and the interplay between people, processes, and tools essential to successful MLSecOps implementation.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out:
- Protect AI Guardian: Zero Trust for ML Models
- Recon: Automated Red Teaming for GenAI
- Protect AI's ML Security-Focused Open Source Tools
- LLM Guard: Open Source Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform
Feb 27, 2024 • 36min

ReDoS Vulnerability Reports: Security Relevance vs. Noisy Nuisance

In this episode, we delve into a hot topic in the bug bounty world: ReDoS (Regular Expression Denial of Service) reports. Inspired by reports submitted by the huntr AI/ML bug bounty community and an insightful blog piece by open source expert William Woodruff (Engineering Director, Trail of Bits), this conversation explores:
- Are any ReDoS vulnerabilities worth fixing?
- Triaging and the impact of ReDoS reports on software maintainers.
- The challenges of addressing ReDoS vulnerabilities amidst developer fatigue and resource constraints.
- The evolving trends and incentives shaping the rise of ReDoS reports in bug bounty programs, and their implications for severity assessment.
- Can LLMs be used to help with code analysis?

Tune in as we dissect the dynamics of ReDoS, offering insights into its implications for the bug hunting community and the evolving landscape of security for AI/ML.
Feb 15, 2024 • 42min

Finding a Balance: LLMs, Innovation, and Security

Explore the challenges of managing large language models and balancing innovation with security in the dynamic world of AI. Learn about the risks and rewards of AI integration, addressing bias in AI systems, navigating security risks in open source models, trust issues with AI tools, and the evolving threats in machine learning models.
Feb 13, 2024 • 39min

Secure AI Implementation and Governance

Nick James, CEO of WhitegloveAI, discusses AI governance, ISO standards, and continuous improvement for AI security with host Chris King. They explore the importance of ethical AI development, risks of AI implementation, and the role of AI in enhancing cybersecurity. They emphasize the need for continuous risk assessments and adherence to technical standards for successful AI implementation and governance.
Feb 6, 2024 • 38min

Risk Management and Enhanced Security Practices for AI Systems

In this episode, Omar Khawaja and Diana Kelley discuss a new framework for understanding AI risks, building a security-minded culture around AI, and challenges faced by CISOs in assessing risk. They explore supply chain security in AI systems, emphasize the importance of data provenance tracking, and highlight the challenges in securing the software supply chain for AI and ML systems.
Nov 28, 2023 • 41min

Evaluating Real-World Adversarial ML Attack Risks and Effective Management: Robustness vs Non-ML Mitigations

Speakers discuss the concerns of customers and clients regarding the security of AI applications and machine learning systems. They explore the distinction between robustness and security in adversarial attacks on ML models, and the concept of mitigations in robust ML, including data encryption and secure backups. The use of cryptographic signatures for data and supply chain validation to protect against data poisoning is examined. Techniques of model inversion and differential privacy in adversarial ML are explained, and the importance of building effective machine learning models with clear goals is emphasized.
Oct 24, 2023 • 43min

From Risk to Responsibility: Violet Teaming in AI; With Guest: Alexander Titus

Guest Alexander Titus, Founder & CEO of The In Vivo Group, discusses risks in AI and biotech and the need to balance innovation with caution. He explores the AI model lifecycle, regulations vs. profitability, and the challenges of ensuring safety in AI and biotech, emphasizing responsible AI in healthcare and life sciences.
Oct 18, 2023 • 40min

Cybersecurity of Tomorrow: Exploring the Future of Security and Governance for AI Systems; With Guest: Martin Stanley, CISSP

This episode is also available in video format on YouTube.

Welcome to Season 2 of The MLSecOps Podcast! In this episode, we are joined by Martin Stanley, CISSP, Strategic Technology Branch Chief at the Cybersecurity and Infrastructure Security Agency (CISA), to celebrate 20 years of Cybersecurity Awareness Month and to hear his expert and thoughtful insights on CISA initiatives, partnering with the National Institute of Standards and Technology (NIST) to promote adoption of its AI Risk Management Framework, AI security and governance, and much more. We are so grateful to Martin for joining us for this enlightening talk!
Sep 21, 2023 • 42min

AI/ML Security in Retrospect: Insights from Season 1 of The MLSecOps Podcast (Part 2)

This episode is also available in video format on YouTube.

Welcome back, everyone, to The MLSecOps Podcast. We're thrilled to have you with us for Part 2 of our two-part season finale, as we wrap up Season 1 and look forward to an exciting and revamped Season 2.

In this two-part season recap, we've been revisiting some of the most captivating discussions from our first season, offering an overview of essential topics related to AI and machine learning security.

Part 1 of this series focused on topics like adversarial machine learning, ML supply chain vulnerabilities, and red teaming for AI/ML. Here in Part 2, we've handpicked some standout moments from Season 1 episodes that will take you on a tour through categories such as model provenance; governance, risk, & compliance; and Trusted AI. Our wonderful guests on the show delve into topics like defining responsible AI, bias detection and prevention, model fairness, AI audits, incident response plans, privacy engineering, and the significance of data in MLSecOps.

These episodes have been a testament to the expertise and insights of our fantastic guests, and we're excited to share their wisdom with you once again. Whether you're a long-time listener or joining us for the first time, there's something here for everyone. If you missed the full-length versions of any of these thought-provoking discussions or simply want to revisit them, you can find links to the full episodes and transcripts on our website at www.mlsecops.com/podcast.

Chapters:
0:00 Opening
0:25 Intro by Charlie McCarthy
2:29 S1E9 with Guest Diya Wynn
6:32 S1E4 with Guest Dr. Cari Miller, CMP, FHCA
11:03 S1E17 with Guest Nick Schmidt
16:46 S1E7 with Guest Shea Brown, PhD
22:06 S1E8 with Guest Patrick Hall
26:12 S1E14 with Guest Katharine Jarmul
32:01 S1E13 with Guest Jennifer Prendki, PhD
36:44 S1E18 with Guest Rob van der Veer
Sep 19, 2023 • 37min

AI/ML Security in Retrospect: Insights from Season 1 of The MLSecOps Podcast (Part 1)

This episode is also available in video format on YouTube.

Welcome to the final episode of the first season of The MLSecOps Podcast, brought to you by the team at Protect AI.

In this two-part episode, we'll be taking a look back at some favorite highlights from the season, where we dove deep into machine learning security operations. In this first part, we'll be revisiting clips related to things like adversarial machine learning; how malicious actors can use AI to fool machine learning systems into making incorrect decisions; supply chain vulnerabilities; and red teaming for AI/ML, including how security professionals might simulate attacks on their own systems to detect and mitigate vulnerabilities.

If you're new to the show, or if you could use a refresher on any of these topics, this episode is for you, as it's a great place for listeners to start their learning journey with us and work backwards based on individual interests. And when something in this recap piques your interest, be sure to check out the transcript for links to the full-length episodes where each of these clips came from. You can visit the website and read the transcripts at www.mlsecops.com/podcast.

So now, we invite you to sit back, relax, and enjoy this Season 1 recap of some of the most important MLSecOps topics of the year. And stay tuned for Part 2 of this episode, where we'll be revisiting MLSecOps conversations surrounding governance, risk, and compliance; model provenance; and Trusted AI. Thanks for listening.

Chapters:
0:00 Opening
0:25 Intro by Charlie McCarthy, MLSecOps Community Leader
2:15 S1E1 with Guest Disesdi Susanna Cox
5:08 S1E2 with Guest Dr. Florian Tramèr
10:16 S1E3 with Guest Pin-Yu Chen, PhD
13:18 S1E5 with Guest Christina Liaghati, PhD
17:59 S1E6 with Guest Johann Rehberger
22:10 S1E10 with Guest Kai Greshake
27:14 S1E11 with Guest Shreya Rajpal
31:45 S1E12 with Guest Apostol Vassilev
36:36 End/Credits
