
The MLSecOps Podcast

Latest episodes

Oct 18, 2023 • 40min

Cybersecurity of Tomorrow: Exploring the Future of Security and Governance for AI Systems; With Guest: Martin Stanley, CISSP

*This episode is also available in video format! Click to watch the full YouTube video.*

Welcome to Season 2 of The MLSecOps Podcast! In this episode, we are joined by Martin Stanley, CISSP, Strategic Technology Branch Chief at the Cybersecurity and Infrastructure Security Agency (CISA), to celebrate 20 years of Cybersecurity Awareness Month and to hear his thoughtful insights on CISA initiatives, the agency's partnership with the National Institute of Standards and Technology (NIST) to promote adoption of the NIST AI Risk Management Framework, AI security and governance, and much more. We are grateful to Martin for joining us for this enlightening talk!

Thanks for checking out The MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI's ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
Sep 21, 2023 • 42min

AI/ML Security in Retrospect: Insights from Season 1 of The MLSecOps Podcast (Part 2)

*This episode is also available in video format! Click to watch the full YouTube video.*

Welcome back, everyone, to The MLSecOps Podcast. We're thrilled to have you with us for Part 2 of our two-part season finale as we wrap up Season 1 and look forward to an exciting and revamped Season 2.

In this two-part season recap, we've been revisiting some of the most captivating discussions from our first season, offering an overview of essential topics in AI and machine learning security.

Part 1 of this series focused on topics like adversarial machine learning, ML supply chain vulnerabilities, and red teaming for AI/ML. Here in Part 2, we've handpicked standout moments from Season 1 episodes covering categories such as model provenance; governance, risk, and compliance; and Trusted AI. Our wonderful guests delve into topics like defining responsible AI, bias detection and prevention, model fairness, AI audits, incident response plans, privacy engineering, and the significance of data in MLSecOps.

These episodes are a testament to the expertise and insights of our fantastic guests, and we're excited to share their wisdom with you once again. Whether you're a long-time listener or joining us for the first time, there's something here for everyone. If you missed the full-length versions of any of these thought-provoking discussions, or simply want to revisit them, you can find links to the full episodes and transcripts on our website at www.mlsecops.com/podcast.

Chapters:
0:00 Opening
0:25 Intro by Charlie McCarthy
2:29 S1E9 with Guest Diya Wynn
6:32 S1E4 with Guest Dr. Cari Miller, CMP, FHCA
11:03 S1E17 with Guest Nick Schmidt
16:46 S1E7 with Guest Shea Brown, PhD
22:06 S1E8 with Guest Patrick Hall
26:12 S1E14 with Guest Katharine Jarmul
32:01 S1E13 with Guest Jennifer Prendki, PhD
36:44 S1E18 with Guest Rob van der Veer
Sep 19, 2023 • 37min

AI/ML Security in Retrospect: Insights from Season 1 of The MLSecOps Podcast (Part 1)

*This episode is also available in video format! Click to watch the full YouTube video.*

Welcome to the final episode of the first season of The MLSecOps Podcast, brought to you by the team at Protect AI.

In this two-part episode, we look back at some favorite highlights from a season in which we dove deep into machine learning security operations. This first part revisits clips on adversarial machine learning, including how malicious actors can use AI to fool machine learning systems into making incorrect decisions; supply chain vulnerabilities; and red teaming for AI/ML, including how security professionals might simulate attacks on their own systems to detect and mitigate vulnerabilities.

If you're new to the show, or if you could use a refresher on any of these topics, this episode is for you: it's a great place for listeners to start their learning journey with us and work backwards based on individual interests. And when something in this recap piques your interest, be sure to check the transcript for links to the full-length episodes each clip came from. You can visit the website and read the transcripts at www.mlsecops.com/podcast.

So now, we invite you to sit back, relax, and enjoy this Season 1 recap of some of the most important MLSecOps topics of the year. And stay tuned for Part 2 of this episode, where we revisit MLSecOps conversations on governance, risk, and compliance; model provenance; and Trusted AI. Thanks for listening.

Chapters:
0:00 Opening
0:25 Intro by Charlie McCarthy, MLSecOps Community Leader
2:15 S1E1 with Guest Disesdi Susanna Cox
5:08 S1E2 with Guest Dr. Florian Tramèr
10:16 S1E3 with Guest Pin-Yu Chen, PhD
13:18 S1E5 with Guest Christina Liaghati, PhD
17:59 S1E6 with Guest Johann Rehberger
22:10 S1E10 with Guest Kai Greshake
27:14 S1E11 with Guest Shreya Rajpal
31:45 S1E12 with Guest Apostol Vassilev
36:36 End/Credits
Sep 5, 2023 • 29min

A Holistic Approach to Understanding the AI Lifecycle and Securing ML Systems: Protecting AI Through People, Processes & Technology; With Guest: Rob van der Veer

Joining us for the first time as a guest host is Protect AI's CEO and founder, Ian Swanson. Ian is joined this week by Rob van der Veer, a pioneer in AI and security. Rob gave a presentation at Global AppSec Dublin earlier this year called "Attacking and Protecting Artificial Intelligence," which was a large inspiration for this episode. In it, Rob discusses the lack of security considerations and processes in AI production systems compared to traditional software development, and the unique challenges of building security into AI and machine learning systems.

Together in this episode, Ian and Rob dive into practical threats to ML systems, the transition from MLOps to MLSecOps, the [upcoming] ISO 5338 standard on AI engineering, and what organizations can do if they are looking to mature their AI/ML security practices.

This is a great dialogue and exchange of ideas between two deeply knowledgeable people in this industry. Thank you to Ian and to Rob for joining us on The MLSecOps Podcast this week.
Aug 18, 2023 • 36min

ML Model Fairness: Measuring and Mitigating Algorithmic Disparities; With Guest: Nick Schmidt

Nick Schmidt, the Chief Technology and Innovation Officer at SolasAI, discusses the role of fairness in AI/ML, highlighting real-life examples of bias and disparity in machine learning algorithms. He emphasizes the importance of model governance, accountability, and ownership in ensuring fairness. The podcast explores algorithmic fairness issues, consequences of biased algorithms, and the need for human involvement. Nick also offers advice for organizations assessing their AI security risk and advocates for seeking outside help when implementing fairness in ML models.
Aug 17, 2023 • 35min

Exploring AI/ML Security Risks: At Black Hat USA 2023 with Protect AI

AI security experts from Protect AI discuss the state of AI/ML security: open-source risks, threats unique to deploying AI/ML systems, the general lack of understanding of those threats and the need for more data, and how to secure AI and ML systems through threat modeling and other proactive measures.
Aug 3, 2023 • 39min

Everything You Need to Know About Hacker Summer Camp 2023

Welcome back to The MLSecOps Podcast for this week's episode, "Everything You Need to Know About Hacker Summer Camp 2023." This week, our show is hosted by Protect AI's Chief Information Security Officer, Diana Kelley, who talks with two more longtime security experts, Chloé Messdaghi and Dan McInerney, about all things related to what the security research community fondly refers to as Hacker Summer Camp. The group discusses the various events held throughout this exciting week in Las Vegas, including what to expect at Black Hat [USA 2023] and DEF CON [31].

(1:21) What is Hacker Summer Camp Week, and what are the various events and cons that take place during that time?
(3:58) It's my first time attending Black Hat, DEF CON, Hacker Summer Camp Week, etc.: where can I find groups to attend with, or help navigating where to go and which events to attend?
(9:53) If it's my first time attending Black Hat, what other advice is there for me? What should I be thinking about?
(13:25) If I attend Black Hat, does that mean I'm automatically able to attend DEF CON, or how does that work? (TL;DR: separate passes are needed for each event.)
(14:14) Are certain personas more welcome at specific conferences?
(15:49) What are some interesting panel talks we should know about?
(20:53) A couple of other conferences also take place during Summer Camp Week: BSides Las Vegas, Squadcon, and the Diana Initiative. What are those? When do they take place? Can I go to all of them?
(23:26) What AI/ML security trends are happening? What should I look for at Black Hat and DEF CON this year in terms of talks and research?
(28:55) How can I determine whether a particular talk is going to be worth my time?
(32:54) Any other tips on staying healthy and safe (both physically and electronically) throughout the week?
Jul 12, 2023 • 47min

Privacy Engineering: Safeguarding AI & ML Systems in a Data-Driven Era; With Guest Katharine Jarmul

In this episode, renowned data scientist Katharine Jarmul discusses data privacy and security risks in ML models. The conversation touches on topics such as OpenAI's ChatGPT, GDPR, challenges organizations face, privacy by design, and reputational risk. Katharine emphasizes the need for auditability, consent questions, and population selection, as well as promoting a culture of privacy champions. Building models in a secure and private way is crucial, and listeners have a chance to win Katharine's book on practical data privacy.
Jun 21, 2023 • 35min

The Intersection of MLSecOps and DataPrepOps; With Guest: Jennifer Prendki, PhD

On this week's episode of The MLSecOps Podcast, we have the pleasure of hearing from Dr. Jennifer Prendki, founder and CEO of Alectio - The DataPrepOps Company. Alectio's name blends the acronym "AL," for Active Learning, with "lectio," the Latin word for "selection."

In this episode, Dr. Prendki defines DataPrepOps, describes how it contrasts with MLOps, and explains how DataPrepOps intersects with MLSecOps best practices. She also discusses data quality, security risks in data science, and the role that data curation plays in mitigating security risks in ML models.
Jun 14, 2023 • 31min

The Evolved Adversarial ML Landscape; With Guest: Apostol Vassilev, NIST

In this episode, we explore the National Institute of Standards and Technology (NIST) white paper, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, co-authored by our guest for this conversation, Apostol Vassilev, NIST Research Team Supervisor. Apostol provides insights into the motivations behind the initiative and the collaborative research methodology employed by the NIST team.

Apostol shares that this taxonomy and terminology report is part of the Trustworthy & Responsible AI Resource Center that NIST is developing. Additional tools in the resource center include NIST's AI Risk Management Framework (RMF), the OECD-NIST Catalogue of AI Tools and Metrics, and another crucial publication that Apostol co-authored, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.

The conversation then turns to the evolution of adversarial ML (AdvML) attacks, including prominent techniques like prompt injection, as well as other threats emerging amid the rise of large language model applications. Apostol discusses the changing AI and computing infrastructure and the scale of defenses it requires. Concluding the episode, Apostol shares thoughts on enhancing ML security practices and invites stakeholders to contribute to the ongoing development of the AdvML taxonomy and terminology white paper.

Join us for a thought-provoking discussion that sheds light on NIST's efforts to define the terminology of adversarial ML and develop a comprehensive taxonomy of concepts that will aid industry leaders in creating additional standards and guides.
