*This episode is also available in video format! Watch the full episode on YouTube.*
Welcome back, everyone, to The MLSecOps Podcast. We’re thrilled to have you with us for Part 2 of our two-part season finale, as we wrap up Season 1 and look forward to an exciting and revamped Season 2.
In this two-part season recap, we've been revisiting some of the most captivating discussions from our first season, offering an overview of essential topics related to AI and machine learning security.
Part 1 of this series focused on topics like adversarial machine learning, ML supply chain vulnerabilities, and red teaming for AI/ML. Here in Part 2, we've handpicked some standout moments from Season 1 episodes that will take you on a tour through categories such as model provenance; governance, risk, & compliance; and Trusted AI. Our wonderful guests on the show delve into topics like defining responsible AI, bias detection and prevention, model fairness, AI audits, incident response plans, privacy engineering, and the significance of data in MLSecOps.
These episodes are a testament to the expertise and insights of our fantastic guests, and we're excited to share their wisdom with you once again. Whether you're a long-time listener or joining us for the first time, there's something here for everyone. If you missed the full-length versions of any of these thought-provoking discussions, or simply want to revisit them, you can find links to the full episodes and transcripts on our website at www.mlsecops.com/podcast.
Chapters:
0:00 Opening
0:25 Intro by Charlie McCarthy
2:29 S1E9 with Guest Diya Wynn
6:32 S1E4 with Guest Dr. Cari Miller, CMP, FHCA
11:03 S1E17 with Guest Nick Schmidt
16:46 S1E7 with Guest Shea Brown, PhD
22:06 S1E8 with Guest Patrick Hall
26:12 S1E14 with Guest Katharine Jarmul
32:01 S1E13 with Guest Jennifer Prendki, PhD
36:44 S1E18 with Guest Rob van der Veer
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform