

The Importance of Purple Teaming in AI Security - ft. Disesdi Susanna Cox
Episode Summary
In this episode, AI architect and security researcher Disesdi Susanna Cox explains the vast and complex attack surface of AI systems, highlighting the need for new security approaches like purple teaming and MLSecOps. Her insights help security leaders understand the unique risks and ethical challenges of AI, making this a must-listen for anyone responsible for securing modern AI-driven organizations.
Sponsors
Thank you to our sponsors who make this show possible.
→ Hampton North. Hampton North is the premium US-based cybersecurity search firm.
→ Sysdig. Secure the cloud the right way with agentic AI.
About the Guest
Disesdi Susanna Cox is an AI architect, patent holder, and consulting security researcher recognized for her work with the OWASP AI Exchange. Her frameworks and research have been adopted globally to help organizations understand and address the evolving security landscape in AI. Connect with Susanna to follow her latest insights and contributions:
LinkedIn: https://www.linkedin.com/in/disesdi/
Newsletter: https://disesdi.substack.com/
OWASP AI Exchange: https://owasp.org/www-project-ai-exchange/
Episode Breakdown
00:00 Navigating the AI Security Landscape
03:30 Understanding Adversarial Attacks in AI
06:06 The Importance of Purple Teaming in AI Security
08:49 Establishing MLSecOps for AI Systems
11:40 The Role of Chief AI Security Officer
13:03 Ethics and Risks of AI in Decision Making
26:07 The Future of Red Teaming in AI Security
Referenced Resources
- OWASP AI Exchange
- Disesdi Substack: The Adversarial Subspace Problem
- DO-178C (Guidance for Aerospace Software)
Subscribe & Share
If you found this episode useful, please share it and subscribe!
→ Spotify
→ YouTube
→ Website
Follow Your Hosts:
→ Conor Sherman: LinkedIn
→ Stuart Mitchell: LinkedIn