The Generative AI Security Podcast

Are Your Red Teaming Efforts Giving Bad Actors An Advantage? GenAI Security

May 23, 2025
Join Disesdi Susanna Cox, an AI security expert and contributor to the OWASP AI Exchange, as she explores the complexities of AI security. She discusses the OWASP AI Exchange's pivotal role in cataloging security threats, reveals the mathematical limits of red teaming and how certain tests might inadvertently benefit bad actors, and examines the intersection of generative AI and predictive use cases, emphasizing the need for vigilance as AI security advances.
ANECDOTE

Security Roots Led To AI Career

  • Disesdi Susanna Cox described growing up in a security-focused family and transitioning from physical security to AI security around 2010.
  • She later retrained as a data scientist and worked in NLP, progressing to roles up to AI architect and purple-team focused work.
INSIGHT

OWASP AI Exchange Is A Practical Resource

  • The OWASP AI Exchange centralizes practical AI security threats and mitigations across the production lifecycle with contributions from practitioners.
  • The project links controls to further research and helps inform standards such as ISO/IEC requirements and the EU AI Act.
INSIGHT

Collaboration Beats Competition

  • The AI Exchange and the Gen AI Top 10 projects have complementary approaches and share findings to improve AI security overall.
  • Cross-project dialogue lets the teams sharpen each other's work and produce better community resources.