AI Security Podcast

AI Red Teaming & Securing Enterprise AI

May 16, 2025
Leonard Tang, Co-founder and CEO of Haize Labs, shares insights on AI red teaming and its impact on enterprise security. He discusses the evolution of red teaming methodologies influenced by AI technology. Leonard highlights vulnerabilities in multimodal AI applications and explains how adversarial attacks pose significant risks. He also delves into the necessity of precise output control for developing sophisticated exploits and the importance of cybersecurity professionals adapting their skills to meet the challenges of AI. Expect engaging real-world examples and practical mitigation strategies!
INSIGHT

Haize Labs' Evolution

  • Haize Labs started by red-teaming LLM providers, working with top AI labs.
  • Its focus has since shifted to testing AI applications at the domain and use-case level.
INSIGHT

Quality Assurance Over Traditional Red Teaming

  • Haize Labs focuses on assuring AI output quality more than on traditional security flaws.
  • It provides QA-style functional testing of AI responses rather than just adversary emulation.
ANECDOTE

AI Code of Conduct Example

  • Customers with an articulated AI code of conduct use Haize Labs to test for rule violations.
  • Those without clear rules rely on Haize Labs' work to define their AI safety and quality criteria.