Talk Python To Me

#521: Red Teaming LLMs and GenAI with PyRIT

69 snips
Sep 29, 2025
Tori Westerhoff leads operations for Microsoft's AI Red Team, focusing on high-risk generative AI systems, while Roman Lutz develops automation tools like PyRIT for enhanced adversarial testing. They discuss the growing threat landscape of prompt injection and the vulnerabilities facing LLM applications. Tori and Roman explore how automation can revolutionize red teaming, detailing their framework's ability to streamline testing and improve security. Insights on integrating human oversight and minimizing cognitive load highlight the delicate balance between automation and expert judgment.
Ask episode
AI Snips
Chapters
Transcript
Episode notes
INSIGHT

Language As An Attack Surface

  • English is now an API and apps regularly consume untrusted text which creates a new attack surface.
  • Connecting models to tools or wild documents magnifies risk and requires fresh security thinking.
ANECDOTE

Scope Of Microsoft’s AI Red Team

  • Tori leads Microsoft's AI Red Team and describes testing high-risk generative AI across models and systems.
  • Her team covers traditional security plus AI-specific harms like dangerous capabilities and national security.
INSIGHT

Fast Pace Demands Continuous Safety

  • GenAI tooling evolves very rapidly and requires constantly rewriting approaches.
  • Engineers benefit most from generative tools, but safety and security must scale alongside capabilities.
Get the Snipd Podcast app to discover more snips from this episode
Get the app