MLOps.community

Guarding LLM and NLP APIs: A Trailblazing Odyssey for Enhanced Security // Ads Dawson // #190

Nov 14, 2023
Ads Dawson, Senior Security Engineer at Cohere, discusses securing large language models and NLP APIs. Topics include threat modeling, data breaches, defending against attacks, the OWASP Top 10 for LLM Applications, generative AI red teaming, and model hallucination. The conversation also covers practical learning, prompt injection, model monitoring, data drift, and the llmtop10.com project.
59:40

Podcast summary created with Snipd AI

Quick takeaways

  • Implementing security measures similar to those of traditional web applications is crucial for securing large language models (LLMs) and natural language processing (NLP) APIs.
  • Training data manipulation can lead to harmful outputs or unwanted behavior in LLM applications, necessitating the tracking, analysis, and safeguarding of training data sources.
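One way to safeguard training data sources, as the takeaway above suggests, is to record a content hash and origin for every document before it enters the training set, then re-verify later to detect manipulation. A minimal sketch, assuming a simple list-of-dicts corpus; the manifest format and helper names are illustrative, not Cohere's actual tooling:

```python
import hashlib

def fingerprint(text: str) -> str:
    # Stable content hash of one training document.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def build_manifest(documents: list[dict]) -> list[dict]:
    # documents: [{"source": url_or_path, "text": ...}, ...]
    # Record each source alongside its content hash at ingestion time.
    return [
        {"source": d["source"], "sha256": fingerprint(d["text"])}
        for d in documents
    ]

def verify(documents: list[dict], manifest: list[dict]) -> list[str]:
    # Return sources whose content no longer matches the recorded hash,
    # flagging possible tampering or poisoning of the training data.
    recorded = {m["source"]: m["sha256"] for m in manifest}
    return [
        d["source"]
        for d in documents
        if recorded.get(d["source"]) != fingerprint(d["text"])
    ]
```

A manifest like this makes training data auditable: any edit to a source document after ingestion shows up as a hash mismatch on the next verification pass.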

Deep dives

Traditional security practices apply to LLM applications

LLM applications require implementing security measures similar to those of traditional web applications, such as rate limiting, vulnerability patching, and data sanitization.
