MLOps.community

Guarding LLM and NLP APIs: A Trailblazing Odyssey for Enhanced Security // Ads Dawson // #190

Nov 14, 2023
59:40
Ads Dawson, Senior Security Engineer at Cohere, discusses securing large language models and NLP APIs. Topics include threat modeling, data breaches, defending against attacks, OWASP Top 10 vulnerabilities, Generative AI Red Teaming, and model hallucination. He also covers practical learning, prompt injections, model monitoring, data drift, and the llmtop10.com project (the OWASP Top 10 for LLM Applications).

Podcast summary created with Snipd AI

Quick takeaways

  • Implementing security measures similar to those used for traditional web applications is crucial for securing large language models (LLMs) and natural language processing (NLP) APIs.
  • Training data manipulation can lead to harmful outputs or unwanted behavior in LLM applications, so training data sources need to be tracked, analyzed, and safeguarded, as sketched below.
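
The episode doesn't walk through specific tooling for this, but as a rough sketch, safeguarding training data sources might look like the hypothetical Python below: record a content hash and origin for each dataset, then re-verify before a training run so tampering (data poisoning) is detected. The file names and manifest format are illustrative assumptions, not Cohere's actual pipeline.

```python
"""Minimal sketch of training-data provenance tracking (hypothetical pipeline).

Record a content hash and origin for every training data source, then
re-verify before each training run so silent tampering is caught early.
"""
import hashlib
import json
from pathlib import Path

MANIFEST = Path("data_manifest.json")  # assumed location for the provenance record


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def register_source(path: Path, origin: str) -> None:
    """Record a dataset file's hash and origin in the manifest."""
    manifest = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    manifest[str(path)] = {"sha256": sha256_of(path), "origin": origin}
    MANIFEST.write_text(json.dumps(manifest, indent=2))


def verify_sources() -> list[str]:
    """Return the paths whose current hash no longer matches the manifest."""
    manifest = json.loads(MANIFEST.read_text())
    return [
        path
        for path, record in manifest.items()
        if sha256_of(Path(path)) != record["sha256"]
    ]


if __name__ == "__main__":
    # Assumes ./train.jsonl exists; both names are illustrative.
    register_source(Path("train.jsonl"), origin="internal-annotation-batch-7")
    tampered = verify_sources()
    if tampered:
        raise SystemExit(f"Training data changed since registration: {tampered}")
```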

Deep dives

Traditional security practices apply to LLM applications

LLM applications require security measures similar to those of traditional web applications, such as rate limiting, vulnerability patching, and data sanitization.
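
As a rough illustration of those web-app-style controls applied to an LLM endpoint, the hypothetical Python sketch below gates a (stubbed) model call behind a sliding-window rate limiter and a basic input sanitizer. The limits, regex, and function names are assumptions for illustration, not an implementation discussed in the episode.

```python
"""Minimal sketch of two 'traditional web app' controls in front of an LLM API:
per-client rate limiting and basic input sanitization (illustrative values).
"""
import re
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60           # assumed rate-limit window
MAX_REQUESTS_PER_WINDOW = 30  # assumed per-client budget
MAX_PROMPT_CHARS = 4_000      # assumed input-size cap

_request_log: dict[str, deque] = defaultdict(deque)


def allow_request(client_id: str) -> bool:
    """Sliding-window rate limiter: allow at most N requests per client per window."""
    now = time.monotonic()
    window = _request_log[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop timestamps outside the window
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False
    window.append(now)
    return True


def sanitize_prompt(prompt: str) -> str:
    """Reject oversized input and strip control characters before it reaches the model."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds maximum length")
    # Remove non-printable control characters that can hide injected instructions.
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", prompt)


def handle_generate(client_id: str, prompt: str) -> str:
    """Gate an LLM call behind the rate limiter and sanitizer (model call stubbed out)."""
    if not allow_request(client_id):
        return "429: rate limit exceeded"
    clean_prompt = sanitize_prompt(prompt)
    return f"[stubbed model response to {len(clean_prompt)} sanitized chars]"


if __name__ == "__main__":
    print(handle_generate("client-123", "Summarize the OWASP Top 10 for LLM apps."))
```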
