Practical AI

Threat modeling LLM apps

Aug 22, 2024
Donato Capitella, Principal Security Consultant at WithSecure, specializes in threat modeling for AI applications. He discusses the critical need for threat modeling in the context of large language models (LLMs) and shares insights on vulnerabilities, such as prompt injection risks. Donato emphasizes the importance of validating outputs to maintain trustworthiness and explores innovative strategies for secure integration in AI systems. The conversation also touches on the exciting future of LLM technology and the role of ethical hackers in enhancing cybersecurity.
INSIGHT

LLM Application Security

  • Donato Capitella argues that asking whether an LLM is secure in isolation is meaningless, much like asking whether a knife is secure.
  • Instead, analyze the specific LLM application and its potential for misuse within its intended use case.
ADVICE

Building a Threat Model

  • Consider data sources, user inputs, and potential attacker actions when building an LLM threat model.
  • Identify vulnerabilities by analyzing how attackers might exploit the LLM within your application (see the sketch after this list).
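
To make that inventory concrete, here is a minimal Python sketch of such a threat-model worksheet. It is not from the episode: the application details (a support chatbot), the LLMThreatModel class, and the enumerate_threats helper are all hypothetical, illustrating the pairing of untrusted sources with downstream capabilities that the advice describes.

```python
# A hypothetical threat-model worksheet for an LLM application.
from dataclasses import dataclass, field


@dataclass
class LLMThreatModel:
    app_name: str
    # Content that ends up in the prompt but is not authored by your team.
    data_sources: list[str] = field(default_factory=list)
    # Channels an end user (and therefore an attacker) directly controls.
    user_inputs: list[str] = field(default_factory=list)
    # Actions the LLM's output can trigger downstream.
    capabilities: list[str] = field(default_factory=list)

    def enumerate_threats(self) -> list[str]:
        # Pair every untrusted source with every downstream capability and
        # ask: what happens if attacker-controlled content in that source
        # steers the model into using that capability?
        return [
            f"content in '{source}' steers the LLM into '{capability}'"
            for source in self.data_sources + self.user_inputs
            for capability in self.capabilities
        ]


model = LLMThreatModel(
    app_name="support-chatbot",
    data_sources=["knowledge-base articles", "retrieved web pages"],
    user_inputs=["chat messages", "uploaded files"],
    capabilities=["send email", "query customer DB", "render markdown"],
)
for threat in model.enumerate_threats():
    print(threat)
```

Each printed line is a candidate threat to review; most prompt-injection scenarios fall out of exactly this kind of source-to-capability pairing.
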
ADVICE

Untrusted LLM Output

  • Treat LLM output as untrusted, similar to handling emails from unknown senders.
  • Apply security controls to mitigate risks from this untrusted data, verifying and validating information before acting on it (a sketch follows below).
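
As one possible shape for such a control, here is a minimal sketch of validating model output before executing it. The JSON action format, the validate_llm_action function, and the allowlist are assumptions for illustration, not a specific library's API or Donato's exact method.

```python
# A hypothetical guardrail that validates LLM output before acting on it.
import json

# Allowlist of tools the application will execute, with expected argument
# types. Anything the model proposes outside this list is rejected.
ALLOWED_TOOLS = {
    "lookup_order": {"order_id": str},
}


def validate_llm_action(raw_output: str) -> dict:
    """Parse a model-proposed action and reject anything off-spec."""
    try:
        action = json.loads(raw_output)
    except json.JSONDecodeError as exc:
        raise ValueError(f"LLM output is not valid JSON: {exc}") from exc

    tool = action.get("tool")
    if tool not in ALLOWED_TOOLS:
        raise ValueError(f"tool {tool!r} is not on the allowlist")

    schema = ALLOWED_TOOLS[tool]
    args = action.get("args", {})
    if set(args) != set(schema):
        raise ValueError(f"arguments {sorted(args)} do not match {sorted(schema)}")
    for name, expected in schema.items():
        if not isinstance(args[name], expected):
            raise ValueError(f"argument {name!r} must be {expected.__name__}")
    return action


# Only validated actions reach the execution layer; anything else is dropped.
print(validate_llm_action('{"tool": "lookup_order", "args": {"order_id": "A1001"}}'))
```

The design choice here mirrors the email analogy: the model's output, like a message from an unknown sender, is parsed and checked against what the application explicitly permits rather than executed on trust.
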