

Threat modeling LLM apps (Practical AI #283)
Aug 22, 2024
Donato Capitella, an expert in threat modeling AI applications at WithSecure, dives into the complexities of LLM security. He discusses the importance of creating an LLM security canvas and addresses the risks of prompt injection attacks that can jeopardize user data. The conversation emphasizes the need for skepticism towards AI outputs and highlights strategies for threat detection and validation. Donato also explores the future of AI, including the innovative role of autonomous agents and the contributions of ethical hackers in enhancing cybersecurity.
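The episode stresses treating LLM outputs with skepticism and validating them before any downstream action. A minimal sketch of that idea in Python (the allowlist, field names, and helper are hypothetical illustrations, not techniques taken verbatim from the episode): parse the model's response as untrusted input and reject anything outside an explicit allowlist, so an injected instruction cannot trigger an unapproved action.

```python
import json

# Hypothetical allowlist of actions the application is willing to perform.
ALLOWED_ACTIONS = {"summarize", "translate", "classify"}

def validate_llm_output(raw: str) -> dict:
    """Treat the model's output as untrusted: parse and validate before acting."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"output is not valid JSON: {exc}") from exc
    action = data.get("action")
    if action not in ALLOWED_ACTIONS:
        # A prompt-injected instruction lands here instead of being executed.
        raise ValueError(f"action {action!r} is not on the allowlist")
    return data
```

The point is the direction of trust: the application decides what the model is allowed to request, rather than executing whatever the model emits.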
Chapters
Intro
00:00 • 4min
Navigating AI Security: Threat Models for LLMs
04:10 • 18min
Understanding Vulnerabilities in LLM Applications
21:56 • 3min
Validating Language Model Outputs
24:38 • 15min
Cybersecurity Challenges in LLMs: Strategies for Effective Threat Detection
39:25 • 4min
Navigating Validation Challenges in LLMs
43:19 • 7min
Exploring the Future of AI and Security Innovations
50:15 • 4min