The MLSecOps Podcast

Breaking and Securing Real-World LLM Apps

Jul 16, 2025
Rico Komenda, an AI security specialist at Adesso SE, and Javan Rasokat from Sage share their expertise on securing LLM-integrated systems. They dive into prompt injection attacks, explaining their seriousness and potential risks. The duo discusses how vulnerabilities extend beyond models to data pipelines and APIs, highlighting the need for robust security measures. They also tackle the concept of AI firewalls and innovative strategies to enhance application security. Their insights on the evolving landscape of AI security are both timely and crucial.
Ask episode
AI Snips
Chapters
Transcript
Episode notes
ANECDOTE

How Both Entered AI Security

  • Rico described his transition from AppSec into AI security after engaging with the MLSecOps community and talks.
  • Javan recounted building a Sage LLM co-pilot which sparked his practical interest in AI security.
INSIGHT

Direct vs Indirect Prompt Injection

  • Direct prompt injection happens when a user embeds hidden instructions in input to make the model reveal or perform unauthorized actions.
  • Indirect prompt injection occurs when the model ingests external content (e.g., websites) that contains hidden instructions and then acts on them.
INSIGHT

The Prompt Injection Threat Has Evolved

  • The severity and priority for prompt injection have shifted as defenders accept some trade-offs and focus on higher-impact vulnerabilities like access and escalation.
  • Prompt injection remains a low-effort external entry point and thus still a meaningful risk despite reclassification.
Get the Snipd Podcast app to discover more snips from this episode
Get the app