
Breaking and Securing Real-World LLM Apps
The MLSecOps Podcast
00:00
Can We Separate Logic and Input to Prevent Injection?
Rico and Javan highlight research on prepared prompts and design-by-separation approaches to reduce prompt injection risks.
Transcript
Play full episode