The MLSecOps Podcast cover image

Breaking and Securing Real-World LLM Apps

The MLSecOps Podcast

00:00

Can We Separate Logic and Input to Prevent Injection?

Rico and Javan highlight research on prepared prompts and design-by-separation approaches to reduce prompt injection risks.

Transcript
Play full episode

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app