Security Now (Audio) cover image

SN 1050: Here Come the AI Browsers - Scareware Blockers

Security Now (Audio)

00:00

Prompt Injection and Exfiltration Risks Explained

Simon Willison's explanation of prompt injection, why LLMs may follow in-content instructions, and guardrail limits.

Play episode from 03:06:07
Transcript

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app