
To write secure code, be less gullible than your AI
The Stack Overflow Podcast
AI gullibility and security incidents (starts at 03:47)
Greg discusses how LLMs can be tricked by malicious prompts, making them vulnerable to prompt-injection attacks and to shipping insecure code.