
To write secure code, be less gullible than your AI
The Stack Overflow Podcast
LLM judge and secondary evaluation for prompts
Greg proposes using trusted LLMs to evaluate prompts, add friction for risky actions, and apply existing sandbox patterns.
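The episode only sketches this idea, but the pattern can be illustrated in a few lines: a second, trusted model scores each prompt for risk, and high-risk actions are gated behind confirmation rather than executed directly. The sketch below is a minimal illustration, not Greg's implementation; `trusted_llm_judge` is a hypothetical stand-in (here a keyword heuristic) for a real call to a separate, trusted LLM.

```python
# Minimal sketch of the "LLM judge" pattern: before executing a
# prompt-driven action, ask a trusted second evaluator to score its
# risk, and add friction (require confirmation) when the score is high.

RISKY_MARKERS = ("delete", "drop table", "rm -rf", "transfer funds")

def trusted_llm_judge(prompt: str) -> float:
    """Hypothetical judge: return a risk score in [0, 1].
    A real system would call a separate, trusted LLM here; this
    keyword heuristic just stands in for that call."""
    text = prompt.lower()
    hits = sum(marker in text for marker in RISKY_MARKERS)
    return min(1.0, hits / 2)

def gate_action(prompt: str, threshold: float = 0.5):
    """Evaluate the prompt with the judge; risky prompts need confirmation."""
    score = trusted_llm_judge(prompt)
    if score >= threshold:
        # Add friction: do not execute, escalate to a human instead.
        return ("needs_confirmation", score)
    return ("allowed", score)

print(gate_action("summarize this article"))
print(gate_action("rm -rf / and drop table users"))
```

The same gate composes with the sandboxing Greg mentions: even an "allowed" action can still run inside a restricted environment, so the judge is one layer of defense rather than the only one.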
Transcript


