
The Stack Overflow Podcast To write secure code, be less gullible than your AI
Nov 4, 2025
Ryan chats with Greg Foster, CTO of Graphite, a platform revolutionizing AI-assisted code review. They dive into the trust issues surrounding AI-generated code, emphasizing how it can lower accountability and increase security risks. Greg discusses the dangers of prompt-injection attacks and the importance of readability in code. He advocates for shorter PRs and better tooling to maintain code quality amidst rapid AI changes. Their conversation highlights that, while AI can assist, the role of security engineers remains indispensable.
AI Snips
Chapters
Transcript
Episode notes
Trust Erodes With AI-Generated Code
- AI-generated code lowers reviewer trust because there's no human accountability behind it.
- Increased volume of AI-driven PRs compounds the review bottleneck and risk.
LLMs Are Too Gullible
- LLMs are gullible and will follow malicious prompts that humans would reject.
- That gullibility enables prompt-injection attacks that can expose secrets or perform harmful actions.
Ship Small Stacked Pull Requests
- Break changes into small stacked PRs to keep reviewers engaged and focused.
- Use tooling to manage stacked changes, rebases, and parallel reviews for faster secure shipping.
