

Context is king for secure, AI-generated code
Oct 7, 2025
Dimitri Stiliadis, CTO and co-founder of Endor Labs, dives into the evolving landscape of application security in the age of AI-generated code. He stresses the need for human oversight to manage vulnerabilities and the balance between security and efficiency. The conversation explores how AppSec scanning has adapted to the surge of AI-written code and why context matters in vulnerability management. Dimitri advocates integrating security at the platform level to enable safer AI adoption in software development.
From Research To Building AppSec Tools
- Dimitri described moving from Bell Labs research to founding Endor Labs, driven by frustration with noisy AppSec tools.
- He built Endor Labs to reduce irrelevant alerts and focus engineers on critical risks.
Freedom Versus Guardrail Tradeoff
- Generative AI tools fall into two categories: general-purpose agents with broad freedom and opinionated frameworks with tight constraints.
- The constrained tools often produce safer, more consistent code because they enforce structure and secure defaults.
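As a minimal sketch of what a "secure default" can look like in practice (my illustration, not from the episode): a constrained data-access helper that only issues parameterized queries, next to the string-built variant a less constrained generator might emit. The table and function names are hypothetical.

```python
import sqlite3

# Secure default: the helper only accepts placeholders plus bound parameters,
# so user-supplied values are never interpolated into the SQL text.
def find_user(conn: sqlite3.Connection, email: str):
    cur = conn.execute("SELECT id, email FROM users WHERE email = ?", (email,))
    return cur.fetchone()

# For contrast: the pattern a general-purpose agent can emit when nothing
# constrains it (intentionally unsafe -- vulnerable to SQL injection).
def find_user_unsafe(conn: sqlite3.Connection, email: str):
    cur = conn.execute(f"SELECT id, email FROM users WHERE email = '{email}'")
    return cur.fetchone()
```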
Training Data Limits Make Feedback Crucial
- General-purpose agents can produce both good and bad code because training data mixes quality levels and is not fully up-to-date.
- Pairing agents with proper tools and feedback drastically improves outputs over time.
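A rough sketch of that pairing (assumed names, not Endor Labs' actual pipeline): a generate-scan-repair loop in which a scanner's findings are fed back into the next generation attempt. `generate_code` and `scan_for_issues` stand in for an LLM call and an AppSec scanner.

```python
from typing import Callable, List

def generate_with_feedback(
    generate_code: Callable[[str], str],          # prompt -> candidate code
    scan_for_issues: Callable[[str], List[str]],  # code -> scanner findings
    prompt: str,
    max_rounds: int = 3,
) -> str:
    """Generate code, scan it, and feed findings back until clean or out of budget."""
    code = generate_code(prompt)
    for _ in range(max_rounds):
        findings = scan_for_issues(code)
        if not findings:
            break  # the scanner found nothing; accept this version
        # Fold the findings into the prompt so the next attempt can address them.
        prompt = prompt + "\n\nFix these issues:\n" + "\n".join(findings)
        code = generate_code(prompt)
    return code
```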