Secure Talk Podcast

AI Coding Hype vs Reality: The 2025 AI Code Security Report with Chris Wysopal

Sep 9, 2025
Chris Wysopal, Chief Security Evangelist and co-founder of Veracode, shares his insights into the security risks posed by AI-generated code. He reveals a startling finding from Veracode's research: AI-generated code introduces security vulnerabilities about 45% of the time, roughly matching the rate of human coders, and he discusses how faster coding without adequate testing compounds that risk. Wysopal warns about inexperienced developers relying on AI tools, stresses the need for firm governance, highlights the limitations of AI in tackling complex coding issues, and calls for improved security frameworks.
AI Snips
INSIGHT

AI Matches Human Vulnerability Rate

  • Veracode found AI-generated code contains vulnerabilities about 45% of the time, matching human error rates.
  • Faster coding with AI multiplies vulnerabilities entering production unless testing increases accordingly.
INSIGHT

Contaminated Training Data Drives Insecurity

  • Training data contaminated by insecure examples from Reddit and open-source repos teaches models insecure coding patterns at scale; a typical pattern is sketched after this list.
  • Models have improved at syntactic correctness but not at secure coding, so polished-looking output masks mistakes rather than fixing them.
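To make the insight above concrete, here is a minimal, hypothetical sketch (not taken from Veracode's report) of one of the most common insecure patterns that appears throughout public code and is easy for a model to reproduce: building a SQL query by string concatenation, shown next to the parameterized alternative. The table, schema, and function names are illustrative only.

```python
import sqlite3

# Tiny in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_insecure(name: str):
    # Insecure: user input is spliced directly into the SQL text.
    # Input such as "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_parameterized(name: str):
    # Safer: the driver binds the value, so it is treated as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_insecure("x' OR '1'='1"))       # returns every row
print(find_user_parameterized("x' OR '1'='1"))  # returns nothing
```

Both versions run without complaint, which is exactly the point of the insight: syntactic correctness alone says nothing about security.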
ANECDOTE

The L0pht Began In A Hat Factory

  • Chris recounts the L0pht starting in a hat factory that became their hacking workspace.
  • The group grew after the hat business closed and they took over the whole space.