

Vibe Coding: Four Security Nightmares in a Trenchcoat (with Susanna Cox), 2025.07.21
In this discussion, Susanna Cox, an AI security researcher with the OWASP AI Exchange, explores the controversial practice of 'vibe coding.' She critiques the security risks of leaning on large language models for programming, likening the practice to gambling and cataloguing the vulnerabilities it invites. The conversation pushes back on the hype around AI-assisted coding by stressing accountability and rigorous testing, while humorous anecdotes highlight the absurdities of common misconceptions about AI, making for a lively debate about responsible technology use and the future of coding.
LLM Coding Isn't A Shortcut
- Susanna Cox argues that debugging LLM-produced code is not inherently faster than writing and debugging your own.
- She warns that natural-language prompts cannot stand in for precise design-specification languages, and that relying on them introduces additional security risk.
Lock Down Agent Permissions
- Avoid letting agents alter your production codebase or run arbitrary tool calls.
- Do not grant agents broad Model Context Protocol (MCP) access or file-writing privileges without strict controls; see the allowlist sketch below.
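One way to enforce that boundary is to route every agent tool call through an explicit allowlist rather than granting blanket execution or file-write access. The sketch below is a minimal illustration, not any specific agent framework's API: the tool names (`read_file`, `run_tests`) and the `dispatch_tool_call` helper are hypothetical names invented for the example.

```python
# Minimal sketch of an allowlist gate for agent tool calls (all names hypothetical).
from typing import Any, Callable

# Only read-only, side-effect-free tools are permitted by default.
ALLOWED_TOOLS: dict[str, Callable[..., Any]] = {
    "read_file": lambda path: open(path, encoding="utf-8").read(),
    "run_tests": lambda: "tests run in a sandboxed CI job, never against production",
}

def dispatch_tool_call(name: str, **kwargs: Any) -> Any:
    """Refuse any tool the allowlist doesn't name explicitly."""
    tool = ALLOWED_TOOLS.get(name)
    if tool is None:
        raise PermissionError(f"agent requested non-allowlisted tool: {name!r}")
    return tool(**kwargs)

# An agent asking to write to the production codebase is rejected outright:
# dispatch_tool_call("write_file", path="src/app.py", content="...")  # -> PermissionError
```

The design choice here is deny-by-default: anything not named in the allowlist fails loudly, so new capabilities must be granted deliberately instead of being discovered by the agent.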
More Code Means More Review Work
- Increasing code churn via LLMs multiplies the review burden and accumulates technical debt.
- Claimed "10x" productivity gains collide with the reality that someone must still inspect many more commits; the back-of-envelope sketch below makes the tension concrete.
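To make that tension concrete, here is a back-of-envelope calculation. Every number in it is an assumption chosen for illustration, not data from the episode.

```python
# Back-of-envelope: generated code volume vs. a fixed review budget.
# Every number below is an assumption for illustration, not a measurement.
authored_loc_per_day = 200     # lines a developer writes by hand per day
llm_churn_multiplier = 5       # assumed increase in generated/changed lines
review_loc_per_day = 400       # lines one reviewer can carefully inspect per day

generated = authored_loc_per_day * llm_churn_multiplier
reviewers_needed = generated / review_loc_per_day
print(f"{generated} LOC/day generated -> {reviewers_needed:.1f} reviewers per author")
# Output: 1000 LOC/day generated -> 2.5 reviewers per author. The claimed "10x"
# output doesn't eliminate the cost; it relocates it from writing code to reviewing it.
```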