
The Data Exchange with Ben Lorica The Developer’s Guide to LLM Security
11 snips
Dec 18, 2025 Steve Wilson, Chief AI and Product Officer at Exabeam, dives into the complexities of securing Large Language Models and agent workflows. He highlights the unique risks of prompt injection and supply chain vulnerabilities that arise with democratized AI tools. Wilson discusses the importance of guardrails, the dangers of excessive agent authority, and lessons learned from web security mishaps. He also explores the concept of citizen developers and advocates for the OWASP GenAI Security Project to provide rapid community-driven guidance for safer AI practices.
AI Snips
Chapters
Transcript
Episode notes
LLMs Have Human-Like Attack Surfaces
- Large language models introduce human-like vulnerabilities such as deception and phishing-style attacks.
- Treat AI failures as social-engineering risks, not just software bugs.
Prioritize Three Core LLM Risks
- Assume prompt injection, supply-chain issues, and sensitive-data disclosure are top risks when deploying LLMs.
- Rethink architecture, testing, and data access before putting LLMs into production.
Vet Models And Integrate Supply-Chain Tools
- Vet model provenance and avoid blindly importing large model weights from unknown sources.
- Adopt supply-chain tooling and security analysis integrated into early coding workflows.

