
Equity: The multibillion-dollar AI security problem enterprises can't ignore
Jan 14, 2026
Rick Caccia, CEO of Witness AI, and Barmak Meftah, co-founder of Ballistic Ventures, dig into the pressing problem of AI security in the enterprise. They discuss how AI agents unintentionally leak sensitive data and why traditional cybersecurity measures fall short. The conversation sizes up a staggering potential market for AI security, projected to be worth up to $1.2 trillion by 2031, and shares real-world examples of rogue AI agents, including threats of blackmail, underscoring the need for effective guardrails in AI deployment.
AI Snips
AI Risk Is Layered, Not Isolated
- AI adoption creates layered risks, from data leakage to rogue agents, that require a unified approach.
- Rick Caccia frames the problem as a single challenge: adopt AI safely with observability, control, and guardrails.
Layer Guardrails Across The Stack
- Put guardrails at multiple places: block prompts, limit user requests, and restrict tool access for agents.
- Control decisions at the prompt, LLM workflow, and tool-access layers to prevent unwanted actions; a sketch of this layering follows below.
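
A minimal sketch of this layered approach in Python, assuming a regex blocklist at the prompt layer, a per-user rate limit at the request layer, and a tool allowlist at the agent layer. Every name here (GuardrailPipeline, BLOCKED_PATTERNS, ALLOWED_TOOLS, check_request) is illustrative, not WitnessAI's actual API.

```python
import re
import time
from collections import defaultdict

BLOCKED_PATTERNS = [r"\bpassword\b", r"\bssn\b"]  # prompt-layer blocklist
MAX_REQUESTS_PER_MINUTE = 10                      # request-layer limit
ALLOWED_TOOLS = {"search_docs", "summarize"}      # tool-access allowlist

class GuardrailPipeline:
    def __init__(self):
        self._request_log = defaultdict(list)  # user -> request timestamps

    def check_prompt(self, prompt: str) -> bool:
        """Layer 1: block prompts that match sensitive-data patterns."""
        return not any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

    def check_rate(self, user: str) -> bool:
        """Layer 2: limit how many requests a user can make per minute."""
        now = time.time()
        recent = [t for t in self._request_log[user] if now - t < 60]
        self._request_log[user] = recent
        if len(recent) >= MAX_REQUESTS_PER_MINUTE:
            return False
        self._request_log[user].append(now)
        return True

    def check_tool(self, tool_name: str) -> bool:
        """Layer 3: restrict which tools the agent may invoke."""
        return tool_name in ALLOWED_TOOLS

    def check_request(self, user: str, prompt: str, tool_name: str) -> bool:
        """A request must pass every layer before the agent acts."""
        return (self.check_prompt(prompt)
                and self.check_rate(user)
                and self.check_tool(tool_name))

pipeline = GuardrailPipeline()
print(pipeline.check_request("alice", "summarize this report", "summarize"))      # True
print(pipeline.check_request("alice", "what is bob's password?", "search_docs"))  # False
```

Each layer can veto independently, so a prompt that clears the blocklist can still be stopped when the agent reaches for a tool outside its allowlist.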
Context Changes What's Allowed
- A retailer needs permissive AI rules for questions about hunting and weed killers that would alarm a bank.
- Witness builds intent-aware policies so domain context lets appropriate queries pass while blocking harmful ones; see the sketch below.
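
A minimal sketch of context-dependent policy, again in Python, assuming a naive keyword-based topic classifier; DOMAIN_POLICIES, TOPIC_KEYWORDS, and classify_topic are hypothetical stand-ins for the intent-aware system described, not its implementation.

```python
# Which topics each deployment domain permits (illustrative).
DOMAIN_POLICIES = {
    "retail":  {"allowed_topics": {"hunting", "herbicides", "general"}},
    "banking": {"allowed_topics": {"general"}},
}

# Keyword lists standing in for a real intent model.
TOPIC_KEYWORDS = {
    "hunting":    {"hunting", "rifle", "ammunition"},
    "herbicides": {"weed killer", "herbicide"},
}

def classify_topic(query: str) -> str:
    """Naive topic classifier; a production system would infer intent."""
    q = query.lower()
    for topic, keywords in TOPIC_KEYWORDS.items():
        if any(k in q for k in keywords):
            return topic
    return "general"

def is_allowed(query: str, domain: str) -> bool:
    """Same query, different verdicts depending on the deployment domain."""
    topic = classify_topic(query)
    return topic in DOMAIN_POLICIES[domain]["allowed_topics"]

print(is_allowed("Which weed killer works on dandelions?", "retail"))   # True
print(is_allowed("Which weed killer works on dandelions?", "banking"))  # False
```

The same query gets different verdicts depending on where the AI is deployed, which is the retailer-versus-bank distinction from the snip.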

