
Agents of Scale: Sunlight on Shadow AI: When Security Learns to Tinker, with Rob T. Lee from the SANS Institute on AI Risk
Dec 23, 2025
Rob T. Lee, Chief AI Officer at the SANS Institute, discusses the critical issue of shadow AI and its implications for security in organizations. He argues that a blanket 'no' on AI creates more risk than it prevents, and he proposes a 'lifeguard' approach to governance instead. Lee emphasizes the importance of hands-on training for executives and recommends micro-projects to build security skills. With practical strategies like accountability partners and regular reviews, he outlines how companies can experiment safely while embracing AI, turning policy from obstruction into opportunity.
AI Snips
Strict 'No' Policies Create Shadow AI
- Strict 'no' policies push employees toward unsanctioned shadow AI usage.
- That shadow usage creates more risk than guided experimentation inside the organization would.
Default To A Cautious Yes
- Flip the default from no to a cautious yes so employees feel safe to experiment.
- Treat security like a lifeguard: enable small experiments and watch, rather than ban and drive usage underground.
Reclaim The Tinkering Mindset
- Security teams must reclaim the original 'hacker' tinkering mindset to learn AI.
- Hands-on experimentation reveals real risks and leads to better protections than theory alone.
