Outcomes Rocket

Why AI Systems Fail When We Assume They Behave Like Software with Steve Wilson, Chief AI & Product Officer for Exabeam

Dec 18, 2025
Steve Wilson, Chief AI & Product Officer at Exabeam, dives into the complexities of AI security in healthcare. With a career rooted in software engineering, he highlights why AI systems cannot be treated like traditional software, discussing the risks of prompt injection and the need for continuous evaluation. Steve compares AI security to managing unpredictable employees, emphasizing dynamic training and monitoring. He also reflects on his journey through early Java days and startup culture, illustrating how modern AI is reshaping cybersecurity operations.
Ask episode
AI Snips
Chapters
Transcript
Episode notes
INSIGHT

AI Isn't Deterministic Software

  • AI systems are not repeatable like traditional software and thus can't be validated with one-time tests.
  • Treat AI behavior as unpredictable and design continuous evaluation and monitoring instead of single tests.
INSIGHT

AI Supply Chain Is Still A Big Risk

  • Supply chain risks persist in AI: component provenance, training data, and data grooming remain critical.
  • These are traditional software risks that still demand careful management in AI systems.
ADVICE

Continuously Test And Monitor Models

  • Continuously evaluate AI systems like employee security training with ongoing phishing-style tests.
  • Monitor model outputs and behaviors over time rather than relying on one-off validation.
Get the Snipd Podcast app to discover more snips from this episode
Get the app