CyberWire Daily

AI's impact on business [CISOP]

28 snips
Dec 2, 2025
In a riveting discussion, Eric Nagel, a former CISO with a diverse background in electrical engineering and patent law, delves into the complexities of responsible AI. He contrasts traditional machine learning with the unpredictable nature of generative AI, emphasizing the need for new safeguards like AI firewalls. Eric shares practical strategies for smaller organizations to manage AI risks and the importance of developer accountability in deploying AI tools. He also explores the evolving regulatory landscape and the need for robust governance in AI initiatives.
Ask episode
AI Snips
Chapters
Transcript
Episode notes
ANECDOTE

Early Warning Ignored Before ChatGPT

  • Kim Jones recounts a 2018 leadership meeting where AI was declared the next revolution and security teams were unprepared.
  • Four years later, after ChatGPT's release, the same leaders scrambled to answer the questions she had raised earlier.
INSIGHT

Deterministic ML Versus Generative AI

  • Eric Nagel explains classic ML gives deterministic outputs while generative AI is nondeterministic and randomizes responses.
  • That randomness enables surprising, useful outputs but creates new operational risk for software teams.
ADVICE

Deploy An AI Firewall

  • Build an "AI firewall" that screens prompts and completions with ML modules for bias, prompt injection, code, and emojis.
  • Retrain those modules continually and supplement vendor safety with in-house controls before production use.
Get the Snipd Podcast app to discover more snips from this episode
Get the app