
The AI in Business Podcast
Copyright & Compliance for Enterprise AI: From Demos to Defensible - Nina Edwards of Prudential Insurance
Jan 22, 2026

Nina Edwards, Vice President of Emerging Technology and Innovation at Prudential Insurance, shares her expertise in AI compliance and governance. She highlights the risks posed by employee behavior with AI tools, advocating for structured licensing and provenance to safeguard against IP issues. Nina introduces the concept of instrumented sandboxes for safe experimentation, balancing speed with compliance through phased discovery. Her insights reveal how fostering a culture of awareness and regular audits can empower enterprises to innovate responsibly and effectively.
AI Snips
Everyday Behavior Drives Most AI Risk
- Everyday employee behavior, not malicious actors, creates the majority of AI-related compliance risk.
- Unvetted copying of code, customer data, and marketing content into public AI tools causes untracked IP and data exposure.
Samsung Example Shows Accidental Damage
- Nina cites the Samsung incident where engineers pasted proprietary code into a public chatbot, prompting a GenAI ban.
- That case shows how accidental sharing can force broad tool shutdowns and cause major business disruption.
Use Instrumented Sandboxes
- Create instrumented sandboxes that isolate experiments and enforce model and data controls.
- Log prompts, outputs, and telemetry, and apply automatic redaction and spend limits to prevent leakage and runaway costs (see the sketch after this list).
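
As a rough illustration of what that instrumentation could look like, here is a minimal Python sketch of a sandbox gateway that redacts prompts before they leave the sandbox, logs prompts and outputs to an append-only file, and blocks requests once a spend cap is hit. The names (`SandboxGateway`, `redact`, `call_model`), the regex patterns, and the flat per-call cost are illustrative assumptions, not details from the episode.

```python
import json
import re
import time

# Rough PII / secret patterns for illustration only; a real deployment
# would use a proper DLP or redaction service instead of ad hoc regexes.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED-KEY]"),
]


def redact(text: str) -> str:
    """Apply automatic redaction before anything leaves the sandbox."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text


class SandboxGateway:
    """Front door for experiments: logs prompts/outputs and enforces a spend cap."""

    def __init__(self, call_model, budget_usd: float, cost_per_call: float, log_path: str):
        self.call_model = call_model        # injected model client (hypothetical)
        self.budget_usd = budget_usd        # hard spend limit per experiment
        self.cost_per_call = cost_per_call  # simplistic flat-cost assumption
        self.spent = 0.0
        self.log_path = log_path

    def query(self, user: str, prompt: str) -> str:
        # Spend limit: refuse the call before money goes out the door.
        if self.spent + self.cost_per_call > self.budget_usd:
            raise RuntimeError("Sandbox spend limit reached; request blocked.")

        safe_prompt = redact(prompt)
        output = self.call_model(safe_prompt)
        self.spent += self.cost_per_call

        # Append-only telemetry: who asked what, what came back, spend so far.
        record = {
            "ts": time.time(),
            "user": user,
            "prompt": safe_prompt,
            "output": output,
            "spent_usd": round(self.spent, 4),
        }
        with open(self.log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

        return output


# Hypothetical wiring example:
# gateway = SandboxGateway(call_model=my_client, budget_usd=50.0,
#                          cost_per_call=0.02, log_path="sandbox.jsonl")
# answer = gateway.query(user="jdoe", prompt="Summarize this draft...")
```

In a real deployment the redaction step would typically call a DLP service and the cost accounting would come from the provider's token usage rather than a flat per-call estimate; the point here is only that isolation, logging, redaction, and spend limits can all live in one thin wrapper in front of the model.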
