

EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams
May 19, 2025
Christine Sizemore, a Cloud Security Architect at Google Cloud, dives into the complexities of AI supply chain security. She highlights the stark differences between AI and traditional software supply chains, using engaging examples like the Suez Canal blockage. The discussion uncovers unexpected threats, such as data poisoning, and emphasizes the need for continuous security integration. Sizemore explores organizational pitfalls to avoid, humorously questions whether AI can secure itself, and shares practical strategies for safeguarding AI systems.
Episode notes
AI Supply Chain Analogies
- Christine compares the AI supply chain to fragile real-world supply chains, such as the Suez Canal blockage and car manufacturing.
- She illustrates AI threats like prompt injection with the example of the Kia car theft trend during the pandemic.
Security Risks Are Supply Chain Issues
- AI security risks such as data poisoning, model poisoning, and prompt injection are all fundamentally supply chain issues.
- Securing each component in the chain is essential to building trustworthy AI models.
AI Supply Chain Rhymes With Software
- Lessons from traditional software supply chains, including provenance, artifact integrity, and logging, apply strongly to AI supply chains.
- Unlike code, AI models are consumed in diverse ways, so establishing standardized production pipelines is more complex.