Tech Talks Daily

Claroty on Combating Model Poisoning and Adversarial Prompts

Aug 26, 2025
Ty Greenhalgh, Healthcare Industry Principal at Claroty, specializes in healthcare cybersecurity and is dedicated to developing best practices in the field. In this discussion, he warns that the rapid adoption of AI in healthcare poses significant risks like model poisoning and adversarial prompts. Ty highlights the importance of creating a clear AI asset inventory, similar to past pitfalls with electronic health records. With upcoming regulations on AI, he urges hospitals to act now to safeguard patient safety and ensure systemic integrity.
Ask episode
AI Snips
Chapters
Books
Transcript
Episode notes
INSIGHT

AI Training And Supply Chain Are Critical Risks

  • Data poisoning and supply-chain attacks create novel, clinical safety risks by corrupting AI training data or embedding backdoors in vendor updates.
  • These attacks can produce misdiagnosis, wrong treatment recommendations, or persistent access across many hospitals.
ANECDOTE

Email-Reading AI Fell For Prompt Injection

  • Ty recounts an email-reading AI that auto-inserted email content into prompts and could be socially engineered without a click.
  • The model obeyed embedded instructions like 'ignore all previous instructions' inserted into email text, showing real-world prompt-injection risk.
ADVICE

Start With A Complete AI Inventory

  • Build and maintain a complete AI asset inventory before doing vulnerability management or risk reduction.
  • Use continuous discovery, prioritization and validation processes so you can effectively scope and remediate AI exposures.
Get the Snipd Podcast app to discover more snips from this episode
Get the app