Ty Greenhalgh, Healthcare Industry Principal at Claroty, specializes in healthcare cybersecurity and is dedicated to developing best practices in the field. In this discussion, he warns that the rapid adoption of AI in healthcare poses significant risks like model poisoning and adversarial prompts. Ty highlights the importance of creating a clear AI asset inventory, similar to past pitfalls with electronic health records. With upcoming regulations on AI, he urges hospitals to act now to safeguard patient safety and ensure systemic integrity.
35:29
forum Ask episode
web_stories AI Snips
view_agenda Chapters
menu_book Books
auto_awesome Transcript
info_circle Episode notes
insights INSIGHT
AI Training And Supply Chain Are Critical Risks
Data poisoning and supply-chain attacks create novel, clinical safety risks by corrupting AI training data or embedding backdoors in vendor updates.
These attacks can produce misdiagnosis, wrong treatment recommendations, or persistent access across many hospitals.
question_answer ANECDOTE
Email-Reading AI Fell For Prompt Injection
Ty recounts an email-reading AI that auto-inserted email content into prompts and could be socially engineered without a click.
The model obeyed embedded instructions like 'ignore all previous instructions' inserted into email text, showing real-world prompt-injection risk.
volunteer_activism ADVICE
Start With A Complete AI Inventory
Build and maintain a complete AI asset inventory before doing vulnerability management or risk reduction.
Use continuous discovery, prioritization and validation processes so you can effectively scope and remediate AI exposures.
Get the Snipd Podcast app to discover more snips from this episode
AI is rapidly becoming part of the healthcare system, powering everything from diagnostic tools and medical devices to patient monitoring and hospital operations. But while the potential is extraordinary, the risks are equally stark. Many hospitals are adopting AI without the safeguards needed to protect patient safety, leaving critical systems exposed to threats that most in the sector have never faced before.
In this episode of Tech Talks Daily, I speak with Ty Greenhalgh, Healthcare Industry Principal at Claroty, about why healthcare’s AI rush could come at a dangerous cost if security does not keep pace. Ty explains how novel threats like adversarial prompts, model poisoning, and decision manipulation could compromise clinical systems in ways that are very different from traditional cyberattacks. These are not just theoretical scenarios. AI-driven misinformation or manipulated diagnostics could directly impact patient care.
We explore why the first step for hospitals is building a clear AI asset inventory. Too many organizations are rolling out AI models without knowing where they are deployed, how they interact with other systems, or what risks they introduce. Ty draws parallels with the hasty adoption of electronic health records, which created unforeseen security gaps that still haunt the industry today.
With regulatory frameworks like the UK’s AI Act and the EU’s AI regulation approaching, Ty stresses that hospitals cannot afford to wait for legislation. Immediate action is needed to implement risk frameworks, strengthen vendor accountability, and integrate real-time monitoring of AI alongside legacy devices. Only then can healthcare organizations gain the trust and resilience needed to safely embrace the benefits of AI. This is a timely conversation for leaders across healthcare and cybersecurity. The sector is on the edge of an AI revolution, but the choices made now will determine whether that revolution strengthens patient care or undermines it.
You can learn more about Claroty’s approach to securing healthcare technology at claroty.com.