In this discussion, emergency physician and researcher Dr. Hashem Kareemi explores the integration of AI into emergency medical care and its potential to improve clinical decisions made under pressure. Dr. Kirsty Challen, a seasoned emergency medicine consultant, shares insights on the evolution of clinical decision support systems and the pressing need for ethical AI implementation. They tackle challenges in the current AI landscape, reflect on healthcare inequities, and emphasize the need for clinician involvement to ensure technology enhances patient care rather than complicates it.
INSIGHT: Skeptical Optimism Needed
Most AI-based clinical decision support tools remain in early development with limited real-world testing. Emergency physicians should approach AI with skeptical optimism, focusing on evidence and safety.
ANECDOTE: AI Inspiration from Family
Hashem Kareemi was inspired to explore AI in medicine by his wife's work on machine learning projects in finance. He recognized parallels between the complex data of finance and that of the emergency department.
INSIGHT: Gap Between AI Models and Use
There is a large volume of diverse AI clinical decision support studies, but few have progressed to clinical implementation. Advancing the field requires moving beyond model creation to effective clinical evaluation.
Reference: Kareemi et al. Artificial intelligence-based clinical decision support in the emergency department: a scoping review. AEM April 2025.
Date: April 15, 2025
Guest Skeptic: Dr. Kirsty Challen is a Consultant in Emergency Medicine at Lancashire Teaching Hospitals.
Case: It may be April, but as you sit in your departmental meeting with your emergency physician colleagues, you all note that the winter “surge” of patients hasn’t stopped. The decision fatigue at the end of shifts is as present as ever. “Surely AI will be making some of these decisions better than us soon?” says one of your colleagues, only half joking. Another colleague chips in that the medical students at the nearby university have been warned against using ChatGPT to create differential diagnoses, and you are left wondering whether AI might be “working” in the ED soon.
Background: The emergency department is a high-pressure environment. Clinical decisions must be made quickly and accurately, often with incomplete information. Clinical decision support (CDS) tools aim to address this challenge by offering real-time, evidence-informed recommendations that help clinicians make better diagnostic, prognostic, and therapeutic decisions.
CDS spans a wide spectrum, from traditional paper-based clinical decision rules to smartphone apps (e.g., MDCalc) to systems integrated directly into electronic health records (EHRs). These tools combine patient data with expert-driven algorithms or guidelines to inform care pathways. They can help estimate disease likelihood, risk-stratify patients, and even guide resource utilization such as imaging or admission decisions.
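To make the "knowledge-based" end of that spectrum concrete, here is a minimal, purely illustrative Python sketch. The variables, thresholds, and point values are hypothetical inventions for this example, not a validated decision rule; the point is only that the logic is fixed in advance by experts rather than learned from data.

```python
# Illustrative sketch of a "knowledge-based" CDS rule: fixed, expert-derived
# criteria applied to one patient's data. Thresholds and points are
# hypothetical and NOT a validated clinical score.

from dataclasses import dataclass

@dataclass
class EDPatient:
    age: int
    heart_rate: int           # beats per minute
    systolic_bp: int          # mmHg
    oxygen_saturation: float  # percent on room air

def toy_risk_score(patient: EDPatient) -> str:
    """Return a coarse risk category from fixed, rule-based thresholds."""
    points = 0
    if patient.age >= 65:
        points += 1
    if patient.heart_rate > 110:
        points += 1
    if patient.systolic_bp < 100:
        points += 2
    if patient.oxygen_saturation < 92:
        points += 2
    return "high risk" if points >= 3 else "lower risk"

print(toy_risk_score(EDPatient(age=72, heart_rate=118, systolic_bp=95, oxygen_saturation=93.0)))
# -> high risk
```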
Recent years have seen a growing interest in applying artificial intelligence (AI), particularly machine learning (ML), to CDS. Unlike traditional "knowledge-based" CDS that relies on literature-based thresholds, AI-driven tools derive patterns from large datasets ("big data") to identify associations and make predictions. These "non–knowledge-based" systems promise to augment human decision-making by uncovering insights that might be overlooked by clinicians or static rules.
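By contrast, a "non-knowledge-based" tool learns its decision boundary from historical data rather than applying preset thresholds. The sketch below uses entirely synthetic data and assumes scikit-learn purely as an illustrative modeling stack; it does not reflect any model evaluated in the review.

```python
# Illustrative sketch of a machine-learning CDS tool: a model derives patterns
# from historical data instead of using fixed thresholds. All data are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic "historical ED visits": age, heart rate, systolic BP, O2 saturation.
n = 5000
X = np.column_stack([
    rng.integers(18, 95, n),    # age
    rng.integers(50, 160, n),   # heart rate
    rng.integers(80, 180, n),   # systolic BP
    rng.uniform(85, 100, n),    # oxygen saturation
])

# Synthetic outcome (e.g., admission), loosely related to the features.
logit = (0.03 * (X[:, 0] - 60) + 0.02 * (X[:, 1] - 90)
         - 0.02 * (X[:, 2] - 120) - 0.15 * (X[:, 3] - 95))
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The decision boundary is learned from the data, not set by experts.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]

print(f"Hold-out AUC: {roc_auc_score(y_test, risk):.2f}")
```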
However, the majority of AI-based CDS (AI-CDS) tools remain in early development. Few have been rigorously tested in the ED, and even fewer have demonstrated improvements in patient outcomes or clinician workflow. Despite FDA clearance for some tools, evidence for real-world impact remains limited. Emergency physicians are right to approach this technology with skeptical optimism. We will need to balance the transformative potential of AI with a critical eye toward evidence, safety, and usability.
Clinical Question: (1) What is the current landscape of AI-CDS tools for prognostic, diagnostic, and treatment decisions for individual patients in the ED? and (2) What phase of development have these AI-CDS tools achieved?
Reference: Kareemi et al. Artificial intelligence-based clinical decision support in the emergency department: a scoping review. AEM April 2025.
Population: Studies of AI- or ML-based clinical decision support tools applied to individual patient care in the ED, published 2010–2023.
Excluded: Models that assessed a specific test (e.g., imaging) without clinical context; administrative or operational outcomes (e.g., patient census); models involving irrelevant data (e.g., vignettes or data not available following the emergency assessment); length of stay as a primary outcome; and studies without a full text or abstract in English.
Intervention: AI- or ML-based clinical decision support tools used during patient care in the ED.
Comparison: Not applicable for a scoping review. However, the review identified whether studies involved any comparison with usual care, clinician judgment, or non-AI tools.
Outcomes: The review didn’t focus on a single outcome but instead categorized studies by their targeted clinical decision task—diagnosis, prognosis, disposition, treatment,