
The Skeptics Guide to Emergency Medicine

Latest episodes

Feb 22, 2025 • 22min

SGEM#468: Wide Open Monocytes – Using MDW to Diagnose Sepsis

Reference: Agnello et al. Monocyte distribution width (MDW) as a screening tool for early detecting sepsis: a systematic review and meta-analysis. Clin Chem Lab Med 2022; 60(5):786-792.

Date: February 21, 2025

Guest Skeptic: Dr. Aaron Skolnik is an Assistant Professor of Emergency Medicine at the Mayo Clinic Alix School of Medicine and Vice Chair of Critical Care Medicine at Mayo Clinic Arizona. He is board-certified in Emergency Medicine, Medical Toxicology, Addiction Medicine, Internal Medicine-Critical Care, and Neurocritical Care. Aaron is a full-time multidisciplinary intensivist. He is the Medical Director of Respiratory Care for Mayo Clinic Arizona and enjoys serving as medical student clerkship director for critical care.

Case: A 62-year-old male presents to the Emergency Department (ED) with fever, confusion, and shortness of breath. His symptoms began two days ago, starting with generalized malaise and chills, followed by progressive dyspnea and mental status changes. The patient also reports decreased urine output over the past day. He has a history of Type 2 diabetes mellitus, hypertension, and chronic kidney disease (stage 3). His home medications include metformin, lisinopril, and amlodipine, though he hasn’t taken his antihypertensives for the last two days. He is tachycardic, tachypneic, and febrile, with a BP of 92/58 and an oxygen saturation of 91% on room air. Physical examination reveals diffuse crackles bilaterally, and chest x-ray shows bilateral infiltrates consistent with pneumonia. His WBC is elevated at 23,000 with a left shift. Lactate is 3.8 mmol/L, and blood cultures are pending.

Background: Rapid and accurate diagnosis of sepsis is critical, as early intervention can significantly reduce patient mortality. However, diagnosing sepsis early remains challenging because it presents with nonspecific symptoms.
Newer biomarkers, such as procalcitonin and lactate, offer some utility but are not sufficiently reliable on their own. Recently, monocyte distribution width (MDW) has emerged as a promising biomarker in this diagnostic challenge. MDW, an indicator of variability in monocyte size, can be rapidly assessed as part of an automated complete blood count differential and may flag potential sepsis early. This parameter has the advantage of being available very quickly without additional blood draws, which could be helpful in the fast-paced ED setting. Current evidence suggests MDW could be used alongside existing clinical criteria to help screen for sepsis risk and guide further assessment and treatment, even in patients where sepsis is not immediately suspected.

Clinical Question: What is the diagnostic accuracy of monocyte distribution width for the early detection of sepsis?

Population: Adults admitted to various clinical settings, particularly the ED, intensive care unit (ICU), and one infectious diseases unit (I don’t have one of those Ken – do you?).

Excluded: Pediatric patients, COVID-19 patients, non-diagnostic studies (i.e., studies evaluating only the prognostic role of MDW), non-English studies, and case reports and reviews.

Intervention: Monocyte distribution width (MDW)

Comparison: Sepsis-2 and Sepsis-3 diagnostic criteria. Sepsis-2 is based on systemic inflammatory response syndrome (SIRS) markers, while Sepsis-3 relies on organ dysfunction as assessed by the Sequential Organ Failure Assessment (SOFA) score.

Outcome: Diagnostic accuracy of MDW for early sepsis detection, measured by pooled sensitivity, specificity, and likelihood ratios.

Type of Study: Systematic review and meta-analysis of diagnostic accuracy
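As a refresher on how the outcome measures above fit together: positive and negative likelihood ratios can be derived directly from sensitivity and specificity. Here is a minimal Python sketch; the numbers plugged in are illustrative placeholders, not the pooled estimates reported by Agnello et al.

```python
def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """Derive the positive and negative likelihood ratios of a
    diagnostic test from its sensitivity and specificity."""
    lr_pos = sensitivity / (1 - specificity)   # LR+ = sens / (1 - spec)
    lr_neg = (1 - sensitivity) / specificity   # LR- = (1 - sens) / spec
    return lr_pos, lr_neg

# Hypothetical illustrative values, NOT the paper's pooled estimates
lr_pos, lr_neg = likelihood_ratios(sensitivity=0.80, specificity=0.75)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")  # LR+ = 3.20, LR- = 0.27
```

As a rough rule of thumb, an LR+ above 10 or an LR- below 0.1 shifts post-test probability substantially; values near 1 shift it very little.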
Feb 15, 2025 • 0sec

SGEM Xtra: Rock, Robot Rock – AI for Clinical Research

Date: February 11, 2025

Guest Skeptic: Dr. Ross Prager is an Intensivist at the London Health Sciences Centre and an adjunct professor at Western University. His expertise in critical care medicine is complemented by his research interests in critical care ultrasound and evidence-based knowledge translation.

This is another SGEM Xtra. On today’s episode, we’re diving into a fascinating and evolving topic: how artificial intelligence (AI) shapes clinical research. AI has the potential to streamline many aspects of medical research, from study design to statistical analysis and even manuscript preparation. But as always, we need to approach these innovations skeptically. There are a lot of promises being laid on the shoulders of AI, and increasingly, it can be difficult to separate the hype from reality. I certainly believe that AI will change what clinical research looks like in the next decade, but at its core, it will be the synergy between researchers and technology that drives innovation, not either in isolation.

The easiest way to reflect on how AI might be used in clinical research is to think about the research lifecycle. Layered on top of this are themes like collaboration and team efficiency, security and privacy, and other general administrative efficiencies (accounting, meeting scheduling, email management).

Study inception and design
Protocol generation
Ethics application
Study facilitation and recruitment
Data extraction
Data analysis
Manuscript writing
Manuscript submission
Knowledge mobilization

Eleven Questions on Artificial Intelligence and Clinical Research

Listen to the SGEM Podcast to hear Dr. Prager’s responses to my eleven questions.

1. Designing a Study with AI – Formulating a PICO Question: Every good clinical study starts with a clear and well-defined research question. AI tools are now being used to help formulate the PICO (Population, Intervention, Control, Outcome). How can AI assist researchers in this first critical step?
2. Identifying Potential Study Participants from EMRs: One of the biggest challenges in research is identifying eligible patients. Traditionally, this has been done manually and has been a very time-intensive process (think medical students). How can AI help streamline this?

3. Determining the Most Important Patient-Oriented Outcome: Research should prioritize outcomes that matter to patients. How can AI help determine the most clinically meaningful and patient-centred outcomes for a study? In other words, can AI help us find the POO?

4. Estimating Effect Size and Sample Size Calculations: To conduct a well-powered study, researchers need to estimate the expected effect size and determine the required sample size. Can AI assist with these calculations?

5. AI for Statistical Analysis and Data Visualization: Once data is collected, the next step is analysis. How can AI assist with statistics and visualizing complex data?

6. AI-Assisted Manuscript Writing and Editing: Writing a research paper is a time-consuming process, especially for non-native English speakers. A friend of mine is a clinical researcher and editor for a major journal. They talk about knowing some brilliant researchers who cannot write or communicate well. Can AI help these people and improve the clarity and readability of their scientific manuscripts?

7. Verifying Citation Accuracy: We will be talking about the issue of inaccurate citations in the medical literature with Dr. Nick Peoples. His research reported that citations are not correct up to 25% of the time (reference). Concerns have been raised about AI-hallucinated citations. We want to make things better, not worse, by using AI. How can AI be used to ensure accuracy and prevent misinformation in referencing?

8. AI in Systematic Reviews and Meta-Analyses: Another form of clinical research is performing systematic reviews and meta-analyses.
Feb 1, 2025 • 39min

SGEM #467: Send me on my way…without Cervical Spine Imaging

Reference: Leonard JC et al. PECARN prediction rule for cervical spine imaging of children presenting to the emergency department with blunt trauma: a multicentre prospective observational study. Lancet Child Adolesc Health. June 2024.

Date: Oct 15, 2024

Guest Skeptic: Dr. Tabitha Cheng is a Southern California native and a board-certified emergency medicine physician who has also completed an EMS fellowship. The learning didn’t end there: she then completed another fellowship in pediatric emergency medicine at Harbor UCLA.

Case: An 8-year-old girl is brought in by EMS after a car accident. She was seat-belted in the backseat of the family’s car when they were hit from the side by another vehicle that ran a red light. The airbags deployed, and the car spun a few times. When EMS arrived on the scene, they found both parents unconscious, and the girl appeared slightly dazed and confused. EMS placed her in a cervical collar and brought her to the emergency department (ED). On your examination, you see she is scared but answering questions appropriately. She does have some abrasions from her seatbelt and complains of pain around her ankle. The rest of her exam is unremarkable. After your evaluation, you are informed that her grandmother has arrived to be with the girl while her other family members are being treated. She looks at the contraption on the girl’s neck and asks you, “Is she okay? Is something wrong with her neck? Does she need an X-ray or CT scan?”

Background: Pediatric cervical spine (C-spine) injuries are uncommon (1-3% of blunt trauma). These injuries typically result from blunt trauma caused by motor vehicle accidents, falls, sports injuries, or physical abuse. Although C-spine injuries represent a small fraction of pediatric trauma cases, their potential severity makes accurate and timely diagnosis critical.
Younger kids tend to have big lollipop heads, which makes them more prone to injury in the upper cervical spine compared to adults (their fulcrum is higher). It is also sometimes difficult to get a scared child to give an accurate history or cooperate with an exam. Many of us use CT or X-rays to help detect cervical spine injuries in this population.

Clinicians working in EDs must strike a balance between ensuring they do not miss these rare but serious injuries and avoiding unnecessary imaging, particularly computed tomography (CT), which exposes children to ionizing radiation. Given the sensitivity of developing tissues to radiation, especially in younger children, avoiding unnecessary imaging is a high priority in pediatric care.

Traditional diagnostic approaches often lead to the overuse of imaging tools, like CT scans and X-rays, even in low-risk children. This has prompted a movement toward more refined, evidence-based methods for identifying pediatric C-spine injuries, particularly through the development of clinical decision rules (CDRs). CDRs are designed to assist clinicians in making more accurate decisions about when imaging is truly necessary by identifying key clinical predictors of serious injuries.

The Pediatric Emergency Care Applied Research Network (PECARN) has been instrumental in developing one of the most widely recognized CDRs for pediatric C-spine injuries. Based on large, multicenter studies, this tool identifies critical risk factors that signal the need for imaging, such as altered mental status, focal neurological deficits, and certain mechanisms of injury. The PECARN rule, validated in clinical settings, has demonstrated high sensitivity in detecting C-spine injuries while also reducing unnecessary imaging.

There are multiple CDRs for identifying pediatric C-spine injuries besides PECARN. The SGEM recently covered the Cochrane systematic review on pediatric CDRs on SGEM #441.
Clinical Question: Can the new PECARN clinical prediction rule (tool) guide imaging decisions in detecting pediatric cervical spine injuries...
Jan 25, 2025 • 25min

SGEM#466: I Love ROC-n-Roll…But Not When It’s Hacked

Date: January 9, 2025

Reference: White et al. Evidence of questionable research practices in clinical prediction models. BMC Med 2023.

Guest Skeptic: Dr. Jestin Carlson is the Program Director for the AHN-Saint Vincent EM Residency in Erie, Pennsylvania. He is the former National Director of Clinical Education for US Acute Care Solutions and an American Red Cross Scientific Advisory Council member.

We have had the pleasure of both working for the Legend of EM, Dr. Richard Bukata. He is an amazing educator and a great human being. He has been involved in medical education for over 40 years. He helped create the Emergency Medicine and Acute Care course, a ‘year-in-review’ course where the faculty review over 200 articles from the last year in a rapid-fire, tag-team format: one presents an article, the other provides additional commentary, and then they switch. Each article takes about 2-3 minutes. The faculty are amazing, and the course is held in some wonderful locations: Vail, Maui, New York City, New Orleans, Hilton Head, San Diego, and Key West. There is also a self-study option if you are not able to attend in person.

Case: You are working with a fourth-year medical student who is an avid listener to the Skeptics Guide to Emergency Medicine podcast. They recently listened to an episode examining a paper that used receiver operating characteristic (ROC) curves to determine the accuracy of a predictive model by looking at the area under the curve (AUC). The student knows from other SGEM podcasts that there is evidence of p-hacking in the medical literature and wonders if there have been similar instances with ROC curves. They ask you if there is any evidence of ‘ROC’ or ‘AUC’ hacking.

Background: To answer that young skeptic’s question, they must first understand ROC curves. The ROC curve is a tool used to evaluate the diagnostic performance of a test or prediction model.
The curve is graphed with the true positive rate (sensitivity) on the y-axis and the false positive rate (1 - specificity) on the x-axis at various threshold levels for classifying a test result as positive or negative. ROC curves help clinicians determine how well a test or model can differentiate between patients with and without a condition. A perfect test would have a point at the top-left corner of the graph (sensitivity = 1, specificity = 1). The area under the curve (AUC) is often used to summarize a prediction model's discriminatory capacity. A result of 1.0 indicates perfect discrimination, while an AUC of 0.5 suggests performance no better than chance, essentially a coin toss. By comparing the ROC curves of different tests or models, clinicians can identify which performs better in discrimination.

Interpretation of the AUC often hinges on thresholds. Values of 0.7, 0.8, and 0.9 are commonly labelled as “fair,” “good,” or “excellent.” These descriptors, while convenient, are arbitrary and lack scientific foundation. Their widespread use introduces a strong temptation for researchers to achieve “better” AUC values. This drive can lead to p-hacking, a questionable research practice in which investigators manipulate data or analyses to cross predefined thresholds. P-hacking is not exclusive to AUC but is a well-documented problem in broader research, particularly surrounding the 0.05 p-value significance threshold. In the context of AUC, p-hacking might include selectively reporting favourable results, re-analyzing data multiple times, or even tweaking model parameters to inflate values. Such practices risk misleading clinicians and compromising patient care by promoting overly optimistic models.

Understanding the prevalence and mechanisms of AUC-related p-hacking is vital for emergency physicians, who often rely on clinical prediction tools for critical decisions. As the use of these models grows, so does the importance of transparent and robust research practices.
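The discriminatory capacity described above has a useful probabilistic reading: the AUC equals the probability that a randomly chosen patient with the condition receives a higher model score than a randomly chosen patient without it (ties counting as half). A minimal, dependency-free Python sketch using made-up scores, purely for illustration:

```python
def auc_rank(pos_scores, neg_scores):
    """Compute the AUC via its rank interpretation: the probability that
    a randomly chosen positive case outscores a randomly chosen negative
    case, with ties counted as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Made-up model scores for illustration only
with_condition    = [0.9, 0.8, 0.7, 0.6]  # scores in patients with the condition
without_condition = [0.5, 0.6, 0.3, 0.2]  # scores in patients without it

print(auc_rank(with_condition, without_condition))  # → 0.96875
```

A model that assigns identical scores to everyone would come out at 0.5 under this definition, the coin-toss baseline mentioned above; in practice, libraries compute the same quantity from the trapezoidal area under the plotted ROC curve.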
Jan 18, 2025 • 1h 17min

SGEM Xtra: This is My Fight Song – FeminEM 2.0

Dr. Dara Kass, an emergency medicine physician and healthcare equity advocate, joins Dr. Esther Choo, a science communicator and outspoken critic of racism and sexism in medicine, alongside Dr. Jenny Beck-Esmay, an educator passionate about gender equity. They explore the FemInEM 2.0 initiative, sharing personal stories that highlight the challenges women face in healthcare, particularly during the pandemic. The trio discusses the implications of EMTALA on reproductive rights, emphasizing the need for community resilience and advocacy in emergency medicine.
Jan 11, 2025 • 35min

SGEM#465: Not A Second Time – Single Center RCTs Fail To Replicate In Multi-Center RCTs

In this discussion, Dr. Scott Weingart, an ED Intensivist from New York with a rich background in Trauma and Critical Care, dives into the reliability of clinical trials. He highlights the challenges of replicating single-center randomized trials in larger, multi-center settings, pointing out significant discrepancies in outcomes. The conversation also touches on the importance of methodology in trial design and the real-world applicability of results, encouraging ongoing training and clinical judgment in emergency medicine.
Jan 4, 2025 • 0sec

SGEM Xtra: Think, About It – Ten Commandments for Teachers

In this engaging discussion, Akil Dasan, a versatile artist known for his musical talents and his contribution to Us3, joins to explore the Ten Commandments for Teachers. They dive into Bertrand Russell's thoughts on liberalism versus tyranny, highlighting the significant role of critical thinking in education. The conversation challenges assumptions and emphasizes the necessity of open dialogue and mutual respect in doctor-patient relationships. Dasan also shares insights on fostering empathy and navigating the complexities of evidence-based practices, all while weaving in his artistic perspective.
Dec 28, 2024 • 44min

SGEM#464: I Can Do It with A Broken Heart – Compassion for Patients with OUD

Savannah Steinhauser, a fourth-year medical student and founder of CMSRU Outreach Alliance, discusses her experiences providing street outreach for opioid use disorder patients in Camden, NJ. The conversation delves into the critical need for compassion in emergency medicine, especially when treating individuals with opioid addiction. They explore the stigma these patients face and how empathetic care can significantly impact their treatment experiences. Additionally, Steinhauser highlights the importance of community engagement and dedicated outreach efforts in supporting this vulnerable population.
Dec 21, 2024 • 21min

SGEM Xtra: The 12 Days of Christmas the SGEM Gave to Me

Dr. Chris Carpenter, Vice Chair of Emergency Medicine at Mayo Clinic, shares his expertise in statistics with a festive twist. The discussion hilariously explores the 12 Nerdy Days of Christmas, highlighting common statistical faux pas. From the misunderstood P value to the importance of confidence intervals, Carpenter explains how to safeguard clinical decisions. With a playful debate on holiday movie traditions and whimsical insights into evidence-based medicine, this episode combines holiday cheer with vital statistical empowerment.
Dec 14, 2024 • 41min

SGEM #463: Like the Legend of the Phoenix… Criteria for Sepsis

Prof. Damian Roland, a Pediatric Emergency Medicine expert from the University of Leicester, and Dr. Halden Scott, a sepsis researcher at Children's Hospital Colorado, delve into the complexities of diagnosing pediatric sepsis. They discuss the alarming symptoms presented in a case of a critically ill child and highlight the urgent need for improved diagnostic criteria. The Phoenix Criteria are introduced as a revolutionary tool for identifying sepsis, emphasizing its evidence-based approach and potential to enhance clinical decision-making in emergency settings.
