The Skeptics Guide to Emergency Medicine

Dr. Ken Milne
Dec 23, 2020 • 1h 1min

SGEM Xtra: Relax – Dammit!

Date: December 21st, 2020

Professor Tim Caulfield

This is an SGEM Xtra book review. I had the pleasure of interviewing Professor Timothy Caulfield. Tim is a Canadian professor of law at the University of Alberta, the Research Director of its Health Law Institute, and current Canada Research Chair in Health Law and Policy. His area of expertise is in legal, policy and ethical issues in medical research and its commercialization. Tim came on the SGEM to discuss his new book, Relax, Dammit! A User's Guide to the Age of Anxiety. Listen to the podcast to hear us discuss his new book, skepticism, and science communication in general.

The SGEM has a global audience with close to 45,000 subscribers. Many of the SGEMers live in the US, and Tim's book has a different title in America: Your Day Your Way: The Facts and Fictions Behind Your Daily Decisions. Tim gives some insight on the podcast into why the title differs between Canada and the US.

Tim and I met in 2015 at the Canadian Association of Emergency Physicians (CAEP) Annual Conference in Edmonton. He was a keynote speaker and discussed his previous book Is Gwyneth Paltrow Wrong about Everything? How the Famous Sell Us Elixirs of Health, Beauty & Happiness. Tim gave a fantastic presentation. I was in Edmonton talking nerdy as part of the CAEP TV initiative. We have been in contact via social media ever since, trying to improve science communication. Besides writing books, Tim has starred in his own Netflix series called A User's Guide to Cheating Death. He has also collaborated with Dr. Jennifer Gunter, who wrote the book The Vagina Bible. Dr. Gunter visited BatDoc a few years ago for an SGEM Xtra episode.

A few of Professor Caulfield's academic publications:
Commentary: the law, unproven CAM and the two-hats fallacy. Focus on Alternative and Complementary Therapies, 17: 4-8.
Stem cell hype: Media portrayal of therapy translation. Science Translational Medicine. 11 Mar 2015: Vol. 7, Issue 278, pp. 278ps4.
Injecting doubt: responding to the naturopathic anti-vaccination rhetoric. Journal of Law and the Biosciences, Volume 4, Issue 2, August 2017, Pages 229-249.
COVID-19 and 'immune boosting' on the internet: a content analysis of Google search. BMJ Open 2020;10:e040989.

Previous books reviewed on the SGEM:
Jeanne Lenzer: The Danger Within Us: America's Untested, Unregulated Medical Device Industry and One Man's Battle to Survive It.
Dr. Steven Novella: The Skeptics' Guide to the Universe: How to Know What's Really Real in a World Increasingly Full of Fake.
Dr. Brian Goldman: The Power of Kindness: Why Empathy is Essential in Everyday Life.

Tim's new book Relax, Dammit! is organized around a day in the life of Tim Caulfield and discusses the science behind our daily activities. On the podcast Tim provides five examples that he thinks might be interesting to the SGEM audience: breakfast, coffee, commuting to work, napping and raw milk.

I hope you like this type of SGEM Xtra. Let me know what you think and I will consider doing more book reviews with authors if the feedback is positive. The SGEM will be back next episode with a structured critical review of a recent publication, trying to cut the knowledge translation window down from over ten years to less than one year.

Remember to be skeptical of anything you learn, even if you heard it on the Skeptics' Guide to Emergency Medicine.
Dec 19, 2020 • 25min

SGEM#312: Oseltamivir is like Bad Medicine – for Influenza

Date: December 16th, 2020

Reference: Butler et al. Oseltamivir plus usual care versus usual care for influenza-like illness in primary care: an open-label, pragmatic, randomised controlled trial. The Lancet 2020

Guest Skeptic: Dr. Justin Morgenstern is an emergency physician and the creator of the #FOAMed project called First10EM.com. He has a great new blog post about how we are failing to protect our healthcare workers during COVID-19.

Case: A 45-year-old female presents to her primary care clinician complaining of fever, sore throat and muscle aches. She did not get a flu shot this year. You diagnose her with an influenza-like illness (ILI). She wants to know if taking an anti-viral like oseltamivir (Tamiflu) will help.

Background: We covered oseltamivir six years ago in SGEM#98. This is still the longest Cochrane review (300+ pages) I have ever read (Jefferson et al 2014a). The overall bottom line was that, when balancing potential risks and potential benefits, the evidence does not support routine use of neuraminidase inhibitors like oseltamivir for the treatment or prevention of influenza in any individual.

There has been some controversy around oseltamivir. It was approved by licensing agencies and promoted by the WHO based on unpublished trials. None of those agencies had actually looked at the unpublished data. In fact, the primary authors of key oseltamivir trials had never been given access to the data; Roche just told them what the data supposedly said. Other papers were ghost-written (Cohen 2009). The BMJ was involved in a legal battle with Roche for half a decade trying to get access to that information. When they finally got their hands on the data, the conclusions of the reviews suddenly changed. After countries had spent billions stockpiling the drug, it turned out that oseltamivir had no effect on influenza complications, was not effective in prophylaxis, and had significantly more harms than originally reported (Jefferson 2014a; Jefferson 2014b). You can read more details about this controversy in the BMJ.

The oseltamivir issue is a great example of the problems with conflicts of interest (COI) in medical research. This is something I have spoken about often. It is not an ad hominem attack on any of the authors. Our current system of medical research involves industry funding. COIs are just another data point that needs to be considered, because the evidence shows COIs can introduce bias into RCTs, SRMAs and clinical practice guidelines. When I use the term bias, I am referring to something that systematically moves us away from the "truth".

There is specific evidence of bias in the oseltamivir literature. Dunn and colleagues looked at 37 assessments done in 26 systematic reviews and then compared their conclusions to the financial conflicts of interest of the authors. Among eight assessments where the authors had conflicts, seven (88%) had favourable conclusions about neuraminidase inhibitors. However, among the 29 assessments made by authors without conflicts, only five (17%) were positive (Dunn et al 2014).
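The evidence summary below quotes harms as numbers needed to harm (NNH). As a quick refresher, the NNH is simply the reciprocal of the absolute risk increase between the treatment and control arms. Here is a minimal sketch in Python; the event rates are illustrative placeholders chosen to reproduce an NNH of about 28, not the Cochrane review's exact figures.

```python
def nnh(event_rate_treatment: float, event_rate_control: float) -> float:
    """Number needed to harm = 1 / absolute risk increase (ARI)."""
    ari = event_rate_treatment - event_rate_control
    if ari <= 0:
        raise ValueError("No excess harm: the ARI must be positive to compute an NNH.")
    return 1.0 / ari

# Illustrative (hypothetical) rates: 9.9% nausea on oseltamivir vs 6.3% on control
# gives an ARI of ~3.6 percentage points, i.e. an NNH of roughly 28.
print(round(nnh(0.099, 0.063)))  # -> 28
```

The same arithmetic lies behind the vomiting (NNH 22), neuropsychiatric events (NNH 94) and headaches (NNH 32) figures quoted below.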
The current best evidence shows that oseltamivir (Jefferson et al 2014a):
Decreases time to first alleviation of symptoms by less than one day
Does not statistically change the hospital admission rate (1.7% vs 1.8%)
Does increase nausea (NNH 28) and vomiting (NNH 22)
Does increase neuropsychiatric events (NNH 94)
Does increase headaches (NNH 32)

Clinical Question: Does oseltamivir improve time to recovery in patients presenting to their primary care clinician with an influenza-like illness?

Reference: Butler et al. Oseltamivir plus usual care versus usual care for influenza-like illness in primary care: an open-label, pragmatic, randomised controlled trial. The Lancet 2020.

Population: Patients from 15 European countries over three influenza seasons who were one year of age and older and who presented to their primary care clinician with symptoms of influenza-like illness (ILI). ILI was defined as a "sudden onset of self-reported fever, with at least one respiratory symptom (cough, sore throat, or running or congested nose) and one systemic symptom (headache, muscle ache, sweats or chills, or tiredness), with symptom duration of 72 h or less during a seasonal influenza epidemic."
Exclusions: Chronic renal failure, substantially impaired immunity, patients in whom the treating clinician thought Tamiflu or admission to hospital was required, allergy, planned general anesthesia in the next two weeks, life expectancy less than six months, severe hepatic impairment, requirement for any live viral vaccine in the next seven days, and in some jurisdictions pregnant or lactating women.
Intervention: Oseltamivir (Tamiflu) 75 mg by mouth twice daily for five days in adults and children more than 40 kg. For children (13 years or younger), oral suspension was given according to weight (children weighing 10-15 kg received 30 mg, >15-23 kg received 45 mg, >23-40 kg received 60 mg, and >40 kg received 75 mg).
Comparison: Usual care
Outcomes:
Primary Outcome: Patient-reported time to recovery based on daily symptom journals. Recovery was defined as having returned to usual daily activity, with fever, headache, and muscle ache rated as minor or no problem, in key subgroups.
Secondary Outcomes: Cost-effectiveness, hospital admissions, complications related to ILI, repeat attendance in general practice, time to alleviation of symptoms of ILI, incidence of new or worsening symptoms, time to initial reduction in severity of symptoms, use of additional symptomatic and prescribed medication (including antibiotics), transmission of infection within the household, self-management of symptoms of ILI, and adverse events/harms.

Authors' Conclusions: "Primary care patients with influenza-like illness treated with oseltamivir recovered one day sooner on average than those managed by usual care alone. Older, sicker patients with comorbidities and longer previous symptom duration recovered 2–3 days sooner."

Quality Checklist for Randomized Clinical Trials:
The study population included or focused on those in the emergency department. No
The patients were adequately randomized. Yes
The randomization process was concealed. Yes
The patients were analyzed in the groups to which they were randomized. Yes
The study patients were recruited consecutively (i.e. no selection bias). Unsure
The patients in both groups were similar with respect to prognostic factors. Yes
All participants (patients, clinicians, outcome assessors) were unaware of group allocation. No
All groups were treated equally except for the intervention. Yes
Follow-up was complete (i.e. at least 80% for both groups). Yes
All patient-important outcomes were considered. No
The treatment effect was large enough and precise enough to be clinically significant. No

Key Results: They enrolled 3,266 people from 15 European countries over three influenza seasons. Slightly more than half (52%) had a PCR-confirmed influenza infection.

Primary Outcome: Time to recovery
Mean benefit from oseltamivir was 1.02 days (95% BCrI 0.74 to 1.31)

Some people may not have heard of the Bayesian credible interval (BCrI). It is very much like the 95% confidence interval we talk about, but it reflects the fact that there is a big difference between Bayesian statistics and frequentist statistics. Bayesian statistics simply tell us that prior probability matters.

Secondary Outcomes:
No statistical differences were identified in patient-reported repeat visits with health-care services, hospitalisations, x-ray confirmed pneumonia, or over-the-counter use of medication containing acetaminophen or ibuprofen.
There was more nausea or vomiting in the intervention group compared to the usual care group (21% vs. 16%).

1) Conflicts of Interest (COIs): Multiple authors reported COIs with Roche (maker of oseltamivir). We already talked about this issue in the background material. We do not consider a COI necessarily as a negative, but rather a potential source of bias that needs to be considered when interpreting the literature. There is a good review on this issue of COI and reducing bias by Bradley et al in the JRSM 2020.

2) Blinding: The lack of blinding and the fact that the primary outcome is subjective are major limitations of this trial. With that combination, we expect significant bias. We expect patients given the fancy pill to think they are getting better (placebo effect), while patients who were given nothing will see no such effect or may even feel worse. In my mind, there is really no reason to design the trial this way. The authors say that they "deliberately chose to do an open-label trial in the context of everyday practice, because effect sizes identified by placebo-controlled, efficacy studies with tight inclusion criteria might not be reproduced in routine care. We also wished to estimate time to patient reported recovery from the addition of an antiviral agent to usual care rather than benefit from oseltamivir treatment compared with placebo."

The logic here seems to be completely backwards. There is certainly a role for real-world trials, because treatments often look worse in the real world, when medications are not always taken and patients are not so tightly selected. However, the existing evidence for oseltamivir is weak, so a trial designed to see a worse outcome in a real-world setting doesn't make a lot of sense. More importantly, the desire to study oseltamivir combined with usual care has nothing to do with using a placebo or properly blinding a trial. There are many trials that compare usual care plus a treatment to usual care plus placebo. Deciding to make the trial unblinded simply introduces unnecessary bias. Interestingly,
Dec 12, 2020 • 33min

SGEM#311: Here We Go Loop De Loop to Treat Abscesses

Date: December 10th, 2020

Reference: Ladde et al. A Randomized Controlled Trial of Novel Loop Drainage Technique Versus Drainage and Packing in the Treatment of Skin Abscesses. AEM December 2020

Guest Skeptic: Dr. Kirsty Challen (@KirstyChallen) is a Consultant in Emergency Medicine and Emergency Medicine Research Lead at Lancashire Teaching Hospitals Trust (North West England). She is Chair of the Royal College of Emergency Medicine Women in Emergency Medicine group and is involved with the RCEM Public Health and Informatics groups. Kirsty is also the creator of the wonderful infographics called #PaperinaPic.

Case: A 52-year-old previously healthy woman presents to your emergency department (ED) with an abscess on her left forearm. She is systemically well and there is no sign of tracking, so you decide to perform incision and drainage in the ED. When you ask your nursing colleague to set up the equipment, he wants to know if you will be using standard packing or a vessel loop drainage technique.

Background: We have covered the issue of abscesses multiple times on the SGEM. Way back in 2012 we looked at packing after incision and drainage (I&D) on SGEM#13 and concluded routine packing might not be necessary. Another topic covered was whether irrigating after I&D was superior to not irrigating (SGEM#156). The bottom line from that critical appraisal was that irrigation is probably not necessary.

Chip Lange (PA)

The use of antibiotics after I&D is another treatment modality that has been debated over the years. Chip Lange and I interviewed Dr. David Talan about his very good NEJM randomized controlled trial on SGEM#164. The bottom line was that the addition of TMP/SMX to the treatment of uncomplicated cutaneous abscesses represents an opportunity for shared decision-making.

One issue that has not been covered yet is the loop technique. This is when one or multiple vessel loops are put through the abscess cavity via a couple of small incisions. An advantage of this technique over packing (which is not necessary) is that the vessel loops do not need to be changed or replaced.

Clinical Question: In uncomplicated abscesses drained in the ED, does the LOOP technique reduce treatment failure?

Reference: Ladde et al. A Randomized Controlled Trial of Novel Loop Drainage Technique Versus Drainage and Packing in the Treatment of Skin Abscesses. AEM December 2020

Population: Patients of any age undergoing ED drainage of skin abscesses
Exclusions: Patients with an abscess located on the hand, foot, or face, or who required admission and/or operative intervention.
Intervention: LOOP technique where a vessel tie is left in situ
Comparison: Standard packing with sterile ribbon gauze
Outcomes:
Primary Outcome: Treatment failure (need for a further procedure, IV antibiotics or operative intervention), assessed at 36 hours.
Secondary Outcomes: Ease of procedure, pain at the time of treatment, ease of care at 36 hours, pain at 36 hours.

Dr. Ladde

This is an SGEMHOP episode, which means we have the lead author on the show. Dr. Ladde is an active academic emergency physician working at Orlando Regional Medical Center, serving as core faculty and Senior Associate Program Director. Jay also holds the rank of Professor of Emergency Medicine at the University of Central Florida College of Medicine.

Authors' Conclusions: "The LOOP and packing techniques had similar failure rates for treatment of subcutaneous abscesses in adults, but the LOOP technique had significantly fewer failures in children. Overall, pain and patient satisfaction were significantly better in patients treated using the LOOP technique."
Quality Checklist for Randomized Clinical Trials:
The study population included or focused on those in the emergency department. Yes
The patients were adequately randomized. Yes
The randomization process was concealed. Yes
The patients were analyzed in the groups to which they were randomized. Unsure
The study patients were recruited consecutively (i.e. no selection bias). No
The patients in both groups were similar with respect to prognostic factors. Yes
All participants (patients, clinicians, outcome assessors) were unaware of group allocation. No
All groups were treated equally except for the intervention. Yes
Follow-up was complete (i.e. at least 80% for both groups). Yes
All patient-important outcomes were considered. Yes
The treatment effect was large enough and precise enough to be clinically significant. No

Key Results: They recruited 256 participants into the trial, with 90% (196) having outcome data. The mean age was 22 years, 71% were thought to also have cellulitis, and 83% (213/256) received antibiotics at discharge. More than 80% of those prescribed antibiotics were given the combination of cephalexin and TMP-SMX. There was no statistical difference in treatment failure between the loop technique and packing.

Primary Outcome: Treatment failure
20% (95% CI 12-28%) in the packing group vs. 13% (95% CI 6-20%) in the LOOP group; p=0.25.

Secondary Outcomes:
Treatment failure in children: 21% (95% CI 8-34%) in the packing group vs 0% in the LOOP group; p=0.002.
Ease and pain of the procedure were the same, but ease of care and pain over 36 hours and patient satisfaction at 10 days were improved in the LOOP group.

We have five nerdy questions for Jay. Listen to the podcast on iTunes to hear his responses.

1) Old Data: This study was conducted from March 14, 2009, until April 10, 2010. Why the delay, and do you think the results are still valid today?

2) Convenience Sample: You only recruited when the research team was available. This is a common limitation seen in EM research. Did you manage to cover the whole working week adequately?

3) Children: You did a subgroup analysis of children. This was not pre-planned and should be considered hypothesis generating. Why do you think they appear to have responded differently, and have you tried to confirm this result?

4) Blinding: We appreciate it can be difficult to blind the clinician and patient to treatment allocation. However, would it have been possible to blind the outcome assessors? The clinician could have removed the packing or loop and then a research assistant could have assessed the outcome blinded to treatment modality.

5) Comparison Group: You compared this to ribbon packing. We have evidence that this is not necessary (SGEM#13). Have you considered repeating the trial and investigating the LOOP technique compared to not packing the abscess?

Comment on Authors' Conclusion Compared to SGEM Conclusion: We agree with the authors' conclusions about failure rates in adults, and pain and satisfaction overall. We are more cautious about the reduced failure rate in children and think this has room for further exploration.

SGEM Bottom Line: Consider putting in a vessel LOOP on your next uncomplicated abscess.

Case Resolution: You ask the nurse to set up for the LOOP technique and the patient leaves after the procedure with follow-up planned.
Dr. Kirsty Challen

Clinical Application: Using the LOOP technique can result in less pain and easier care for the patient in the 36 hours following I&D.

What Do I Tell My Patient? There are two techniques for draining an abscess, which have similar failure rates, but leaving a small piece of rubber through the wound rather than filling it with cloth ribbon makes it more comfortable over the next 36 hours.

Keener Kontest: Last week's winner was Dr. Matt Runnalls. He is an EM physician from Cambridge, Ontario. Matt knew 3.2% of women over the age of 65 (1.9% of all people over the age of 65) present to the ED with dizziness/vertigo according to the 2017 NHAMCS database. Listen to the SGEM podcast to hear this week's question. Send your answer to TheSGEM@gmail.com with "keener" in the subject line. The first correct answer will receive a cool skeptical prize.

SGEMHOP: Now it is your turn SGEMers. What do you think of this episode on the loop technique to treat abscesses in the ED? Tweet your comments using #SGEMHOP. What questions do you have for Jay and his team? Ask them on the SGEM blog. The best social media feedback will be published in AEM.

Also, don't forget that those of you who are subscribers to Academic Emergency Medicine can head over to the AEM home page to get CME credit for this podcast and article. We will put the process on the SGEM blog:
Go to the Wiley Health Learning website
Register and create a log in
Search for Academic Emergency Medicine – "December"
Complete the five questions and submit your answers
Please email Corey (coreyheitzmd@gmail.com) with any questions or difficulties.

Remember to be skeptical of anything you learn, even if you heard it on the Skeptics' Guide to Emergency Medicine.
Dec 5, 2020 • 28min

SGEM#310: I Heard A Rumour – ER Docs are Not Great at the HINTS Exam

Date: November 30th, 2020

Reference: Ohle R et al. Can Emergency Physicians Accurately Rule Out a Central Cause of Vertigo Using the HINTS Examination? A Systematic Review and Meta-analysis. AEM 2020

Guest Skeptic: Dr. Mary McLean is an Assistant Program Director at St. John's Riverside Hospital Emergency Medicine Residency in Yonkers, New York. She is the New York ACEP liaison for the Research and Education Committee and is a past ALL NYC EM Resident Education Fellow.

Case: A 50-year-old female presents to your community emergency department in the middle of the night with new-onset constant but mild vertigo and nausea. She has nystagmus but no other physical exam findings. You try meclizine, ondansetron, valium, and fluids, and nothing helps. Her head CT, taken three hours after symptom onset, is negative. You're about to call in your MRI tech from home, but then you remember reading that the HINTS exam is more sensitive than early MRI for diagnosis of posterior stroke. You wonder, "Why can't I just rule out stroke with the HINTS exam? How hard can it be?" You perform the HINTS exam and the results are reassuring, but the patient's symptoms persist...

Background: Up to 25% of patients presenting to the ED with acute vestibular syndrome (AVS) have a central cause of their vertigo, most commonly posterior stroke. Posterior circulation strokes account for up to 25% of all ischemic strokes [1]. MRI diffusion-weighted imaging (DWI) is only 77% sensitive for detecting posterior stroke when performed within 24 hours of symptom onset [2,3].

As an alternative diagnostic method, the HINTS exam was first established in 2009 to better differentiate central from peripheral causes of AVS [4]. But what is the HINTS exam? It is a combination of three structured bedside assessments: the head impulse test of vestibulo-ocular reflex function, nystagmus characterization in various gaze positions, and the test of skew for ocular alignment. When used by neurologists and neuro-ophthalmologists with extensive training in these exam components, it has been found to be nearly 100% sensitive and over 90% specific for central causes of AVS [5-8].

Over the past decade, some emergency physicians have adopted this examination into their own bedside clinical assessment and documentation. We've used it to make decisions for our patients, particularly when MRI is not readily available. We've even used it to help decide whether or not to get a head CT. But we've done this without the extensive training undergone by neurologists and neuro-ophthalmologists, and without any evidence that the HINTS exam is diagnostically accurate in the hands of emergency physicians.

Clinical Question: Can emergency physicians accurately rule out a central cause of vertigo using the HINTS examination?

Reference: Ohle R et al. Can Emergency Physicians Accurately Rule Out a Central Cause of Vertigo Using the HINTS Examination? A Systematic Review and Meta-analysis. AEM 2020
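Before working through the PICO, it helps to translate sensitivity and specificity into something more bedside-friendly: the probability of a central cause after a negative HINTS exam. The sketch below does that conversion, assuming (for illustration) the 25% upper-end pretest probability quoted in the background and plugging in the pooled estimates reported in the Key Results further down; treat it as an illustration rather than a patient-level calculation.

```python
def prob_disease_given_negative(pretest: float, sensitivity: float, specificity: float) -> float:
    """Post-test probability of disease after a negative test (i.e. 1 - negative predictive value)."""
    missed = pretest * (1 - sensitivity)      # diseased patients the test calls negative
    cleared = (1 - pretest) * specificity     # non-diseased patients correctly called negative
    return missed / (missed + cleared)

pretest = 0.25  # assumed: up to 25% of ED patients with AVS have a central cause

# Neurologists/neuro-ophthalmologists: sensitivity 96.7%, specificity 94.8%
print(f"{prob_disease_given_negative(pretest, 0.967, 0.948):.1%}")  # ~1.1% residual risk of stroke

# Cohort including emergency physicians: sensitivity 83.3%, specificity 43.8%
print(f"{prob_disease_given_negative(pretest, 0.833, 0.438):.1%}")  # ~11% residual risk of stroke
```

Under those assumptions, a "negative" HINTS exam in the cohort that included emergency physicians still leaves roughly a one-in-nine chance of a central cause, which is the crux of the authors' conclusion.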
Population: Adult patients presenting to an ED with AVS
Exclusions: Non-peer-reviewed studies, unpublished data, retrospective studies, vertigo which stopped before or during workup, incomplete HINTS exam, or studies with data overlapping with another included study
Intervention: HINTS examination by an emergency physician, neurologist, or neuro-ophthalmologist
Comparison: CT and/or MRI
Outcome: Diagnostic accuracy of the HINTS examination for a central cause of AVS (i.e., posterior stroke)

Authors' Conclusions: "The HINTS examination, when used in isolation by emergency physicians, has not been shown to be sufficiently accurate to rule out a stroke in those presenting with AVS."

Quality Checklist for Systematic Review Diagnostic Studies:
The diagnostic question is clinically relevant with an established criterion standard. Unsure
The search for studies was detailed and exhaustive. Yes
The methodological quality of primary studies was assessed for common forms of diagnostic research bias. Yes
The assessment of studies was reproducible. Yes
There was low heterogeneity for estimates of sensitivity or specificity. No
The summary diagnostic accuracy is sufficiently precise to improve upon existing clinical decision-making models. Unsure

Key Results: They searched multiple electronic databases, with no language or age restrictions, and the gray literature. The authors identified 2,695 citations, with five articles meeting inclusion criteria and a total of 617 patients. There were no studies that included only emergency physicians performing the HINTS examination.

Essentially, the authors separated the studies into two cohorts according to the medical specialties of the HINTS examiners, and for each cohort they reported the sensitivity and specificity of the HINTS exam for diagnosis of posterior stroke. The first cohort included neurologists and neuro-ophthalmologists. The sensitivity and specificity of the HINTS examination were 96.7% (95% CI 93.1 to 98.5) and 94.8% (95% CI 91 to 97.1). In contrast, the second cohort (only one study) included emergency physicians and neurologists. The sensitivity and specificity were much lower at 83.3% (95% CI 63.1 to 93.6) and 43.8% (95% CI 36.7 to 51.2). From these results, it was deduced that emergency physicians' participation in the latter cohort resulted in the reduced diagnostic accuracy. They did not combine the five studies into one summary result due to the heterogeneity of the included studies, which was >40%.

1) Available Studies: Unfortunately, there were only five studies meeting the inclusion criteria, for a total of 617 patients. This is a known limitation of systematic reviews: authors are limited by the available studies.

2) Biases: On the QUADAS-2 assessment, four of these studies had at least one component at high risk of bias, and three studies had unclear reporting on at least one component, meaning that quality was low. Adherence to the STARD reporting guidelines was mediocre to poor overall because only two of the studies reported on most of the items in the guidelines. We will put the figure that represents the risk of bias of the included studies in the show notes.

The reference standard used in these studies for all recruited patients was CT or MRI. We know CT imaging has a low sensitivity for posterior strokes. One of the studies allowed a negative head CT alone as adequate imaging to rule out posterior stroke. With such low sensitivity of CT imaging for posterior strokes, this crucial diagnosis can be missed.
Even MRI-DWI only has a reported sensitivity of 77%. This problem in diagnostic testing studies is called imperfect gold standard bias (or "copper standard" bias): it can happen when the "gold" standard is not actually that good a test.

Another bias identified was partial verification bias (also called referral or workup bias). This happens when only a certain subset of patients suspected of having the condition are verified by the reference standard (CT or MRI). So AVS patients with suspected strokes and a positive HINTS exam were more likely to get advanced neuroimaging than those with a negative HINTS exam. This would increase sensitivity but decrease specificity.

It is unknown if the original studies included consecutive patients or a convenience sample of patients. The latter could introduce spectrum bias: sensitivity can depend on the spectrum of disease, while specificity can depend on the spectrum of non-disease. Four out of the five studies had the ED physician identifying the patients for referral. If patients with indeterminate or ambiguous presentations (rather than all patients presenting with AVS) were excluded, this could falsely raise sensitivity. Because there were few studies, assessing publication bias was difficult.

For those interested in understanding the direction of bias in studies of diagnostic test accuracy, there is a fantastic article by Kohn et al AEM 2013. There is also a good book by Dr. Pines and colleagues on the topic.

3) Heterogeneity: The authors used the I2 statistic to represent heterogeneity. The overall I2 values were 53% for sensitivity and 94% for specificity, likely representing moderate and considerable heterogeneity, respectively. Notably, for the neurologist and neuro-ophthalmologist cohort alone, the I2 was 0, representing low or negligible heterogeneity [9].

4) Precision and Reliability: There is poor precision overall. Specifically, for the cohort of emergency physicians with neurologists, the 95% confidence intervals were very wide for both sensitivity (83%; 95% CI 63 to 94) and specificity (44%; 95% CI 37 to 51). The HINTS exam cannot yet be relied upon by emergency physicians as a bedside tool to rule out stroke. We simply do not have the evidence to support its adequacy as a diagnostic tool in the hands of emergency physicians, and in fact we may now have a "hint" of evidence to the contrary.

I get it. We all got so excited in 2009 when we read about the HINTS exam and how well it worked for neurologists and neuro-ophthalmologists. The idea of it was spellbinding and almost hypnotic: a bedside test that was quick and free and more sensitive than early MRI. We all looked up YouTube videos on how to perform the exam, and we had to triple check how to interpret it after we got back to our desks. We dove into this too fast and too deep, before receiving structured training on this difficult exam that we thought was simple, and before learning exactly which kinds of patients it was appropriate for. We need to take a step back and be methodical, and what we really need is a large multi-center RCT on the diagnostic accuracy of the HINTS exam in the hands of emergency physicians.

5) Generalizability and Validity of Conclusions: The authors did not restrict their search to any particular language,
Nov 28, 2020 • 30min

SGEM#309: That’s All Joe Asks of You – Wear a Mask

Date: November 25th, 2020

Guest Skeptic: Dr. Joe Vipond has worked as an emergency physician for twenty years, currently at the Rockyview General Hospital. He is the President of the national charity Canadian Association of Physicians for the Environment (CAPE), as well as the co-founder and co-chair of the local non-profit the Calgary Climate Hub and, during COVID, the co-founder of Masks4Canada. Joe grew up in Calgary and continues to live there with his wife and two daughters.

Reference: Bundgaard et al. Effectiveness of Adding a Mask Recommendation to Other Public Health Measures to Prevent SARS-CoV-2 Infection in Danish Mask Wearers: A Randomized Controlled Trial. Annals of Internal Medicine 2020

Case: Alberta is the last province in Canada that has yet to enact a mandatory mask policy. Should they do it?

Mask4All Debate

Background: During a respiratory pandemic, there remain substantial questions about the utility and risk of face masks for prevention of viral transmission. We debated universal mandatory masking back in the spring on an SGEM Xtra episode. Some very well-known evidence-based medicine experts like Dr. Trisha Greenhalgh were advocating in favour of stricter mask regulations based on the precautionary principle (Greenhalgh et al BMJ 2020). She was challenged on her position (Martin et al BMJ 2020) and responded with an article called Laying straw men to rest (Greenhalgh JECP 2020).

A limitation of science is the available evidence. SARS-CoV-2 is a novel virus and we did not have much information specifically about the efficacy of masks. We needed to extrapolate from previous research on masks and other respiratory illnesses. However, we do have a firm understanding of the germ theory of disease, and masks have been used for over 100 years as an infectious disease strategy. It was surgeons in the late 1890s who began wearing masks in the operating theatre. There was skepticism back then as to the efficacy of a "surgical costume" (bonnet and mouth covering) to prevent disease and illness during surgery (Strasser and Schlich Lancet 2020).

There was one recent cluster randomized controlled trial looking at surgical masks, cloth masks or a control group in healthcare workers (MacIntyre et al BMJ 2015). The main outcomes were clinical respiratory illness, influenza-like illness and laboratory-confirmed respiratory virus infection. All infectious outcomes were highest in the cloth mask group, lower in the control group and lowest in the medical mask group. As with all studies, this one had limitations. One of the main ones is that it looked at healthcare workers wearing a mask as protection, not the general public wearing masks as source control.

There has been a systematic review and meta-analysis on physical distancing, face masks and eye protection to prevent SARS-CoV-2 (Chu et al Lancet 2020). With regards to masks, they found that face masks could result in a large reduction in the risk of infection, with a stronger association for N95 or similar respirators compared with disposable surgical masks or similar cloth masks. SRMAs also have limitations, and one of the main ones is that they are dependent on the quality of the included studies. This review in the Lancet included ten studies (n=2,647), with seven from China, eight looking at healthcare workers (not the general public) and only one looking at COVID-19. All ten studies were observational designs and the authors correctly only claim associations.
They also say their level of certainty about masks being associated with a decrease in disease is "low certainty" based on the GRADE categories of evidence.

When considering an intervention, we cannot just consider the potential benefit; we must also consider the potential harms. There is little or no evidence that wearing a face mask leads to meaningful harm. Yes, there are case reports of harm, children under two years of age should not wear face coverings (AAP News), and studies systematically under-report adverse events (Hodkinson et al BMJ 2013), but the pre-test probability of individual harm is very low. What many studies on masks conclude is that we need better evidence to inform our decisions. Now we have the first published randomized controlled trial on mask wearing in public to prevent transmission of COVID-19.

Clinical Question: Does recommending surgical mask use outside the home reduce wearers' risk of SARS-CoV-2 infection in a setting where masks were uncommon and not among the recommended public health measures?

Reference: Bundgaard et al. Effectiveness of Adding a Mask Recommendation to Other Public Health Measures to Prevent SARS-CoV-2 Infection in Danish Mask Wearers: A Randomized Controlled Trial. Annals of Internal Medicine 2020

Population: Danish adults >18 years of age without symptoms associated with SARS-CoV-2 and without a previous positive SARS-CoV-2 test, working out of home with exposure to other people for more than three hours per day, and who did not normally wear a face mask at work
Exclusions: 18 years of age and younger, previously tested positive for SARS-CoV-2, or wears a face mask at work
Intervention: Participants were encouraged to follow the authorities' general COVID-19 precautions and to wear a surgical face mask for a 30-day period when out of home (50 surgical masks were provided)
Comparison: Participants were encouraged to follow the authorities' general COVID-19 precautions; no face masks were provided and there was no face mask recommendation
Outcomes:
Primary Outcome: SARS-CoV-2 infection at one month by either antibody testing (IgG and/or IgM), polymerase chain reaction (PCR), or hospital diagnosis.
Secondary Outcome: PCR positivity for other respiratory viruses
Tertiary Outcomes: Returned swabs; psychological aspects of face mask wearing in the community; cost-effectiveness analyses on the use of surgical face masks; preference for self-conducted home swab vs. healthcare-conducted swab at hospital or similar; symptoms of COVID-19; self-assessed compliance with health authority guidelines on hygiene; willingness to wear face masks in the future; healthcare-diagnosed COVID-19 or SARS-CoV-2 (antibodies and/or PCR); mortality associated with COVID-19 and all-cause mortality; presence of bacteria (Mycoplasma pneumoniae, Haemophilus influenzae and Legionella pneumophila; to be obtained from registries when made available); frequency of infected household members between the two groups; frequency of sick leave between the two groups (to be obtained from registries when made available); and predictors of the primary outcome or its components

Authors' Conclusions: "The recommendation to wear surgical masks to supplement other public health measures did not reduce the SARS-CoV-2 infection rate among wearers by more than 50% in a community with modest infection rates, some degree of social distancing, and uncommon general mask use. The data were compatible with lesser degrees of self-protection."
Quality Checklist for Randomized Clinical Trials:
The study population included or focused on those in the emergency department. No
The patients were adequately randomized. Yes
The randomization process was concealed. Yes
The patients were analyzed in the groups to which they were randomized. Yes
The study patients were recruited consecutively (i.e. no selection bias). No
The patients in both groups were similar with respect to prognostic factors. Yes
All participants (patients, clinicians, outcome assessors) were unaware of group allocation. No
All groups were treated equally except for the intervention. Yes
Follow-up was complete (i.e. at least 80% for both groups). Yes
All patient-important outcomes were considered. Unsure
The treatment effect was large enough and precise enough to be clinically significant. Unsure

Key Results: The trial included 6,024 people with a mean age of 47 years; almost two-thirds identified as female. There was no statistical difference in SARS-CoV-2 infection between the mask group and the no-mask group.

Primary Outcome: SARS-CoV-2 infection (intention-to-treat)
1.8% mask group vs. 2.1% no-mask group; difference −0.3 percentage points (95% CI −1.2 to 0.4), p=0.38; odds ratio (OR) 0.82 (95% CI 0.54 to 1.23), p=0.33
Per-Protocol Analysis: 1.8% mask group vs. 2.1% no-mask group; absolute difference −0.4 percentage points (95% CI −1.2 to 0.5), p=0.40; OR 0.84 (95% CI 0.55 to 1.26), p=0.40

Secondary Outcomes: Other viral infections 0.5% mask group vs. 0.6% no-mask group

There are a number of nerdy points we could have discussed, but in typical fashion, and to keep the blog/podcast to a digestible length, we have highlighted five.

1) Methods: Some questions have been raised about the methodology. This trial was registered with ClinicalTrials.gov (NCT04337541). The trial protocol was registered with the Danish Data Protection Agency (P-2020-311), adhered to the recommendations for trials described in the SPIRIT checklist, and the methodology was published in the Danish Medical Journal (Bundgaard et al 2020). Some of the comments about the methodology specifically referenced the lack of ethics approval. However, the researchers presented the protocol to the independent regional scientific ethics committee of the Capital Region of Denmark, which did not require ethics approval in accordance with Danish legislation. The trial was also done in accordance with the principles of the Declaration of Helsinki. In the supplemental material there is a letter from the Chairman of the Ethics Committee saying they did not require ethics approval. It is hard to be critical of researchers who took reasonable steps to address ethical concerns and were told they did not need ethics approval.
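To put the trial's size and the "by more than 50%" wording in context, here is the kind of sample-size arithmetic that sits behind a trial like this one. The inputs are assumptions for illustration (a 2% infection rate with usual care, a 50% relative reduction, 80% power, two-sided alpha of 0.05), not figures taken from the published protocol.

```python
from scipy.stats import norm

def n_per_group(p_control: float, p_treatment: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants per arm for a two-proportion comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5 +
                 z_beta * (p_control * (1 - p_control) + p_treatment * (1 - p_treatment)) ** 0.5) ** 2
    return round(numerator / (p_control - p_treatment) ** 2)

# Assumed: 2% infection rate with usual care, halved to 1% by the mask recommendation
n = n_per_group(0.02, 0.01)
print(n, 2 * n)  # roughly 2,300 per arm, ~4,600 total, before allowing for dropouts
```

Add an allowance for loss to follow-up and you land in the neighbourhood of the roughly 6,000 participants enrolled. It also explains the cautious wording of the conclusion: a trial this size can rule out a halving of risk at a 2% baseline, but a smaller real effect (say a 15-20% relative reduction) could easily be missed, which is exactly what the wide confidence interval around the odds ratio of 0.82 reflects.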
Nov 21, 2020 • 32min

SGEM#308: Taking Care of Patients Everyday with Physician Assistants and Nurse Practitioners

Date: November 19th, 2020

Guest Skeptic: Dr. Corey Heitz is an emergency physician in Roanoke, Virginia. He is also the CME editor for Academic Emergency Medicine.

Reference: Pines et al. The impact of advanced practice provider staffing on emergency department care: productivity, flow, safety, and experience. AEM November 2020.

Case: You are the medical director of a medium-sized urban emergency department (ED). Volumes have increased over the past few years and you're considering adding an extra shift or two. Your hospital has asked you to consider adding some advanced practice providers (APPs) instead of physician hours.

Background: Advanced practice providers (APPs) such as nurse practitioners (NPs) and physician assistants (PAs) are increasingly used to cover staffing needs in US emergency departments. This is in part driven by economics, as APPs are paid less per hour than physicians. The calculation works if APP productivity is similar enough to physicians' to offset the differential in billing rates. However, little data exist comparing productivity, safety, flow, or patient experience in emergency medicine.

The American Academy of Emergency Medicine (AAEM) has a recently updated position statement on what they refer to as non-physician practitioners. The American College of Emergency Physicians (ACEP) has a number of documents discussing APPs in the ED. There has been concern about post-graduate training of NPs and PAs in the ED; a joint statement on the issue was published in September this year by AAEM/RSA, ACEP, ACOEP/RSO, CORD, EMRA, and SAEM/RAMS.

Clinical Question: How does the productivity of advanced practice providers compare to emergency physicians, and what is its impact on emergency department operations?

Reference: Pines et al. The impact of advanced practice provider staffing on emergency department care: productivity, flow, safety, and experience. AEM November 2020.

Population: A national emergency medicine group in the USA that included 94 EDs in 19 states
Exposure: Proportion of total clinician hours staffed by APPs in a 24-hour period at a given ED
Comparison: Emergency physician staffing
Outcomes:
Primary Outcome: Productivity measures (patients per hour, RVUs/hour, RVUs/visit, RVUs per relative salary for an hour)
Safety Outcomes: Proportion of 72-hour returns and proportion of 72-hour returns resulting in admission
Other Outcomes: ED flow by length of stay (LOS), left without completion of treatment (LWOT)

Dr. Jesse Pines

This is an SGEMHOP episode, which means we have the lead author on the show. Dr. Jesse Pines is the National Director for Clinical Innovation at US Acute Care Solutions and a Professor of Emergency Medicine at Drexel University. In this role, he focuses on developing and implementing new care models, including telemedicine and alternative payment models, and also leads the USACS opioid programs.

Authors' Conclusions: "In this group, APPs treated less complex visits and half as many patients/hour compared to physicians. Higher APP coverage allowed physicians to treat higher-acuity cases. We found no economies of scale for APP coverage, suggesting that increasing APP staffing may not lower staffing costs. However, there were also no adverse observed effects of APP coverage on ED flow, clinical safety, or patient experience, suggesting little risk of increased APP coverage on clinical care delivery."

Quality Checklist for Observational Study:
Did the study address a clearly focused issue? Yes
Did the authors use an appropriate method to answer their question? Unsure
Was the cohort recruited in an acceptable way? Yes
Was the exposure accurately measured to minimize bias? Yes
Was the outcome accurately measured to minimize bias? Yes
Have the authors identified all important confounding factors? Unsure
Was the follow-up of subjects complete enough? Yes
How precise are the results? Fairly precise
Do you believe the results? Yes
Can the results be applied to the local population? Unsure
Do the results of this study fit with other available evidence? Unsure

Key Results: Over five years there were more than 13 million ED visits at these 94 sites. The majority (75%) of visits were treated by physicians independently. PAs treated 18.6%, NPs treated 5.4%, and 1.4% were treated by both a physician and an APP. Physicians were more productive than physician assistants and nurse practitioners.

Effect of a 10% increase in APP coverage:
Patients/hour: -0.12 (95% CI -0.15 to -0.10)
RVUs/hour: -0.4 (95% CI -0.5 to -0.3)
Safety and Outcome: No significant effect on length of stay, left without treatment, or 72-hour returns

Listen to the podcast on iTunes to hear Jesse's responses to our five nerdy questions.

1) Surprise: These results surprise me somewhat due to personal experience where APPs see lower-acuity patients, often in a "fast-track" area. I don't know our facility data, but I would be surprised if the APPs had significantly lower overall patients/hour than the doctors.

2) Physician Satisfaction: You looked at productivity and safety as outcomes. What about physician satisfaction? I know some doctors who can't function well without an APP and other doctors who prefer working without an APP.

3) Not All Equal: You mention that when making the schedules, one physician hour was equal to two APP hours. For your analysis, it was unclear to me if you calculated your numbers using 1:1 physician-to-APP hours, or if you kept the 1:2 ratio.

4) Patient Satisfaction: You had an exploratory outcome using the Press Ganey (PG) percentile rank as a measure of patient experience. Those outside of the USA may not be familiar with the Press Ganey patient satisfaction survey. Can you explain this metric, and what did you find in your study about patient satisfaction?

5) External Validity: This was a large study with 19 states, 94 sites and 13 million ED visits. However, it represents one large national ED group. Do you think the results would apply to small groups, democratic physician-led groups, or rural sites?

Comment on Authors' Conclusion Compared to SGEM Conclusion: We agree with the authors' conclusions.

SGEM Bottom Line: Increasing advanced practice provider coverage has minimal effect on emergency department productivity, flow and safety outcomes.

Case Resolution: You continue the discussion with hospital administration, understanding that APP hours need to be added in such a way as to best utilize their skillsets, but not as a full replacement for physician hours. You suggest considering a higher number of APP hours to replace one physician hour.

Dr. Corey Heitz

Clinical Application: APPs can be utilized to "offload" lower-acuity cases, while allowing physicians to care for higher-acuity patients. Physicians overall had higher levels of productivity, as measured both by patients/hour and by RVUs/hour.

What Do I Tell My Patient? Not applicable

Keener Kontest: Last week's winner was Dr. Daniel Walter. He is an Emergency Medicine & Critical Care registrar working in the UK.
Dan knew the LAST thing you want to see happen after injecting someone with lidocaine is Local Anesthetic Systemic Toxicity. Listen to the SGEM podcast to hear this week's question. Send your answer to TheSGEM@gmail.com with "keener" in the subject line. The first correct answer will receive a cool skeptical prize.

SGEMHOP: Now it is your turn SGEMers. What do you think of this episode on APPs in the ED? Tweet your comments using #SGEMHOP. What questions do you have for Jesse and his team? Ask them on the SGEM blog. The best social media feedback will be published in AEM.

Also, don't forget that those of you who are subscribers to Academic Emergency Medicine can head over to the AEM home page to get CME credit for this podcast and article. We will put the process on the SGEM blog:
Go to the Wiley Health Learning website
Register and create a log in
Search for Academic Emergency Medicine – "November"
Complete the five questions and submit your answers
Please email Corey (coreyheitzmd@gmail.com) with any questions or difficulties.

Remember to be skeptical of anything you learn, even if you heard it on the Skeptics' Guide to Emergency Medicine.
Oct 31, 2020 • 20min

SGEM#307: Buff up the lido for the local anesthetic

Date: October 29th, 2020

Guest Skeptic: Martha Roberts is a critical and emergency care, triple-certified nurse practitioner currently living and working in Sacramento, California. She is the host of EM Bootcamp in Las Vegas, as well as a regular speaker and faculty member for The Center for Continuing Medical Education (CCME). She writes a blog called The Procedural Pause for Emergency Medicine News and is the lead content editor and director for the video series soon to be included in Roberts & Hedges' Clinical Procedures in Emergency Medicine.

Reference: Vent et al. Buffered lidocaine 1%, epinephrine 1:100,000 with sodium bicarbonate (hydrogencarbonate) in a 3:1 ratio is less painful than a 9:1 ratio: A double-blind, randomized, placebo-controlled, crossover trial. JAAD (2020)

Case: A 35-year-old female arrives at the emergency department with a 3 cm laceration to the palmar surface of her left forearm, sustained from a clean kitchen knife while emptying the dishwasher. The patient reports a fear of needles and has concerns about locally anaesthetizing the area because "I got stitches on my arm once before and that shot burned like crazy!" She asks the practitioner if there is any chance she can get a shot that "burns less" than her last one.

Background: We have covered wound care a number of times on the SGEM. This included some myth busting way back in SGEM#9, called Who Let the Dogs Out. That episode busted five myths about simple wound care in the emergency department:
Patients' Priorities: Infection is not usually the #1 priority for patients. For non-facial wounds it is function, and for facial wounds it is cosmetics. This is in contrast to the clinicians' #1 priority, which is usually infection.
Dilution Solution: You do not need some fancy solution (sterile water, normal saline, etc.) to clean a wound. Tap water is usually fine.
Sterile Gloves: You do not need sterile gloves for simple wound treatment. Non-sterile gloves are fine. Save the sterile gloves for sterile procedures (e.g. lumbar punctures).
Epinephrine in Local Anesthetics: This will not make the tips of things fall off (nose, fingers, toes, etc.). Epinephrine-containing local anesthetics can be used without fear of an appendage falling off.
All Simple Lacerations Need Sutures: Simple hand lacerations less than 2 cm don't need sutures. Glue can be used in many other areas, including criss-crossing hair for scalp lacerations.

Other SGEM episodes on wound care include:
SGEM#63: Goldfinger (More Dogma of Wound Care). This episode looked at how long you have to close a wound. The bottom line was that there is no good evidence of an association between infection and time from injury to repair.
SGEM#156: Working at the Abscess Wash. The question from that episode was: does irrigation of a cutaneous abscess after incision and drainage reduce the need for further intervention? Answer: irrigation of a cutaneous abscess after an initial incision and drainage is probably not necessary.
SGEM#164: Cuts Like a Knife – But You Might Also Need Antibiotics for Uncomplicated Skin Abscesses. SGEM bottom line: the addition of TMP/SMX to the treatment of uncomplicated cutaneous abscesses represents an opportunity for shared decision-making.

The issue of buffering lidocaine was covered on SGEM#13. That episode briefly reviewed a Cochrane SRMA that looked at buffering 9 ml of 1% or 2% lidocaine with 1 ml of 8.4% sodium bicarbonate (Cepeda et al 2010).
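For concreteness, here is what those mixing ratios mean in terms of final concentrations. An "x:1" mixture is x parts of the lidocaine (with or without epinephrine) solution to one part 8.4% sodium bicarbonate, so the buffer also dilutes the anesthetic. This is simple dilution arithmetic, not data from either paper; the 3:1 ratio is the one studied in the trial appraised below.

```python
def buffered_concentrations(lido_pct: float, bicarb_pct: float, ratio: float) -> dict:
    """Final concentrations after mixing `ratio` parts lidocaine solution with 1 part bicarbonate."""
    total_parts = ratio + 1
    return {
        "lidocaine %": round(lido_pct * ratio / total_parts, 2),
        "bicarbonate %": round(bicarb_pct / total_parts, 2),
    }

# 9:1 ratio (as in the Cochrane review): 1% lidocaine is barely diluted
print(buffered_concentrations(1.0, 8.4, 9))  # {'lidocaine %': 0.9, 'bicarbonate %': 0.84}

# 3:1 ratio (the less painful mixture): more buffer, more dilution of the anesthetic
print(buffered_concentrations(1.0, 8.4, 3))  # {'lidocaine %': 0.75, 'bicarbonate %': 2.1}
```

The trade-off is that the better-buffered 3:1 mixture leaves you with 0.75% lidocaine rather than 0.9%, which is worth remembering if you are drawing up a fixed volume.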
The SRMA of buffering lidocaine contained 23 studies, with 8 of the 23 having a moderate to high risk of bias. The SGEM bottom line was that patients might appreciate the extra effort of buffering the lidocaine. Interestingly, this Cochrane review was withdrawn from publication in 2015. The reason provided was that the review was no longer compliant with the Cochrane Commercial Sponsorship Policy. The non-conflicted authors have decided not to update the review.

Clinical Question: Does buffering lidocaine with sodium bicarbonate make local anesthetic less painful?

Reference: Vent et al. Buffered lidocaine 1%, epinephrine 1:100,000 with sodium bicarbonate (hydrogencarbonate) in a 3:1 ratio is less painful than a 9:1 ratio: A double-blind, randomized, placebo-controlled, crossover trial. JAAD (2020)

Population: Healthy volunteers 18-75 years of age
Exclusions: Hypersensitivity or allergies to local anesthetics of the amide type or to auxiliary substances such as sulfites, pregnancy, damaged skin on the arms, or inability to give informed consent.
Intervention: IMPs (investigational medicinal products) were injected 5 cm distal to the cubital fossa
IMP1: 1% lidocaine with epinephrine plus sodium bicarbonate in a 3:1 mixing ratio
IMP2: 1% lidocaine with epinephrine plus sodium bicarbonate in a 9:1 mixing ratio
IMP3: 1% lidocaine with epinephrine
Comparison: Placebo of 0.9% sodium chloride (IMP4)
Outcomes:
Primary Outcome: Pain during infiltration on a numerical rating scale (0-10, with 0 = no pain and 10 = unacceptable pain)
Secondary Outcomes: Patient comfort during infiltration (four categorical terms: desirable, acceptable, less acceptable or unacceptable) and duration of local anesthesia (30-minute intervals up to 3 hours) using a standardized laser stimulus (numbness: yes or no)

Authors' Conclusions: "Lido/Epi-NaHCO3 mixtures effectively reduce burning pain during infiltration. The 3:1 mixing ratio is significantly less painful than the 9:1 ratio. Reported findings are of high practical relevance given the extensive use of local anesthesia today."

Quality Checklist for Randomized Clinical Trials:
The study population included or focused on those in the emergency department. No
The patients were adequately randomized. Yes
The randomization process was concealed. Yes
The patients were analyzed in the groups to which they were randomized. Yes
The study patients were recruited consecutively (i.e. no selection bias). Unsure
The patients in both groups were similar with respect to prognostic factors. Unsure
All participants (patients, clinicians, outcome assessors) were unaware of group allocation. Unsure
All groups were treated equally except for the intervention. Yes
Follow-up was complete (i.e. at least 80% for both groups). Yes
All patient-important outcomes were considered. Yes
The treatment effect was large enough and precise enough to be clinically significant. Unsure

Key Results: They enrolled 48 healthy volunteers, 21 males and 27 females, aged 21-62 with a mean age of 31 years. Buffering lidocaine made injections less painful.

Primary Outcome: Pain during infiltration
IMP1 (3:1 mixture) was less painful than IMP2 (9:1 mixture)
IMP3 (unbuffered) was more painful than IMP1 or IMP2
IMP4 (placebo) was more painful than IMP1-3

Secondary Outcomes: Patient Comfort
Discomfort During Infiltration: IMP1 (3:1 mixture) had the least reported discomfort and IMP4 (placebo) had the most reported discomfort.
Duration of Local Anesthetic: Laser-induced pain was absent in the injection areas for IMP1-3 (intervention groups) between 5 minutes and 3 hours after infiltration, but not for IMP4 (placebo).

1) External Validity: These healthy volunteers with a mean age of 31 years may not represent the patients we see for simple wound repairs in the emergency department. We do not know any details about the volunteers except their age and self-identified gender. The study was also conducted in Germany. Cultural and social factors can play a role in the perception of acute and chronic pain (Peacock and Patel 2018, MM Free 2002 and MM Free 2012).

2) Blinding: Local anesthetic hurts. If the volunteers were aware of the hypothesis (buffering lidocaine to minimize pain), this could have biased the subjective self-reporting for the primary outcome toward a larger effect size.

3) Sample Size: This was a relatively small study with only 48 volunteers. Are the results large enough (3 points on the NRS) and precise enough (no 95% CIs were provided for the point estimates) to be clinically relevant?

4) Shelf-Life: We stock large bottles of sodium bicarbonate and would usually only require a small amount to buffer the amount of lidocaine needed to treat a single patient. This could lead to a great deal of waste. Sodium bicarbonate is not expensive, but a small number multiplied by a big number (the number of simple wound repairs done per day) can end up being a large number. One way around that would be to mix up a larger amount at the start of a shift. However, the stability of the buffered lidocaine-sodium bicarbonate solution is limited. It would be great if a stable commercial product were available in the <10ml solutions we typically require.

5) Alternatives: There are other methods that can be used to minimize the pain of local anesthetic injection. These include, but are not limited to, topical L.E.T. (lidocaine, epinephrine and tetracaine).

Comment on Authors' Conclusion Compared to SGEM Conclusion: We agree that buffering lidocaine with sodium bicarbonate decreases pain during infiltration and that a 3:1 mixture is better than a 9:1 mixture. We are not as sure of the "high" practical relevance due to the issues mentioned in nerdy point #4.

SGEM Bottom Line: Consider buffering your lidocaine with sodium bicarbonate in a 3:1 mixture to decrease the discomfort of local anesthetic infiltration.

Case Resolution: You inform her that there is a way to make the local injection burn less. You mix up your 1% lidocaine in a 3:1 mixture with sodium bicarbonate. She leaves very happy with post-suture instructions.

Clinical Application: If the patient expresses fears about the anesthetic injection, consider buffering the 1% lidocaine with sodium bicarbonate in a 3:1 mixture to reduce the burning sensation during infiltration.
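To make the mixing ratios in nerdy point #4 concrete, here is a minimal sketch of the volume arithmetic in Python. The function name and the 10 ml example are our own illustration (not from the paper, and not a dosing reference); the only point is that a 3:1 mixture uses proportionally more bicarbonate per syringe than the 9:1 mixture reviewed in the older Cochrane SRMA.

```python
def buffered_lidocaine_volumes(total_ml, ratio=(3, 1)):
    """Split a desired total volume into lidocaine and sodium bicarbonate parts.

    ratio is (lidocaine parts, bicarbonate parts), e.g. (3, 1) or (9, 1).
    Illustrative arithmetic only -- not a dosing reference.
    """
    lido_parts, bicarb_parts = ratio
    total_parts = lido_parts + bicarb_parts
    return {
        "1% lidocaine with epinephrine (ml)": round(total_ml * lido_parts / total_parts, 2),
        "8.4% sodium bicarbonate (ml)": round(total_ml * bicarb_parts / total_parts, 2),
    }

# Example: a 10 ml syringe at each of the two ratios studied
print(buffered_lidocaine_volumes(10, (3, 1)))  # 7.5 ml lidocaine + 2.5 ml bicarbonate
print(buffered_lidocaine_volumes(10, (9, 1)))  # 9.0 ml lidocaine + 1.0 ml bicarbonate
```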
Oct 24, 2020 • 21min

SGEM#306: Fire Brigade and the Staying Alive App for OHCAs in Paris

Date: October 21st, 2020

Guest Skeptic: Dr. Justin Morgenstern is an emergency physician, creator of the excellent #FOAMed project called First10EM.com and a member of the #SGEMHOP team.

Reference: Derkenne et al. Mobile Smartphone Technology Is Associated With Out-of-hospital Cardiac Arrest Survival Improvement: The First Year "Greater Paris Fire Brigade" Experience. AEM Oct 2020.

Case: You are waiting in line for coffee, discussing the latest SGEM Hot Off the Press episode on Twitter, when an alert pops up on your phone. It says that someone in the grocery store next door has suffered a cardiac arrest and needs your help. You remember installing this app at a conference last year, but this is the first time you have seen an alert. You abandon your coffee order and quickly head next door, where you are able to start cardiopulmonary resuscitation (CPR) and direct a bystander to find the store's automated external defibrillator (AED) while waiting for emergency medical services (EMS) to arrive. After the paramedics take over, you wonder about the evidence for this seemingly miraculous intervention.

Background: Out of hospital cardiac arrest (OHCA) is something that we have covered many times on the SGEM:

SGEM#64: Classic EM Papers (OPALS Study)
SGEM#136: CPR – Man or Machine?
SGEM#143: Call Me Maybe for Bystander CPR
SGEM#152: Movin' on Up – Higher Floors, Lower Survival for OHCA
SGEM#162: Not Stayin' Alive More Often with Amiodarone or Lidocaine in OHCA
SGEM#189: Bring Me to Life in OHCA
SGEM#231: You're So Vein – IO vs. IV Access for OHCA
SGEM#238: The Epi Don't Work for OHCA
SGEM#247: Supraglottic Airways Gonna Save You for an OHCA?
SGEM#275: 10th Avenue Freeze Out – Therapeutic Hypothermia after Non-Shockable Cardiac Arrest

The American Heart Association promotes the "Chain-of-Survival". There are five steps in the Chain-of-Survival for OHCA:

Step One – Recognition and activation of the emergency response system
Step Two – Immediate high-quality cardiopulmonary resuscitation
Step Three – Rapid defibrillation
Step Four – Basic and advanced emergency medical services
Step Five – Advanced life support and post-arrest care

Bystander CPR and early defibrillation are key components of the out-of-hospital cardiac arrest chain of survival. Unfortunately, most patients don't receive these crucial interventions. Many people are trained in CPR but never use their skills, because it is unlikely that they will happen to be in exactly the right place at the right time. They may be willing and able to help, but if the patient in need is one block over, they may never know about it. The advent of the smartphone with GPS capability means that we should be better able to direct individuals trained in basic life support (BLS) to those in need around them. We should also be able to use smartphones to more easily identify the closest AEDs. Over the last decade, numerous apps have been developed to do exactly that, but the impact of those apps on clinical outcomes is still unclear.

Clinical Question: Is the use of a smartphone app that can match trained responders to cardiac arrest victims and indicate the closest available AEDs associated with better clinical outcomes?

Reference: Derkenne et al. Mobile Smartphone Technology Is Associated With Out-of-hospital Cardiac Arrest Survival Improvement: The First Year "Greater Paris Fire Brigade" Experience. AEM Oct 2020.
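The paper describes the core matching idea (alert BLS-trained volunteers within 500 meters of a reported arrest) but not the app's code, so here is a minimal sketch of that kind of proximity filter. The data structure, function names and coordinates are our own hypothetical illustration, not the Staying Alive implementation.

```python
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

@dataclass
class Volunteer:
    name: str
    lat: float
    lon: float

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def volunteers_to_alert(arrest_lat, arrest_lon, volunteers, radius_m=500):
    """Return volunteers within radius_m of the reported arrest, closest first."""
    with_distance = [(haversine_m(arrest_lat, arrest_lon, v.lat, v.lon), v) for v in volunteers]
    return [v for d, v in sorted(with_distance, key=lambda pair: pair[0]) if d <= radius_m]

# Hypothetical volunteers near a hypothetical arrest in central Paris
pool = [Volunteer("A", 48.8570, 2.3525), Volunteer("B", 48.8700, 2.3600)]
print([v.name for v in volunteers_to_alert(48.8566, 2.3522, pool)])  # ['A'] -- B is more than 500 m away
```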
Population: Cardiac arrests from a single emergency medical service (EMS) agency in Paris, France that were called through the central dispatch center, occurred while the chief dispatcher was available to participate, occurred in a public area, and had no obvious environmental danger.
Intervention: Alerts were sent through the Staying Alive app to volunteers trained in BLS who were within 500 meters of the reported cardiac arrest. The intervention group is the group of patients for whom someone responded to the alert and provided BLS treatment.
Comparison: The control group consisted of patients in whom no volunteer was within 500 meters at the time of the arrest, for whom no volunteer responded to the alert, or for whom the volunteer responded to the alert but did not perform BLS.
Outcomes: Return of spontaneous circulation (ROSC) upon hospital admission, survival outcomes upon hospital discharge and the impact of first responders (commonly referred to as "Bons Samaritains" [BS]) on survival outcomes.

Dr. Clement Derkenne

This is an SGEMHOP episode, which means we usually have the lead author on the show. Dr. Clement Derkenne is an emergency physician in the Emergency Medical Department, Paris Fire Brigade, Clamart, France. He did not feel comfortable doing a podcast in English, which we completely understand.

Authors' Conclusions: "We report for the first time that mobile smartphone technology was associated with OHCA survival through accelerated initiation of efficient cardiopulmonary resuscitation by first responders in a large urban area."

Quality Checklist for Observational Study:
Did the study address a clearly focused issue? Yes
Did the authors use an appropriate method to answer their question? Unsure
Was the cohort recruited in an acceptable way?
Was the exposure accurately measured to minimize bias? Yes
Was the outcome accurately measured to minimize bias? Yes and Unsure
Have the authors identified all-important confounding factors? Unsure
Was the follow up of subjects complete enough? Yes
How precise are the results? Moderate
Do you believe the results? Probably
Can the results be applied to the local population? Unsure
Do the results of this study fit with other available evidence? Unsure

Key Results: They recorded 4,107 OHCA in 2018. The mean age was in the mid-50s, ~75% were male, 91% were medical cardiac arrests and most arrests took place outside the home. The Staying Alive app was activated 366 times (9.8% of the total arrests). There were 46 patients in the intervention group (24 received CPR only, 18 AED only and 4 both) and 320 in the control group (97 cases where no volunteer responded to the notification, and 226 who responded to the notification but either couldn't locate the patient or failed to start BLS).

Getting treatment as a result of the Staying Alive app was associated with more ROSC and more survival to hospital discharge.
ROSC: 48% SA vs. 23% control, p<0.001
Survival to Hospital Discharge: 35% SA vs. 16% control, p=0.004
Adjusted Odds Ratio = 5.9 (95% CI; 2.1 to 16.5), p<0.001

1. External Validity: This study looked at the large urban area of Paris, France. It is unclear if this would translate to smaller urban centres or rural communities.

2. Control Group: In the conclusion for this paper, the authors say that smartphone technology is associated with out-of-hospital cardiac arrest survival. However, they didn't compare a group of patients who had smartphone technology available to a group of patients who didn't have such technology.
What they actually compared is a group of patients who got treatment – CPR and/or AED use – to a group of patients who didn't get treatment, even though the app was activated. I think this data only shows us that there is an association between CPR and AED use and survival – but we already knew that. In order to see an association with the app, we need a control group who didn't have the app available – maybe a different city that isn't using an app, or maybe a different time period, like historical controls in a before-and-after study. But as it stands, I don't think the control group tells us anything about the app itself.

3. Excluded Patients: More than 90% (3,737/4,107) of the OHCA patients were not included in this study. There were many differences between those included and those not included. It would be interesting to know what the outcomes were for this group and compare them to the intervention and control groups.

4. Primary Outcome: When critically appraising studies, it is very important to know the primary outcome in order to interpret the reported statistics. The authors looked at a number of very important outcomes, but we didn't see a primary outcome explicitly reported in the manuscript.

5. All Patient-Oriented Outcomes: We have seen many studies that have a primary outcome of ROSC, admission to hospital, or survival to hospital discharge. A better patient-oriented outcome (POO) is survival to hospital discharge with good neurologic function.

6. Generalizability and Cost Effectiveness: Out of 4,107 arrests, only 46 patients received treatment through the app. This very small number could result in selection bias that would affect the generalizability of these results. Further, the fact that the app only resulted in treatment for a small number of cases may indicate that the costs of the app and training might overshadow its benefits.

7. Confounders: This is observational data, so we are limited to finding associations. There were many differences between patients with OHCA where the app was activated and patients with OHCA where the app was not activated. There was another dichotomy between when a volunteer responded and when they did not respond. We wonder what factors might have influenced whether a patient ended up in the control or intervention group. For example, people might be less likely to respond to an arrest in a poorer area of town, but patients in a poorer area might have worse outcomes, confounding these results.

8. Harms and Unintended Consequences: These apps make a lot of intrinsic sense. However, nothing in life is free. If the app is going to be used at scale, there will be some cost in development, advertising, and training. The use of an app could also take attention away from other important interventions,
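To make the comparison in nerdy point #2 concrete, here is a minimal sketch of how an unadjusted odds ratio for ROSC would be calculated from a 2x2 table. The counts are approximated from the rounded percentages reported above (48% of 46 and 23% of 320), so the result will not match the adjusted odds ratio of 5.9 the authors report after modelling; it only illustrates the arithmetic behind the association between receiving bystander treatment and ROSC.

```python
from math import exp, log, sqrt

def crude_or_with_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Unadjusted odds ratio (group A vs. group B) with a Wald 95% confidence interval."""
    a, b = events_a, total_a - events_a  # group A: outcome yes / no
    c, d = events_b, total_b - events_b  # group B: outcome yes / no
    odds_ratio = (a / b) / (c / d)
    se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = exp(log(odds_ratio) - z * se_log_or)
    upper = exp(log(odds_ratio) + z * se_log_or)
    return odds_ratio, (lower, upper)

# Counts approximated from the reported proportions: ~22/46 ROSC with treatment vs. ~74/320 without
or_, ci = crude_or_with_ci(22, 46, 74, 320)
print(f"crude OR = {or_:.1f}, 95% CI {ci[0]:.1f} to {ci[1]:.1f}")  # roughly 3.0 (1.6 to 5.7)
```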
Oct 21, 2020 • 48min

SGEM Xtra: How to Think, Not What to Think

Date: October 21st, 2020

This is an SGEM Xtra episode. I had the honour of presenting at the Department of Family Medicine's Grand Rounds at the Schulich School of Medicine and Dentistry. The title of the talk was: How to think, not what to think. The presentation is available to watch on YouTube, listen to on iTunes, and all the slides can be downloaded from this LINK.

Five Objectives:
Discuss what science is
Talk about who has the burden of proof
Discuss evidence-based medicine (EBM), its limitations and alternatives
Provide a five-step approach to critical appraisal
Briefly talk about COVID-19 and the importance of EBM

What is Science? It is the most reliable method for exploring the natural world. There are a number of qualities of science: iterative, falsifiable, self-correcting and proportional. What science isn't is "certain". We can have confidence around a point estimate of an observed effect size, and our confidence should be in part proportional to the strength of the evidence. Science also does not make "truth" claims. Scientists do make mistakes, are flawed and are susceptible to cognitive biases.

Physicians took on the image of a scientist by co-opting the white coat. Traditionally, scientists wore beige and physicians wore black to signify the somber nature of their work (like the clergy). Then along came the germ theory of disease and other scientific knowledge. It was the Flexner Report in 1910 that fundamentally changed medical education and improved standards. You could get a medical degree in only one year before the Flexner Report. The white coat became a symbol of scientific rigour separating physicians from "snake oil salesmen". Many medical schools still have white coat ceremonies. However, only 1 in 8 physicians still report wearing a white lab coat today (Globe and Mail).

Science is Usually Iterative: Sometimes science takes giant leaps forward, but usually it takes baby steps. You have probably heard the phrase "standing on the shoulders of giants". In Greek mythology, the blind giant Orion carried his servant Cedalion on his shoulders to act as the giant's eyes. The more familiar expression is attributed to Sir Isaac Newton: "If I have seen further it is by standing on the shoulders of Giants." It has been suggested that Newton may have been throwing shade at Robert Hooke. Hooke was the first Curator of Experiments at the Royal Society in England. Hooke was described as being a small man and not very attractive. The rivalry between Newton and Hooke is well documented. The comment about seeing further by standing on the shoulders of giants was thought to be a dig at Hooke's short stature. However, this seems to be gossip and has not been proven.

Science is Falsifiable: If a claim is not falsifiable, it is outside the realm/dominion of science. This philosophy of science was put forth by Karl Popper in 1934. A great example of falsifiability is the claim that all swans are white. All it takes is one black swan to falsify the claim.

Science and Proportionality: The evidence required to accept a claim should be in part proportional to the claim itself. The classic example was given by the famous scientist Carl Sagan (astronomer, astrophysicist and science communicator), who did the TV series Cosmos and wrote a number of popular science books (The Dragons of Eden). Sagan made the claim that there was a "fire-breathing dragon that lives in his garage". How much evidence would it take for you to accept the claim about the dragon?
His word, pictures, videos, bones, other biological evidence – how about knowing of any other dragons, or dragons that breathe fire? Compare that to if I said we just got a new puppy and it's in the garage. You would probably take my word for it. There is nothing extraordinary about the claim. Most of you are familiar with puppies and have had experience with one at some point in your life. So the quality of evidence needed to convince you of something should be in part proportional to the claim being asserted. This is summarized in the famous quote from Carl Sagan that "extraordinary claims require extraordinary evidence".

Science is Self-Correcting: Because science is iterative and falsifiable, it is also self-correcting. Science gets updated. We hopefully learn and get closer to the "truth" over time. Medical reversal is a thing, and there is a great book by Drs. Prasad and Cifu on this issue called Ending Medical Reversal: Improving Outcomes, Saving Lives.

Burden of Proof: Those making the claim have the burden of proof. It is called a burden because it is hard, not because it is easy. We start with the null hypothesis (no superiority). Evidence is presented to convince us to reject the null and accept that there is superiority to the claim. If the evidence is convincing, we should reject the null. If the evidence is not convincing, we need to accept the null hypothesis. It is a logical fallacy to shift the burden of proof onto those who say they do not accept the claim. They do not have to prove something wrong; rather, they are simply not convinced that the claim is valid/"true", and this is an important distinction in epistemology.

Real World Example: Probiotics have been promoted for acute gastroenteritis (AGE) in children. Previous work in this area has been described as being "underpowered or had methodology problems related to the trial design and choice of appropriate end points." Schnadower et al. did a randomized controlled trial (RCT) of Lactobacillus rhamnosus GG vs. placebo for AGE in children (NEJM 2018). They included children 3 months to 4 years of age with gastroenteritis. The trial enrolled 971 children who took either the probiotic twice a day for five days or placebo. The results showed no statistical difference between the two groups for their primary outcome. We covered this RCT on SGEM#254: Probiotics for Pediatric Gastroenteritis.

Can we say probiotics don't work? No, that would shift the burden of proof. However, without sufficient evidence of superiority we should accept the null hypothesis. This study was limited to the probiotic tested in the trial. However, Freedman et al, in the same 2018 NEJM edition, had a similar trial looking at L. rhamnosus and L. helveticus and found the same thing (no superiority of probiotics vs placebo). It should be noted that there is some weak evidence for probiotic efficacy in antibiotic-associated diarrhea (Goldenberg et al 2015). The bottom line is that probiotics cannot be routinely recommended at this time for acute gastroenteritis in children (SGEM#254).
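As a worked illustration of this burden-of-proof logic (not the trial's actual data), here is a minimal two-proportion test. The event counts are hypothetical and chosen only so the arm sizes roughly match the 971 children enrolled; the point is that a non-significant result means the evidence failed to convince us of superiority, not that the probiotic has been proven useless.

```python
from math import erf, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions using a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - erf(abs(z) / sqrt(2))  # two-sided p-value under the normal approximation
    return p1 - p2, z, p_value

# Hypothetical event counts; only the total (485 + 486 = 971) mirrors the trial's enrollment
diff, z, p = two_proportion_z_test(55, 485, 60, 486)
print(f"difference = {diff:.3f}, z = {z:.2f}, p = {p:.2f}")  # p well above 0.05: no convincing evidence of superiority
```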
Evidence-Based Medicine (EBM): This was defined by Dr. David Sackett over 20 years ago (Sackett et al BMJ 1996). He defined EBM as "The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients." I really like this definition, and the only tweak I would have added would be to include the word "shared". The definition of EBM can be visually displayed as a Venn diagram. There are three components: the literature, our clinical judgement, and the patient's values/preferences. Many people make the mistake of thinking that EBM is just about the scientific literature. This is not true. You need to know about the relevant scientific information. The literature should inform our care but not dictate our care. Clinical judgement is very important. Sometimes you will have lots of experience and other times you may have very limited experience. The third component of EBM is the patient. We need to know what they value and prefer, and the easiest way to do this is to ask the patient. It starts with patient care and it ends with patient care. We all want patients to get the best care, based on the best evidence.

Levels of Evidence: There is a hierarchy to the evidence, and we want to use the best evidence to inform our patient care. The levels of evidence are usually described using a pyramid. The lowest level is expert opinion, the middle of the hierarchy is the randomized controlled trial, and the top is the systematic review. The systematic review +/- meta-analysis sits at the top of the EBM levels-of-evidence pyramid. However, we need to watch out for garbage in, garbage out (GIGO). If you take a number of crappy little studies (CLS), mash them all up in a meat grinder, and spit out a point estimate to the 5th decimal place with some impressive p-value, the result is an illusion of certainty when certainty does not exist (a toy example of this pooling arithmetic appears below).

EBM Limitations:
Harm and the parachutes (Smith and Pell BMJ 2003, Hayes et al CMAJ 2018, Yeh et al BMJ 2018)
Most published research findings are false (Ioannidis PLoS 2005)
Guidelines are just cookbook medicine
Good evidence is ignored
Too busy for EBM

Five Alternatives to EBM: This was adapted from a paper by Isaacs and Fitzgerald BMJ 1999. To paraphrase Sir Winston Churchill, EBM is the worst form of medicine except for all the others that have been tried.

Eminence Based Medicine – The more senior the colleague, the less importance he or she placed on the need for anything as mundane as evidence. Experience, it seems, is worth any amount of evidence. These are the senior physicians on staff who make the "same mistakes with increasing confidence over an impressive number of years."

Vehemence Based Medicine – The substitution of volume for evidence as an effective technique for browbeating your colleagues and for convincing relatives of your ability. The quality of the evidence is more important than the quantity of evidence.

Eloquence Based Medicine – This is the physician with the year-round suntan, Armani suit, pocket handkerchief and a tongue that is as silky smooth as his silk tie. Sartorial and verbal eloquence should be no substitute for high-quality evidence.
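Here is that toy pooling example: a minimal inverse-variance (fixed-effect) meta-analysis of five entirely hypothetical crappy little studies. The effect estimates and standard errors are made up; the point is only that pooling produces a single precise-looking point estimate even when every input is small and potentially biased.

```python
from math import sqrt

def fixed_effect_pool(studies):
    """Inverse-variance (fixed-effect) pooled estimate and standard error.

    studies is a list of (effect_estimate, standard_error) pairs.
    """
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))
    return pooled, pooled_se

# Five hypothetical crappy little studies: noisy effect estimates with wide standard errors
cls = [(0.40, 0.30), (0.10, 0.35), (0.55, 0.40), (-0.05, 0.30), (0.35, 0.45)]
pooled, se = fixed_effect_pool(cls)
print(f"pooled effect = {pooled:.5f} +/- {1.96 * se:.2f}")
# A point estimate quoted to five decimal places looks precise, but pooling cannot
# remove the bias baked into each small study: garbage in, garbage out.
```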
Oct 17, 2020 • 28min

SGEM#305: Somebody Get Me A Doctor – But Do I Need TXA by EMS for a TBI?

Date: October 14th, 2020

Guest Skeptic: Dr. Salim Rezaie is a community emergency physician at Greater San Antonio Emergency Physicians (GSEP), where he is the director of clinical education. Salim is probably better known as the creator and founder of the wonderful knowledge translation project called REBEL EM. It is a free critical appraisal blog and podcast that tries to cut down the knowledge translation gap from research to bedside clinical practice.

Reference: Rowell et al. Effect of Out-of-Hospital Tranexamic Acid vs Placebo on 6-Month Functional Neurologic Outcomes in Patients With Moderate or Severe Traumatic Brain Injury. JAMA 2020.

Case: A 42-year-old helmeted bicycle rider is involved in an accident where he hits his head on the ground. At the time of emergency medical services (EMS) arrival, the patient is alert but seems a bit confused. The EMS assessment takes place within one hour of the injury, and his Glasgow Coma Scale (GCS) score is 12. Vital signs show a slight tachycardia but are otherwise normal. Pupils are equal and reactive, and he doesn't appear to have any other traumatic injuries or focal neurologic deficits. Other injuries appear minimal, with some abrasions from the fall.

Background: The CRASH-2 trial, published in 2010, showed a 1.5% mortality benefit (NNT 67) for patients with traumatic hemorrhage who received tranexamic acid (TXA) compared to placebo. Dr. Anand Swaminathan and I covered that classic paper on SGEM#80. TXA has become standard practice in many settings as a result of this data. However, patients with significant head injury were excluded from this study, so the effect of TXA in this group was unclear.

Fast forward to October 2019, when CRASH-3 was published. This large, very well-done randomized placebo-controlled trial examined the use of TXA in patients with traumatic brain injuries (TBIs) who had a GCS score of 12 or lower or any intracranial bleed on CT scan, no extracranial bleeding, and were treated within 3 hours of injury. The authors reported no statistical superiority of TXA compared to placebo for the primary outcome of head injury-related deaths within 28 days. We reviewed that article, published in the Lancet, in SGEM#270.

Subgroup analysis did demonstrate that certain patients (GCS 9 to 15 and ICH on baseline CT) showed a mortality benefit with TXA. While very interesting and potentially clinically significant, we need to be careful not to over-interpret this subgroup analysis. We did express concern over the possibility that this subgroup would be highlighted and "spun". Unfortunately, that did happen, with a subsequent media blitz and a misleading infographic. Further data are clearly needed to elucidate the role of TXA in patients with TBI.

Clinical Question: Does pre-hospital administration of TXA to patients with moderate or severe traumatic brain injury improve neurologic outcomes at 6 months?

Reference: Rowell et al. Effect of Out-of-Hospital Tranexamic Acid vs Placebo on 6-Month Functional Neurologic Outcomes in Patients With Moderate or Severe Traumatic Brain Injury. JAMA 2020.

Population: Patients 15 years of age or older with moderate or severe blunt or penetrating TBI. Moderate to severe TBI was defined as a GCS of 3 to 12, at least one reactive pupil, systolic blood pressure ≥90 mmHg prior to randomization, able to receive the intervention or placebo within two hours of injury, and destination to a participating trauma center.
Exclusions: Prehospital GCS of 3 with no reactive pupil, start of study drug bolus dose greater than two hours from injury, unknown time of injury, clinical suspicion by EMS of seizure activity, acute MI or stroke, known history of seizures, thromboembolic disorders or renal dialysis, CPR by EMS prior to randomization, burns >20% total body surface area, suspected or known prisoners, suspected or known pregnancy, prehospital TXA or other pro-coagulant drug given prior to randomization, or subjects who have activated the "opt-out" process.

Interventions: They had two intervention groups. The Bolus Maintenance Group received an out-of-hospital TXA 1g intravenous (IV) bolus and an in-hospital TXA 1g IV 8-hour infusion. The Bolus Only Group received an out-of-hospital TXA 2g IV bolus and an in-hospital placebo 8-hour IV infusion.

Comparison: The Placebo Group received an out-of-hospital saline IV bolus and an in-hospital saline 8-hour infusion.

Outcomes:
Primary Outcome: Favorable neurologic function at 6 months (defined as a Glasgow Outcome Scale-Extended score >4, which is considered moderate disability or good recovery)
Secondary Outcomes: There were 18 secondary endpoints, of which 5 had statistical analyses reported in this trial:
28-day mortality
6-month Disability Rating Scale score (0 equals no disability and 30 equals death)
Progression of intracranial hemorrhage (defined as a >33% increase in the combined volume of hemorrhage)
Incidence of seizures
Incidence of thromboembolic events

Authors' Conclusions: "Among patients with moderate to severe TBI, out-of-hospital tranexamic acid administration within 2 hours of injury compared with placebo did not significantly improve 6-month neurologic outcome as measured by the Glasgow Outcome Scale-Extended."

Quality Checklist for Randomized Clinical Trials:
The study population included or focused on those in the emergency department. Yes and No
The patients were adequately randomized. Yes
The randomization process was concealed. Yes
The patients were analyzed in the groups to which they were randomized. No
The study patients were recruited consecutively (i.e. no selection bias). Unsure
The patients in both groups were similar with respect to prognostic factors. Yes
All participants (patients, clinicians, outcome assessors) were unaware of group allocation. Yes
All groups were treated equally except for the intervention. Yes
Follow-up was complete (i.e. at least 80% for both groups). Unsure
All patient-important outcomes were considered. Yes
The treatment effect was large enough and precise enough to be clinically significant. No

Key Results: They enrolled and randomized 1,063 patients, with 966 patients included in the primary analysis group. The mean age was in the late 30s, ¾ were male, the vast majority (>95%) of patients had blunt trauma, the mean out-of-hospital GCS score was 8, and the mean time from injury to out-of-hospital study drug administration was just over 40 minutes.

No statistical difference in favorable neurologic function at 6 months with TXA compared to placebo.

Primary Outcome: Favorable neurologic function at 6 months
65% TXA groups vs. 63% placebo group
Difference 3.5% (90% one-sided confidence limit for benefit, −0.9%); P = 0.16

Secondary Outcomes:
28-Day Mortality: No statistical difference between groups (14% vs. 17%)
Disability Rating Scale Score: No statistical difference between groups (6.8 vs. 7.6)
Progression of Intracranial Hemorrhage: No statistical difference between groups (16% vs. 20%)
Incidence of Seizures: Bolus Only (5%), Bolus Maintenance (2%) and placebo (2%). Not statistically significant.
Thrombotic Events: Bolus Only (9%), Bolus Maintenance (4%) and placebo (10%)

1. Survival Bias: This occurs when the selection process of a trial favors individuals who make it past a certain point in time and ignores the individuals who did not. In other words, patients who die shortly after the start of follow-up may not have had an opportunity to become exposed and will not have their results recorded. This introduces an artificial survival advantage associated with the exposed subjects regardless of treatment effectiveness. When survivor treatment selection is not addressed, ineffective treatment may appear to prolong survival or worsen adverse events. Severely injured patients who survived longer in the bolus-only group may have lived long enough to experience more complications, and only those who survive out to 6 months can have an outcome recorded.

2. Glasgow Coma Scale (GCS) Score: The GCS has limitations, including inter-rater reliability (IRR) issues. This could have contributed to the fact that 20% of patients originally given a score of <13 in the pre-hospital setting arrived at the hospital with a GCS of 13 or greater. Another concern is that the GCS is not a diagnostic tool. It cannot reliably discriminate between CNS-depressed states (intoxication, hypoglycemia, sedation, shock, seizure, etc.) and intracranial hemorrhages.

3. Few Intracranial Hemorrhages (ICHs): Only 58% of the patients in this trial ultimately had an ICH. This means many patients without ICH were given TXA and were included in the analysis. While practical, this could dilute any potential treatment benefit of TXA in patients with isolated TBI.

4. Loss to Follow-Up: Depending on how you define and calculate it, loss to follow-up was at least 15% and could have been as high as 23%. I usually get concerned when the loss to follow-up is larger than the difference in the primary outcome. This is more important when the authors are claiming superiority, which they are not doing in this case. The authors correctly do not conclude superiority, but that does not mean we should conclude that pre-hospital TXA does not work in patients with isolated TBI. This trial also does not support that TXA does work; however, given the limitations we have discussed, it is still a reasonable hypothesis that it may work and merits further testing.

5. Minimal Clinically Important Difference (Statistical vs Clinical Significance): Although the study was not powered to detect a difference in mortality (secondary outcome), in this trial we see a ≈3.0% difference in 28-day mortality (not statistically significant) which could be clinically important at a population level. If we assume an estimated 56,
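To show the kind of back-of-the-envelope arithmetic behind nerdy point #5, here is a minimal sketch using the 14% vs. 17% 28-day mortality figures reported above. The annual case count is a made-up placeholder rather than a figure from the paper, and the mortality difference was not statistically significant, so treat this as what-if arithmetic rather than a claim of benefit.

```python
def population_impact(mortality_control, mortality_treatment, annual_cases):
    """Absolute risk reduction, number needed to treat, and a rough annual population-level estimate."""
    arr = mortality_control - mortality_treatment
    nnt = 1 / arr if arr > 0 else float("inf")
    deaths_averted = arr * annual_cases
    return arr, nnt, deaths_averted

# 17% vs. 14% 28-day mortality from the secondary outcomes above;
# annual_cases is a hypothetical placeholder, not a figure from the paper.
arr, nnt, averted = population_impact(0.17, 0.14, annual_cases=50_000)
print(f"ARR = {arr:.1%}, NNT ~ {nnt:.0f}, ~ {averted:.0f} deaths potentially averted per year")
```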
