
The Skeptics' Guide to Emergency Medicine SGEM#495: Tell Me Lies, Tell Me Sweet Little Lies – Reporting of Noninferiority Margins on ClinicalTrials.gov.
Date: December 4, 2025
Guest Skeptic: Dr. Jestin Carlson – Long-time listener, second-time guest.
Reference: Reinaud et al. Reporting of Noninferiority Margins on ClinicalTrials.gov: A Systematic Review. JAMA Netw Open. 2025
Case: You are working with a resident who asks you about a new thrombolytic they heard about on the SGEM for acute ischemic stroke. This new treatment was found not to be inferior to the existing thrombolytic, but they are not sure how the paper reached that conclusion. You start to discuss noninferiority margins when the resident asks you, “Are the noninferiority margins reported on ClinicalTrials.gov consistent with the final publications?”
Background: A non-inferiority (NI) trial asks whether a new strategy is “not unacceptably worse” than an established, effective strategy, meaning not worse by more than a pre-specified amount. The non-inferiority margin (Δ), or delta, is the largest loss of effectiveness we would tolerate in exchange for another advantage (lower cost, easier logistics, fewer adverse effects). Regulators and methods groups emphasize that Δ must be clinically justified, pre-specified, and not chosen after seeing the data. Noninferiority is then tested using a one-sided hypothesis procedure or, equivalently, by checking whether the confidence interval for the treatment difference stays within Δ.
For example, suppose a new medicine to treat hypertension lowers patients’ systolic blood pressure by 1 point more than the standard treatment but causes gastrointestinal (GI) upset in 50% of patients. That difference may be statistically significant, but clinically it does not result in a net benefit for patients since so many of them get GI upset. Ideally, the noninferiority margin should be set before the trial is conducted to minimize bias.
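To make the confidence-interval framing concrete, here is a minimal sketch in Python. It is not from the paper, and every number in it is invented for illustration: it checks whether the upper bound of a two-sided 95% CI for a risk difference stays below a hypothetical Δ of 4 percentage points, which is equivalent to a one-sided noninferiority test at α = 0.025.

```python
from math import sqrt

# Hypothetical noninferiority check on a risk difference (new minus standard),
# where higher event rates are worse. All numbers are invented for illustration.
DELTA = 0.04  # hypothetical pre-specified noninferiority margin: 4 percentage points

# Invented trial results
events_new, n_new = 90, 1000   # 9.0% event rate on the new treatment
events_std, n_std = 80, 1000   # 8.0% event rate on the standard treatment

p_new, p_std = events_new / n_new, events_std / n_std
diff = p_new - p_std           # observed risk difference (new - standard)

# Wald standard error of the risk difference and two-sided 95% CI
se = sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
lower, upper = diff - 1.96 * se, diff + 1.96 * se

print(f"Risk difference: {diff:.3f} (95% CI {lower:.3f} to {upper:.3f})")

# Noninferiority is concluded only if the entire CI stays below the margin,
# i.e., the upper bound does not cross DELTA.
if upper < DELTA:
    print("Noninferior within the pre-specified margin")
else:
    print("Noninferiority NOT demonstrated")
```

With these made-up numbers the upper bound of the CI (about 3.4 percentage points) sits below Δ, so noninferiority would be declared; if the upper bound crossed 4 points, it would not.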
Many modern ED trials rely on NI logic (TNK vs tPA for stroke, non-operative treatment of appendicitis, Simple Aspiration versus Drainage for Complete Pneumothorax, etc.). However, prior work suggested poor reporting of noninferiority margins, with reporting rates as low as 2.6% for studies published between 2012 and 2014. That was over 10 years ago...hopefully we have improved since then.
Clinical Question: What proportion of registered noninferiority randomized trials report the noninferiority margin at registration, and how consistent are margins between ClinicalTrials.gov and corresponding publications?
Reference: Reinaud et al. Reporting of Noninferiority Margins on ClinicalTrials.gov: A Systematic Review. JAMA Netw Open. 2025
Population: All registered non‑inferiority trials on ClinicalTrials.gov with primary completion 2010–2015 (Stage 1) and all first‑posted 2022–2023 (Stage 2).
Excluded: Nonrandomized trials, single-arm trials, phase 1–2/2–3 trials, diagnostic/screening trials, and trials where noninferiority was only a secondary outcome.
Exposure: Presence of a prespecified noninferiority margin reported on ClinicalTrials.gov (at registration / during enrollment / after primary completion / in posted results).
Comparison: Descriptive contrasts across timepoints and between the registry and corresponding publications (consistency).
Outcome:
Primary Outcome: Proportion reporting the noninferiority margin at registration on ClinicalTrials.gov.
Secondary Outcomes: Timing of first reporting (registration, during enrollment, after completion, or in posted results); proportion reporting margin in posted results; proportion reporting margin in the corresponding publication; justification of margin; consistency between registry and publication; reporting of primary analysis population and Type I Error.
Type of Study: A systematic review of registered randomized trials’ methods reporting.
Authors’ Conclusions: “Reporting of the noninferiority margin on ClinicalTrials.gov was low (3.0% in 2010–2015 sample, 9.2% in 2022-2023 sample). Because margins are central to design and interpretation, mandatory reporting of trial design and the noninferiority margin at registration would improve transparency and reliability of noninferiority trial results.”
Quality Checklist for Systematic Review:
Was the main question clearly stated? Yes
Was the search detailed and exhaustive? Yes
Were the inclusion criteria appropriate? Yes
Included studies sufficiently valid? Yes
Results similar from study to study? Yes
Any financial conflicts of interest? Authors do not report any financial conflicts of interest.
Results: In the 2010 to 2015 cohort (n=266), 60% were industry-funded; most evaluated drugs/biologics (~67%); parallel-arm designs predominated (94%); open-label trials were common (49%); adult-only studies accounted for 74%; and the median planned sample size was 304 (IQR 63 to 545). The 2022 to 2023 cohort (n=327) showed similar patterns, with more adult-only studies (83%) and a median planned sample size of 228 (IQR 50 to 406).
Key Result: Very few trials pre-specified a Δ at registration, the vast majority reported a Δ in their publication, and registry-to-publication consistency could only be evaluated in a handful of studies.
2010 to 2015 sample (n=266)
Only 8 trials (3%) reported the planned noninferiority margin at registration.
31 trials (11.7%) first reported a margin after registration (11 during enrollment; 20 after primary completion).
Of 132 trials with results posted on ClinicalTrials.gov, 79 (59.8%) reported the noninferiority margin in the posted results.
Corresponding publications were found for 208 trials (2010–2015 sample); 196/208 (94.2%) publications reported the noninferiority margin, and 86/196 (41.3%) justified it.
2022 to 2023 sample (n=327)
30 trials (9%) reported the margin at registration (a modest improvement but still low); only 6 of these justified the margin.
When margins were reported in both the registry and the publication, they were identical in all 5 trials that reported a margin both at registration and in the publication; margins in posted results and publications were consistent in all but 1 of 63 trials.
Registry Transparency: ClinicalTrials.gov lacks a mandatory, structured field for trial design type and the noninferiority margin. The authors suggest mandatory fields to prevent untraceable post-hoc margin changes. Building and maintaining trust in the scientific literature depends on ensuring we are honest and transparent in the scientific process. This includes transparency in registration.
Potential for Bias: Post-hoc or late specification of margins can bias conclusions. A margin widened after seeing the data can turn a noninferiority result from “fail” to “pass”, as shown in the sketch below. This would be like p-hacking or HARKing (hypothesizing after results are known).
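A purely hypothetical illustration of why this matters: in the sketch below the data are fixed (an invented upper 95% CI bound of 4.5 percentage points for the risk difference), and only the margin moves. With a pre-specified Δ of 4 points the trial fails to show noninferiority; quietly widening Δ to 5 points after the fact turns the same data into a “positive” result.

```python
# Hypothetical illustration of why post-hoc margin changes matter.
# The data (the CI upper bound) are fixed; only the margin moves.
ci_upper = 0.045   # invented upper bound of the 95% CI for the risk difference

for label, delta in [("pre-specified margin", 0.04), ("post-hoc widened margin", 0.05)]:
    verdict = "noninferior" if ci_upper < delta else "noninferiority NOT shown"
    print(f"{label} (Δ = {delta:.0%}): {verdict}")
```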
Overinterpreting Non-inferiority Trials: The goal of non-inferiority trials is exactly that...to determine whether one treatment is not inferior to another. A noninferior result does not, by itself, prove that the treatment is effective. In addition, even if a margin is pre-specified, it can be clinically meaningless. Readers still need to appraise whether the Δ represents something the patient would consider non-inferior.
Single Trial Registry: This study used a single registry (ClinicalTrials.gov) and did not search the study protocols. There are many registries where clinical trials can be registered, including the Australian New Zealand Clinical Trials Registry, the Chinese Clinical Trial Registry and EU Clinical Trials Register, to name a few. How these results generalize to other registries is unknown.
Ensuring Consistency with Reporting: FDA guidance and the CONSORT extension for noninferiority trials emphasize pre-specification and justification of margins. We should expect this in both the registration and the publication. In addition, journals, editors, and reviewers may insist that authors report not only the margin at the time of publication but also whether it was pre-specified at the time of registration.
Comment on the Authors’ Conclusion Compared to the SGEM Conclusion: We generally agree with the authors’ conclusions.
SGEM Bottom Line: Non-inferiority margins need to be pre-specified, justified, and clinically acceptable, and this new review shows we often can’t verify that from the trial registry alone.
Case Resolution: You tell the resident that when reading a noninferiority trial, they should check the publication for a justification of the margin, verify pre-specification in the trial registry or protocol when possible, and reflect on whether the margin is clinically relevant. Noninferiority claims should be treated cautiously if the margin was not prospectively registered.
Clinical Application: Be skeptical when reading the results of a non-inferiority trial, and cross-check them against what was reported on ClinicalTrials.gov if the trial was registered there.
What Do I Tell the Patient? N/A
Keener Kontest: Last week’s winner was Brad Roney. He knew that pain is defined by the International Association for the Study of Pain (IASP) as: “An unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage.”
Listen to the SGEM podcast for this week’s question. If you know, then send an email to thesgem@gmail.com with “keener” in the subject line. The first correct answer will receive a shoutout on the next episode.
Other FOAMed:
First10EM - You Don't Understand Non-Inferiority Trials (and neither do I)
Remember to be skeptical of anything you learn, even if you heard it on the Skeptics’ Guide to Emergency Medicine.
Additional Readings & Resources:
D'Agostino RB Sr, Massaro JM, Sullivan LM. Non-inferiority trials: design concepts and issues - the encounters of academic consultants in statistics. Stat Med. 2003 Jan 30;22(2):169-86. doi: 10.1002/sim.1425. PMID: 12520555.
Kaul S, Diamond GA. Good enough: a primer on the analysis and interpretation of noninferiority trials. Ann Intern Med. 2006 Jul 4;145(1):62-9.
