In this discussion, Bob Kaplan, a collaborator on the 'Studying Studies' series, sheds light on how to navigate the complexities of scientific research. He and Peter dissect various study types, clinical trial phases, and biases that can distort findings. They emphasize the importance of differentiating relative and absolute risk and understanding statistical significance. Kaplan shares practical strategies for reading scientific papers critically and highlights the significance of rigorous study design, particularly in nutrition and drug trials.
ADVICE
Hypothesis-Driven Science
Frame your scientific inquiry as a testable hypothesis, ideally stated alongside a null hypothesis that the experiment can try to reject.
Design a rigorous experiment to test it, considering factors like randomization, blinding, and sample size.
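Of the factors listed above, sample size is the one that reduces to arithmetic. The sketch below is a generic back-of-the-envelope power calculation, not anything presented in the episode; the function name, the default significance level, and the default power are illustrative assumptions.

import math
from scipy.stats import norm

def sample_size_per_group(effect_size_d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group n to detect a standardized effect size d
    with a two-sided test, using the normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = norm.ppf(power)          # ~0.84 for 80% power
    n = 2 * (z_alpha + z_power) ** 2 / effect_size_d ** 2
    return math.ceil(n)

# A "medium" standardized effect (d = 0.5) needs roughly 63 participants per group;
# halving the detectable effect roughly quadruples the required sample.
print(sample_size_per_group(0.5))   # ~63
print(sample_size_per_group(0.25))  # ~252

The point of the sketch is the scaling: the smaller the true effect you want to detect, the faster the required sample size grows, which is why underpowered studies so often produce noise.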
INSIGHT
Study Types
Studies fall into three broad categories: observational studies, experimental studies, and reviews/analyses of other studies.
Observational studies observe without intervening, while experimental studies actively test cause and effect.
ANECDOTE
Case Report Example
Peter Attia's first published papers were individual case reports, like one about a melanoma patient with high calcium.
These reports, while not generalizable, can be valuable for future diagnoses.
This special episode is a rebroadcast of AMA #30, now made available to everyone, in which Peter and Bob Kaplan dive deep into all things related to studying studies to help one sift through the noise to find the signal. They define various types of studies, how a study progresses from idea to execution, and how to identify study strengths and limitations. They explain how clinical trials work, as well as biases and common pitfalls to watch out for. They dig into key factors that contribute to the rigor (or lack thereof) of an experiment, and they discuss how to measure effect size, differentiate relative risk from absolute risk, and what it really means when a study is statistically significant. Finally, Peter lays out his personal process when reading through scientific papers.
We discuss:
The ever-changing landscape of scientific literature [2:30];
The process for a study to progress from idea to design to execution [5:00];
Various types of studies and how they differ [8:00];
The different phases of clinical trials [19:45];
Observational studies and the potential for bias [27:00];
Experimental studies: randomization, blinding, and other factors that make or break a study [44:30];
Power, p-values, and statistical significance [56:45];
Measuring effect size: relative risk vs. absolute risk, hazard ratios, and “number needed to treat” [1:08:15] (see the worked example after this list);
How to interpret confidence intervals [1:18:00];
Why a study might be stopped before its completion [1:24:00];
Why only a fraction of studies are ever published and how to combat publication bias [1:32:00];
Why certain journals are more respected than others [1:41:00];
Peter’s process when reading a scientific paper [1:44:15].
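A few of the items above (relative vs. absolute risk, number needed to treat, confidence intervals) come down to simple arithmetic. The worked example below uses invented numbers purely for illustration; it is not data from any study discussed in the episode.

import math

# Invented trial numbers, for illustration only:
# 2 of 100 treated participants and 4 of 100 controls experience the event.
treated_events, treated_n = 2, 100
control_events, control_n = 4, 100

treated_risk = treated_events / treated_n              # 0.02
control_risk = control_events / control_n              # 0.04

relative_risk = treated_risk / control_risk            # 0.50, i.e. "risk cut in half"
absolute_risk_reduction = control_risk - treated_risk  # 0.02, i.e. 2 percentage points
number_needed_to_treat = 1 / absolute_risk_reduction   # 50 people treated to avoid one event

# 95% confidence interval for the risk difference (normal approximation)
standard_error = math.sqrt(treated_risk * (1 - treated_risk) / treated_n
                           + control_risk * (1 - control_risk) / control_n)
ci_low = absolute_risk_reduction - 1.96 * standard_error
ci_high = absolute_risk_reduction + 1.96 * standard_error

print(f"Relative risk:           {relative_risk:.2f}")
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")
print(f"95% CI for risk diff:    ({ci_low:.3f}, {ci_high:.3f})")  # spans zero here

The same invented data can be described as a 50% relative risk reduction or as a 2-percentage-point absolute reduction, and because the confidence interval for the difference spans zero at this sample size, the result would not be statistically significant: precisely the kind of distinction the episode walks through.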