

Metascience 101 - EP5: "How and Why to Run an Experiment"
Oct 9, 2024
Join Professors Heidi Williams and Paul Niehaus, along with Emily Oehlsen from Open Philanthropy and Jim Savage from Schmidt Futures, as they unpack the art of experimentation in metascience. Discover how careful evaluation can inform policy-making and improve research quality. They tackle the complexities of impact evaluations, share insights from the RISE program on identifying talent through innovative methods, and discuss how evidence-based practices in philanthropy can drive real-world change.
Validating Talent Selection In RISE
- Jim Savage described RISE: a global talent search for 15–17 year olds with tens of thousands of applicants and 100 winners per year.
- Their validation experiments found that many common interview questions failed and that interviews were biased against poorer candidates, findings that changed RISE's design.
Define Objectives, Pick Metrics, Then Randomize
- Paul Niehaus advised clearly defining your objective and choosing good metrics before running an impact evaluation.
- Then use counterfactual reasoning (randomization or a credible quasi-experiment) to infer true impact, as in the sketch below.
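A minimal sketch of that counterfactual logic in Python: randomize units into treatment and control, then estimate the average effect as the difference in group means. The simulated data, group sizes, and effect size here are illustrative assumptions, not figures from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical program with 1,000 applicants; all numbers are simulated.
n = 1000
baseline = rng.normal(50, 10, size=n)   # outcome each unit would have anyway
true_effect = 2.0                       # assumed effect of the program

# Randomization makes the two groups comparable in expectation, so the
# control-group mean stands in for the treated group's counterfactual.
treated = rng.random(n) < 0.5
outcome = baseline + true_effect * treated + rng.normal(0, 5, size=n)

# Difference in means estimates the average treatment effect.
ate_hat = outcome[treated].mean() - outcome[~treated].mean()

# Rough 95% confidence interval via the standard error of the difference.
se = np.sqrt(outcome[treated].var(ddof=1) / treated.sum()
             + outcome[~treated].var(ddof=1) / (~treated).sum())
print(f"estimated effect: {ate_hat:.2f} +/- {1.96 * se:.2f}")
```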
Spend On Outcomes, Not Randomization
- Randomization itself is cheap; outcome measurement usually drives cost and time.
- Design experiments large enough to give decision-makers the confidence they need; a standard power calculation, like the sketch below, guards against underpowered trials.
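One way to size a trial before spending on outcome measurement is a textbook power calculation. The sketch below uses statsmodels; the effect size, significance level, and power targets are illustrative assumptions, not values discussed in the episode.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-arm sample size needed to detect a given effect
# in a two-arm trial. effect_size is in standard-deviation units
# (Cohen's d); these inputs are illustrative assumptions.
n_per_arm = TTestIndPower().solve_power(
    effect_size=0.2,  # a small effect
    alpha=0.05,       # significance level
    power=0.8,        # 80% chance of detecting the effect if it's real
)
print(f"required sample size per arm: {n_per_arm:.0f}")  # about 393 here
```

Smaller assumed effects or higher power targets push the required sample size up quickly, which is why underpowered trials are an easy mistake to make.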