

Everything Hertz
Dan Quintana
Methodology, scientific life, and bad language. Co-hosted by Dr. Dan Quintana (University of Oslo) and Dr. James Heathers (Cipher Skin)
Episodes

Jan 18, 2018 • 59min
55: The proposal to redefine clinical trials
In this episode, Dan and James discuss the US National Institutes of Health's new definition of a “clinical trial”, which comes into effect on the 25th of January.
Here’s the new definition: “A research study in which one or more human subjects are prospectively assigned to one or more interventions (which may include placebo or other control) to evaluate the effects of those interventions on health-related biomedical or behavioral outcomes”.
Over the course of this episode, they cover the pros and cons of this decision along with the implications for researchers and science in general.
Here are a few things they cover:
The traditional definition of a clinical trial
We go through James’ old work to determine if he’s been a clinical trialist all along
The lack of clarity surrounding the new definition
Why adopt a clinical trial approach when it has obvious weaknesses?
What do you actually have to do when running a clinical trial?
Will institutions also adopt this new definition, thus putting basic research through clinical trial IRBs?
What if this extra red tape actually improves science?
One argument against the proposal is that registering more studies on clinicaltrials.gov will confuse the public. We don’t buy that.
Clinical trial registrations generally miss the many nuances of study design
The new clinical trial definition will eliminate some of the ‘forking paths’ when analysing and reporting data
How will this new definition affect grant applications for early career researchers?
What happens to exploratory research?
NIH case studies of what may constitute a clinical trial
Links
NIH clinical trial definition https://grants.nih.gov/policy/clinical-trials/definition.htm
The NIH “clinical trial decision tree” https://grants.nih.gov/policy/clinical-trials/CT-decision-tree.pdf
NIH case studies of what may constitute a clinical trial https://grants.nih.gov/policy/clinical-trials/case-studies.htm#case1

Dec 15, 2017 • 55min
54: Cuckoo Science
In this episode, James sits in the guest chair as Dan interviews him about his recent work finding and exposing inconsistent results in the scientific literature.
Stuff they cover:
How James got into finding and exposing inconsistent results
The critiques of James’ critiques
How would James do things differently if he were to start over again?
Separating nefarious motives from sloppiness
The indirect victims of sloppy science
Grants that fund sloppy science take resources from responsible science projects
If people actually posted their data and methods, James’ job would be much easier
Registered reports improve the quality of science
If James could show one slide to every introductory psychology lecture, what would it say?
The one thing James believes that others think is crazy
What James has changed his mind about in the last year
Links
The Sokal hoax: https://en.wikipedia.org/wiki/Sokal_affair
James’ Psychological Science paper: http://journals.sagepub.com/doi/full/10.1177/0956797615572908
The @IamSciComm Tweetstorm on podcasting: https://twitter.com/iamscicomm/status/935851867661357057

Nov 17, 2017 • 1h 7min
53: Skin in the game
Dan and James discuss whether you need to have “skin in the game” to critique research.
Here's what else they cover in the episode:
Should scientists be required to communicate their science?
If your research is likely to be misinterpreted, try to get out in front of what’s going to be said
Will science communication just become another metric?
The distinction between “science communication” and “science media”
Who’s going to pay for all the science communicators we’ll need to communicate everyone’s science?
Dan and James mispronounce Dutch and German names and give a formal apology to the nation of The Netherlands
Outcome switching in clinical trials
Does having skin in the game guarantee expertise, or just wild biases?
James’ recent desk rejection from a journal editor
Dan’s method to invite manuscript reviewers as an Associate Editor
Links:
The science communication Twitter thread https://twitter.com/ocaptmycapt/status/927193779693645825
ERC comics https://www.erccomics.com
The “skin in the game” tweet https://twitter.com/paperbag1/status/914923706648055813
That study in Neuropsychopharmacology on an IL-6 receptor antibody to treat residual symptoms in schizophrenia https://www.nature.com/articles/npp2017258

Oct 20, 2017 • 1h 3min
52: Give p's a chance (with Daniel Lakens)
In this episode, Dan and James welcome back Daniel Lakens (Eindhoven University of Technology) to discuss his new paper on justifying your alpha level.
Highlights:
Why did Daniel write this paper?
Turning away from mindless statistics
Incremental vs. seismic change in statistical practice
The limitations to justifying your alpha
The benefits of registered reports
Daniel’s Coursera course
What’s better? Two pre-registered studies at .05 or one unregistered study at .005?
Testing at the start of semester vs. the end of semester
Thinking of Type 1 error control like speed limits for driving
Error rates mean different things between fields
What if we applied the “5 Sigma” threshold used in physics to the biobehavioral sciences?
What about abandoning statistical significance
How did Daniel co-ordinate a paper with 88 co-authors?
Using time zones to your benefit when collaborating
How can junior researchers contribute to these types of discussions?
Science by discussion, not manifesto
The dangers of blanket recommendations
How do you actually justify your alpha from scratch?
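To put the “5 Sigma” comparison above in concrete terms (our own illustration, not from the episode): a 5-sigma threshold corresponds to a one-tailed p-value of roughly 3 × 10⁻⁷, several orders of magnitude stricter than either .05 or .005. A minimal sketch using only the Python standard library:

```python
from math import erfc, sqrt

def sigma_to_p(sigma, two_tailed=False):
    """Convert a z-score (sigma) threshold to the corresponding p-value."""
    p = erfc(sigma / sqrt(2)) / 2  # one-tailed tail probability P(Z > sigma)
    return 2 * p if two_tailed else p

# Thresholds discussed in the episode, expressed as sigma levels:
for label, sigma in [("~1.64 sigma (p = .05, one-tailed)", 1.6449),
                     ("5 sigma (particle physics convention)", 5.0)]:
    print(f"{label}: p = {sigma_to_p(sigma):.2e}")
```

This makes it easy to see why a blanket 5-sigma rule would be impractical for typical biobehavioral sample sizes, which is part of what the episode discusses.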
Links
Daniel on Twitter - https://www.twitter.com/lakens
Daniel’s Coursera course - https://www.coursera.org/learn/statistical-inferences
Justify your alpha paper - https://psyarxiv.com/9s3y6
Abandon statistical significance - https://arxiv.org/abs/1709.07588
Using the costs of error rates to set your alpha - https://doi.org/10.1111/j.1461-0248.2004.00625.x
Special Guest: Daniel Lakens.

Oct 6, 2017 • 56min
51: Preprints (with Jessica Polka)
In this episode, Dan and James are joined by Jessica Polka, Director of ASAPbio, to chat about preprints.
Highlights:
What is ASAPbio?
Differences between the publication processes in the biological sciences vs. the biomedical sciences
Common concerns with preprints
Media embargoes
How peer review isn’t necessarily a mark of quality
Do preprints make it harder to curate information?
Specialty preprint servers vs. broad servers?
How well do you need to format your preprint?
How do you bring up preprints to lab heads and PIs?
An example of a good preprint experience from Dan
Using preprints for your grant applications
What Jessica has changed her mind about
The one article that Jessica thinks everyone should read
Links
Jessica's Twitter account - @jessicapolka
ASAPbio - http://asapbio.org & @asapbio_
Rescuing Biomedical Research conference 2014 resources - http://rescuingbiomedicalresearch.org/events/
Sherpa/Romeo - http://www.sherpa.ac.uk/romeo/index.php
PaleoArxiv - https://osf.io/preprints/paleorxiv
Principles for Open Scholarly Infrastructures paper - https://figshare.com/articles/Principles_for_Open_Scholarly_Infrastructures_v1/1314859
Special Guest: Jessica Polka.

Sep 14, 2017 • 1h 40min
50: Special 50th episode (LIVE)
Dan and James celebrate their 50th episode with a live recording! They cover a blog post that argues grad students shouldn’t be publishing, what’s expected of today’s postdocs, and the ‘tone’ debate in psychology.
BONUS: You can also watch the video of this episode on the Everything Hertz podcast channel (link below)
Other stuff they cover:
James offends a sociologist, as is his wont
The argument for why grad students shouldn’t publish
Gatekeepers controlling what’s being published
Editors that Google authors before sending papers out for review
Judging researchers on their institution’s location
James on networking
How do you challenge reviewers when they say you are "too junior"
The standards of Frontiers papers
Writing review papers for the wrong reasons
Why are there so many meta-analyses?
Pre-registering your meta-analysis
Registered reports vs. pre-registration
What’s expected of today’s postdocs
How many papers should you peer review?
How James tried to ward off review requests
Things that millennials are ruining
The role of humour in the tone debate
Links
Episode video: https://www.youtube.com/watch?v=pj3WsTiUuLo&t=3s
The “should grad students publish" article: https://www.insidehighered.com/news/2017/08/23/renewed-debate-over-whether-graduate-students-should-publish#.WaGAeN_v8jI.link
Prospero meta-analysis registration: https://www.crd.york.ac.uk/prospero/
Eiko Fried’s tweet on postdoc expectations: https://twitter.com/eikofried/status/902470702892290048
James’ publons profile: https://publons.com/author/1171358/james-aj-heathers#profile
JANE: http://jane.biosemantics.org
Anonymous PubPeer comments: https://pubpeer.com/publications/0E0DAEBEC6183646F18F4FAED03B1A#7

Jul 31, 2017 • 56min
49: War and p's
In this episode Dan and James discuss a forthcoming paper that's causing a bit of a stir by proposing that biobehavioral scientists should use a 0.005 p-value statistical significance threshold instead of 0.05.
Stuff they cover:
A summary of the paper and how they decided on 0.005.
Is raising the threshold the best way to improve reproducibility?
Is 0.005 too stringent?
Would this new threshold unfairly favour “super” labs?
If we keep shifting the number does any threshold really matter?
Dan and James’ first impressions of the paper
A crash course on Mediterranean taxation systems
What would a 0.005 threshold practically mean for researchers?
Links
The paper https://osf.io/mky9j/
ENIGMA consortium http://enigma.ini.usc.edu
Music credits: Lee Rosevere freemusicarchive.org/music/Lee_Rosevere/

Jul 21, 2017 • 54min
48: Breaking up with the impact factor (with Jason Hoyt)
Dan and James are joined by Jason Hoyt, who is the CEO and co-founder of PeerJ, an open access journal for the biological and medical sciences.
Here's some of what they cover:
PeerJ’s model and how it got started
What goes into running a journal
Impact factors vs. low-cost publishing
When the journal user experience is too good
Getting a quick reviewer turnaround
The need for scientists to change their practices (not publishers)
PeerJ’s membership model
Glamour journals
Future plans for PeerJ
Predatory journals
Researchers don’t want cheap journals, only impact factors
Links
PeerJ: https://peerj.com
The Phoenix project: https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business-ebook/dp/B00AZRBLHO
The Goal: https://www.amazon.com/Goal-Process-Ongoing-Improvement-ebook/dp/B002LHRM2O/ref=pd_sim_351_2?_encoding=UTF8&psc=1&refRID=EMTE1M9W2XW5Q24X4GE8
Music credits: Lee Rosevere freemusicarchive.org/music/Lee_Rosevere/
Special Guest: Jason Hoyt.

Jul 7, 2017 • 1h 9min
47: Truth bombs from a methodological freedom fighter (with Anne Scheel)
In this episode, Dan and James are joined by Anne Scheel (LMU Munich) to discuss open science advocacy.
Highlights:
How Anne became an open science advocate
Open science is better science
Methodological terrorists/freedom fighters
The time Anne stood up after a conference keynote and asked a question
Asking poor PhD students to pay for conference costs upfront and then reimbursing them 6 months later
Is it worth it for early career researchers to push open science practices?
How to begin with implementing open science practices
Power analysis should be normal practice; it shouldn’t be controversial
Anne’s going to start a podcast
The 100% CI: a long-form blog with four writers
The benefits of preprints and blogging
Science communication in English for non-native English speakers
Doing stuff that interests you vs. stuff that’s meant to advance your career
Twitter accounts of people/things we mentioned:
@dalejbarr - 2:10
@siminevazire - 2:45
@lakens - 2:45
@nicebread303 (Felix Schönbrodt)- 3:50
@annaveer - 21:40
@methodpodcast - 29:20
@the100ci - 30:40
@realscientists - 31:40
@upulie - 31:55
@fMRI_guy (Jens Foell) - 32:20
@realsci_DE (Real scientists Germany) - 32:30
@maltoesermalte, @_r_c_a, @dingding_peng (100% CI team) - 33:55
@stuartJRitchie - 65:05
Links
Early Career Researchers and publishing practices: http://onlinelibrary.wiley.com/doi/10.1002/leap.1102/full (paywalled)
“Pre-registration in social psychology—A discussion and suggested template”. Paywalled link: http://www.sciencedirect.com/science/article/pii/S0022103116301925, Preprint link: https://osf.io/preprints/psyarxiv/4frms/
The 100% CI: http://www.the100.ci
Music credits: Lee Rosevere freemusicarchive.org/music/Lee_Rosevere/
Special Guest: Anne Scheel.

Jun 23, 2017 • 1h 20min
46: Statistical literacy (with Andy Field)
In this episode, Dan and James are joined by Andy Field (University of Sussex), author of the “Discovering Statistics” textbook series, to chat about statistical literacy.
Highlights:
The story behind Andy’s new book
SPSS and Bayesian statistics
Andy explains why he thinks the biggest problem in science is statistical illiteracy
Researcher degrees of freedom and p-hacking
The story behind the first version of ‘Discovering Statistics’
How to improve your statistical literacy
Does peer review improve the statistics of papers
Researchers will draw different conclusions on the same dataset
The American Statistical Association’s statement on p-values
How has the teaching of statistics for psychology degrees changed over the years
Andy fact checks his own Wikipedia page
Andy’s thoughts on Bayesian statistics and how he applied it in a recent paper
The peer review of new statistical methods
Andy’s future textbook plans
The rudeness of mailing lists/discussion forums
What is something academia or stats-related that Andy believes that others think is crazy?
The one book that Andy recommends that everyone should read
We learn the crossover in James and Andy’s taste in metal bands
Links
Andy’s books: https://uk.sagepub.com/en-gb/eur/author/andy-field-0
The ‘PENIS of statistics’ lecture from Andy: https://www.youtube.com/watch?v=oe3_DeLC2JE
Daniel Lakens’ Coursera course: https://www.coursera.org/learn/statistical-inferences
The American Statistical Association’s statement on p-values: http://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108
The refereeing decision paper: https://osf.io/gvm2z/
RStan: https://cran.r-project.org/web/packages/rstan/index.html
Statistical rethinking book: https://www.crcpress.com/Statistical-Rethinking-A-Bayesian-Course-with-Examples-in-R-and-Stan/McElreath/p/book/9781482253443
Music credits: Lee Rosevere freemusicarchive.org/music/Lee_Rosevere/
Special Guest: Andy Field.


