#18 How to ask good Research Questions and encourage Open Science, with Daniel Lakens
Jun 18, 2020
Daniel Lakens, an experimental psychologist at Eindhoven University of Technology, dives into the art of crafting effective research questions and experimental designs. He sheds light on the importance of open science and how it can reshape funding and publishing practices. The discussion also tackles the ongoing reproducibility crisis in psychology and the value of acknowledging flawed research. Lakens champions transparency and collaboration, advocating for better statistical education to enhance the credibility of scientific findings.
Clearly formulating research questions and understanding psychological concepts are crucial for designing robust experimental studies.
The reproducibility crisis highlights the need for rigorous replication studies to enhance the credibility and validity of psychological research.
Open science encourages transparency through data sharing, fostering collaboration and improving the overall quality of scientific research.
Deep dives
Designing Effective Experimental Studies
Creating a robust experimental study relies heavily on clearly formulating research questions and understanding the underlying psychological concepts. Experimental psychologists should be more concerned with whether their study designs can actually answer their questions than with merely following traditional methods. Daniel Lakens emphasizes the importance of justified sample sizes and power analyses for evaluating study designs and outcomes. This ensures that researchers not only collect data but also engage in thoughtful analysis that can generate actionable insights.
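As a concrete illustration of the kind of sample-size justification discussed here, the sketch below computes an approximate per-group sample size for a two-sample t-test using the standard normal-approximation formula. This is not from the episode; the effect size, alpha, and power values are arbitrary examples, and the normal approximation slightly underestimates the exact t-based answer for small samples.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size: float, alpha: float = 0.05,
                          power: float = 0.80) -> int:
    """Approximate n per group for a two-sided, two-sample t-test.

    Uses the normal approximation
        n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2,
    where d is Cohen's standardized effect size.
    """
    z = NormalDist()                      # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha / 2)    # critical value, two-sided test
    z_beta = z.inv_cdf(power)             # quantile for the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A "medium" effect (d = 0.5) at 80% power needs roughly 63 per group
print(sample_size_per_group(0.5))  # → 63
```

Calculations like this make the trade-off explicit: halving the expected effect size roughly quadruples the required sample, which is why justifying the effect size matters as much as running the calculation.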
The Reproducibility Crisis
The reproducibility crisis in psychology highlights the alarming inability of many studies to be replicated, raising concerns about the integrity of research findings. Daniel describes how this crisis has led to increased scrutiny of statistical practices and methodologies, as failures to reproduce original results often expose faulty experimental practices. This has prompted researchers to prioritize rigorous replication studies, enabling a clearer understanding of what constitutes reliable scientific knowledge. Consequently, addressing these challenges contributes to enhancing the credibility and validity of psychological research.
Embracing Open Science
Open science promotes transparency in research by advocating for the sharing of data, methodologies, and findings to allow for greater collaboration and public trust in scientific work. Daniel Lakens supports this initiative by prioritizing reviews of articles that adhere to open science principles, exemplifying how researchers can align their practices with broader scientific values. He notes that making data and materials accessible not only facilitates better scrutiny but also encourages innovation by allowing others to build upon previous work. This cultural shift within the scientific community is vital for improving research quality and fostering a collaborative environment.
Improving Reward Structures in Science
The current reward structures in academia often incentivize novel findings over replication and methodological rigor, creating a skewed appreciation for quality research. Daniel emphasizes the need for systemic change, wherein funding bodies support replication studies, thus reshaping researchers' motivations to engage in thorough re-evaluation of their and others' work. By acknowledging and addressing the disconnect between individual incentives and collective scientific progress, the field can foster an environment that values both new discoveries and the confirmation of existing knowledge. This change could help rectify detrimental practices and restore credibility to psychological research.
Collaboration and Specialization in Research
The importance of collaboration among researchers is underscored as a means to improve the quality and applicability of studies in psychology. Daniel points out the limitations of expecting individual researchers to possess a comprehensive skill set across different domains. By allowing researchers to specialize while promoting collaboration, projects can benefit from focused expertise, leading to more rigorous and effective studies. Consequently, fostering an environment that encourages sharing skills and resources can help address gaps in knowledge and mitigate research errors.
How do you design a good experimental study? How do you even know that you’re asking a good research question? Moreover, how can you align funding and publishing incentives with the principles of open science?
Let’s do another “big picture” episode to try and answer these questions! You know, the kind of episode I like to do from time to time, with people who are not from the Bayesian world, to see what good practices are out there. The first one, episode 15, was focused on programming and Python, thanks to Michael Kennedy.
In this one, you’ll meet Daniel Lakens. Daniel is an experimental psychologist at the Human-Technology Interaction group at Eindhoven University of Technology, in the Netherlands. He’s worked there since 2010, when he received his PhD in social psychology.
His research focuses on how to design and interpret studies, applied meta-statistics, and reward structures in science. Daniel loves teaching about research methods and about how to ask good research questions. He even crafted free Coursera courses about these topics.
A fervent advocate of open science, he prioritizes review requests for scholarly articles based on how closely the articles adhere to Open Science principles. On his blog, he describes himself as ‘the 20% Statistician’. Why? Well, he’ll tell you in the episode…
Our theme music is “Good Bayesian”, by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/!