Importance hacking involves tricking peer reviewers into thinking that valueless or uninteresting findings are valuable or interesting. Researchers achieve this by presenting their results in a way that sounds significant, when in reality the findings are inconsequential or lack real value. One example is a paper on American politics which claimed that the preferences of average citizens had negligible influence on policy, while economic elites and interest groups held significant sway. A closer look at the study reveals that the model explained only about 7% of the variation in policy outcomes, meaning it said very little about what actually determines policy. Importance hacking is a significant problem that rarely gets addressed and can result in misleading, unremarkable research being published.
P-curve analysis is a technique for assessing the reliability of research findings by examining the distribution of p-values across a set of studies. In the absence of a real effect and of p-hacking, p-values should be spread roughly uniformly; a bulge of p-values just under 0.05 suggests potential p-hacking. This statistical tool allows researchers to scrutinize the p-value distributions of particular papers or researchers, providing insight into potential biases or questionable practices. Although p-curve analysis is not definitive, it offers a useful and innovative approach to identifying potential p-hacking and evaluating the reliability of research findings.
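As a concrete illustration, here is a minimal sketch of the idea in Python. The p-values are made up for the example, not drawn from any real set of studies: bin the statistically significant p-values and check whether they pile up near 0 (evidential value) or just under 0.05 (a possible sign of p-hacking).

```python
import numpy as np

# Hypothetical p-values from a set of studies; illustrative only.
p_values = np.array([0.003, 0.012, 0.021, 0.034, 0.041,
                     0.044, 0.046, 0.048, 0.049, 0.049])

# Bin the statistically significant p-values into 0.01-wide bins.
bins = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05]
counts, _ = np.histogram(p_values[p_values < 0.05], bins=bins)

for lo, hi, n in zip(bins[:-1], bins[1:], counts):
    print(f"p in [{lo:.2f}, {hi:.2f}): {'#' * n}")

# Rough heuristic: more mass just under 0.05 than near 0 suggests a
# left-skewed curve, consistent with p-hacking rather than a true effect.
if counts[-1] > counts[0]:
    print("Bulge near 0.05 -> possible p-hacking")
```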
Research suggests that a study's p-value predicts its replicability. Studies with p-values closer to 0 are more likely to be successfully replicated than those with higher p-values. For example, studies with p-values at or below 0.01 replicated approximately 72% of the time, while studies with p-values above 0.01 replicated only around 48% of the time. A smaller p-value thus corresponds to a more reliable result, highlighting the value of stringent significance thresholds for publishing research with stronger evidential support.
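To make that comparison concrete, here is a hedged sketch of how one might compute replication rates on either side of a significance threshold. The column names and numbers are hypothetical, not Spencer's data:

```python
import pandas as pd

# Hypothetical replication data: each row is an original study's p-value
# and whether its replication attempt succeeded.
df = pd.DataFrame({
    "p_original": [0.004, 0.008, 0.030, 0.020, 0.009, 0.045],
    "replicated": [True, True, False, True, True, False],
})

# Compare replication rates at or below vs. above the 0.01 threshold.
strict = df.loc[df["p_original"] <= 0.01, "replicated"].mean()
loose = df.loc[df["p_original"] > 0.01, "replicated"].mean()
print(f"p <= 0.01: {strict:.0%} replicated; p > 0.01: {loose:.0%} replicated")
```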
A study of the Daily Ritual habit-formation tool randomized participants into a control group and an intervention group. The intervention group used the tool, which included techniques such as listing the benefits of the habit, using reminders at home, seeking support from a friend, practicing mini habits, and reflecting on past successful habits. Over an eight-week period, the intervention group showed a statistically significant increase in how often they practiced their habit compared to the control group, an average increase of 0.61 days per week. While the effect size was not massive, the results suggest the tool can be a valuable aid in habit formation, especially given its low time and effort requirement.
Behavior change is a difficult process, as evidenced by research in the field. One large-scale study focused on promoting gym attendance among 61,000 participants found that behavior change interventions had limited success. The study involved multiple researchers developing 53 different interventions, implemented through text messages. Upon closer examination, it was determined that only a small number of the interventions showed any notable impact, emphasizing the challenges associated with behavior change. This highlights the need for effective tools and frameworks to support individuals in making lasting changes.
Ideologies can start with good ideas and attract well-intentioned individuals. However, as people join the group, there is pressure to demonstrate group membership and suppress doubts. This leads to an in-group versus out-group mentality, where outsiders are seen as bad and the in-group as good. And once false beliefs emerge within the ideology, there is a tendency to avoid looking at the world too closely, lest those beliefs be challenged. As a result, groups can do harm despite their initial good intentions. It is important to beware the dangers of groupthink and encourage diversity of perspectives to avoid these pitfalls.
The FIRE framework suggests when to rely on intuition versus deliberation. For Fast decisions, where quick reactions are necessary, trusting your gut is advised. For Irrelevant decisions with low stakes, intuition can guide choices without deep analysis. Repetitious decisions that draw on past experience can rely on intuition, provided feedback has allowed the brain's learning algorithm to update. Lastly, Evolutionary decisions, rooted in hard-coded instincts, can often be trusted. Understanding when to rely on intuition and when to engage in deliberate thinking is key to effective decision making.
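As a toy illustration (my own encoding of the framework, not something from the episode), the rule reduces to a simple disjunction: intuition is a reasonable guide if any of the four conditions holds.

```python
def trust_intuition(fast: bool, irrelevant: bool,
                    repetitious_with_feedback: bool, evolutionary: bool) -> bool:
    """FIRE framework as a simple rule: rely on intuition if the decision
    is Fast, Irrelevant (low stakes), Repetitious with useful feedback, or
    Evolutionary (covered by hard-coded instincts); otherwise deliberate."""
    return any([fast, irrelevant, repetitious_with_feedback, evolutionary])

# A novel, high-stakes, one-off decision fails all four tests,
# so deliberate rather than going with your gut.
print(trust_intuition(fast=False, irrelevant=False,
                      repetitious_with_feedback=False, evolutionary=False))  # False
```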
Valuism is a life philosophy that encourages individuals to identify and prioritize their intrinsic values. By reflecting on what they truly care about, people can align their actions and decisions with these values. This approach is in contrast to living by external or rigid philosophies that may not align with one's personal values. Valuism emphasizes the importance of self-reflection and introspection to understand one's own values, rather than blindly following societal expectations or imitating others. By pursuing what one intrinsically values, individuals can find greater motivation, clarity, and satisfaction in life.
Some individuals may attempt to adhere to external philosophies or moral frameworks, despite not believing in objective moral truth or feeling fully convinced of their validity. Valuism offers an alternative by not claiming objective moral truth or imposing external constraints on individuals. This flexible approach allows people to explore and prioritize their own intrinsic values without the need for a rigid moral system. Valuism encourages individuals to consider what they truly value and to use effective methods to increase the presence of these values in their lives. By adopting this approach, people can live a life aligned with their personal values, experience greater motivation, and avoid the psychological tension that can arise from conflicting beliefs and values.
Can you trust the things you read in published scientific research? Not really. About 40% of experiments in top social science journals don't get the same result if they're repeated.
Two key reasons are 'p-hacking' and 'publication bias'. P-hacking is when researchers run a lot of slightly different statistical tests until they find a way to make findings appear statistically significant when they're actually not, a problem first discussed over 50 years ago. And because journals are more likely to publish positive than negative results, you might be reading about the one time an experiment worked, while the 10 times it was run and got a 'null result' never saw the light of day. The resulting phenomenon of publication bias is one we've understood for 60 years.
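A toy simulation makes the p-hacking mechanism vivid (a generic illustration, not anything from the episode): test 20 outcome measures on pure noise, report only the best p-value, and "significant" results appear far more often than the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def best_p_value(n_outcomes: int = 20, n: int = 30) -> float:
    """Run t-tests on n_outcomes measures with no real group difference
    and keep only the smallest p-value, mimicking a p-hacker."""
    a = rng.normal(size=(n_outcomes, n))  # control group, pure noise
    b = rng.normal(size=(n_outcomes, n))  # treatment group, pure noise
    return min(stats.ttest_ind(x, y).pvalue for x, y in zip(a, b))

# Fraction of pure-noise "studies" that yield at least one p < 0.05;
# with 20 independent tests we expect about 1 - 0.95**20, roughly 64%.
rate = np.mean([best_p_value() < 0.05 for _ in range(1000)])
print(f"'Significant' findings from noise alone: {rate:.0%}")
```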
Today's repeat guest, social scientist and entrepreneur Spencer Greenberg, has followed these issues closely for years.
Links to learn more, summary and full transcript.
He recently checked whether p-values, an indicator of how likely a result was to occur by pure chance, could tell us how likely an outcome would be to recur if an experiment were repeated. From his sample of 325 replications of psychology studies, the answer seemed to be yes. According to Spencer, "when the original study's p-value was less than 0.01 about 72% replicated — not bad. On the other hand, when the p-value is greater than 0.01, only about 48% replicated. A pretty big difference."
To do his bit to help get these numbers up, Spencer has launched an effort to repeat almost every social science experiment published in the journals Nature and Science, and see if they find the same results.
But while progress is being made on some fronts, Spencer thinks there are other serious problems with published research that aren't yet fully appreciated. One of these Spencer calls 'importance hacking': passing off obvious or unimportant results as surprising and meaningful.
Spencer suspects that importance hacking of this kind causes a similar amount of damage to the issues mentioned above, like p-hacking and publication bias, but is much less discussed. His replication project tries to identify importance hacking by comparing how a paper’s findings are described in the abstract to what the experiment actually showed. But the cat-and-mouse game between academics and journal reviewers is fierce, and it's far from easy to stop people exaggerating the importance of their work.
In this wide-ranging conversation, Rob and Spencer discuss the above as well as:
• When you should and shouldn't use intuition to make decisions.
• How to properly model why some people succeed more than others.
• The difference between “Soldier Altruists” and “Scout Altruists.”
• A paper that tested dozens of methods for forming the habit of going to the gym, why Spencer thinks it was presented in a very misleading way, and what it really found.
• Whether a 15-minute intervention could make people more likely to sustain a new habit two months later.
• The most common way for groups with good intentions to turn bad and cause harm.
• And Spencer's approach to a fulfilling life and doing good, which he calls “Valuism.”
Here are two flashcard decks that might make it easier to fully integrate the most important ideas they talk about:
• The first covers 18 core concepts from the episode
• The second includes 16 definitions of unusual terms.
Producer: Keiran Harris
Audio mastering: Ben Cordell and Milo McGuire
Transcriptions: Katy Moore