A recent paper in the journal Judgment and Decision Making titled On the reception and detection of pseudo-profound bullshit explores empirical questions around a reader's ability to detect statements which may sound profound but are actually a collection of buzzwords that fail to convey adequate meaning or truth. These statements are distinct from lies and nonsense, as we discuss in the episode.
This paper proposes the Bullshit Receptivity scale (BSR) and empirically demonstrates that it correlates with existing metrics like the Cognitive Reflection Test, building confidence that it can serve as a useful, repeatable, empirical measure of a person's ability to distinguish pseudo-profound statements from genuinely profound ones. Additionally, the correlational results offer some insight into why individuals, depending on their other beliefs or cognitive measures, might find great profundity in these statements.
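As a rough illustration of the kind of analysis involved, here is a minimal sketch of correlating a receptivity scale with a reflection test. The scores below are simulated and the variable names are my own; this is not the paper's dataset or code.

```python
# Minimal sketch: correlating a receptivity scale with a reflection test.
# Scores are simulated purely for illustration, not data from the paper.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical per-participant scores: mean profundity rating of bullshit items (1-5)
# and number of correct Cognitive Reflection Test answers (0-3).
bsr_scores = rng.uniform(1, 5, size=100)
crt_scores = np.clip(np.round(3 - 0.5 * bsr_scores + rng.normal(0, 0.8, size=100)), 0, 3)

r, p = pearsonr(bsr_scores, crt_scores)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a negative r: higher receptivity, lower reflection
```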
The paper's lead author Gordon Pennycook joins me to discuss this study's results.
If you'd like some examples of pseudo-profound bullshit, you can randomly generate some based on Deepak Chopra's Twitter feed.
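To give a sense of how little machinery such statements require, here is a toy generator in the same spirit; the word lists and template are invented for illustration and have no connection to the actual generator sites.

```python
# Toy pseudo-profound sentence generator: random buzzwords slotted into a vague template.
import random

nouns = ["consciousness", "the cosmos", "intention", "energy", "wholeness", "the quantum field"]
verbs = ["transcends", "unfolds into", "is entangled with", "gives rise to", "illuminates"]
abstractions = ["hidden meaning", "infinite potential", "subtle truth", "boundless awareness"]

def pseudo_profound():
    # Assemble a grammatical but semantically empty sentence from random parts.
    return f"{random.choice(nouns).capitalize()} {random.choice(verbs)} {random.choice(abstractions)}."

for _ in range(3):
    print(pseudo_profound())
```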
To read other work from Gordon, check out his Google Scholar page and find him on Twitter via @GordonPennycook.
And just for fun, if you think you've dreamed up a Data Skeptic related pseudo-profound bullshit statement, tweet it with the hashtag #pseudoprofound. If I see an especially clever or humorous one, I might send you a free Data Skeptic sticker.