Nuclear power triggers people's disgust mechanisms because radiation is invisible, much as we instinctively avoid things that can make us sick, like faeces or rotten food. The perception is that radiation acts like invisible germs, sickening anyone with even a small exposure. This false belief has been reinforced by past events such as the bombings of Hiroshima and Nagasaki, whose radiation survivors were wrongly perceived as contagious. Because radiation is harder to see and understand than particulate pollution, distrust and confusion surround nuclear energy, and it is difficult for the general public to trust its safety without a deeper understanding of the topic.
The World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years, placing it ahead of war, environmental problems, and other threats from AI.
And the discussion around misinformation and disinformation has shifted to focus on how generative AI, or a future super-persuasive AI, might change the game: making it extremely hard to figure out what is going on in the world, or alternatively, extremely easy to mislead people into believing convenient lies.
But this week’s guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust — one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people.
Links to learn more, summary, and full transcript.
In this interview, host Rob Wiblin and Hugo discuss:
- How our reasoning mechanisms evolved to facilitate beneficial communication, not blind gullibility.
- How Hugo makes sense of our apparent gullibility in many cases — like falling for financial scams, astrology, or bogus medical treatments, and voting for policies that aren’t actually beneficial for us.
- Rob and Hugo’s ideas about whether AI might make misinformation radically worse, and which mass persuasion approaches we should be most worried about.
- Why Hugo thinks our intuitions about who to trust are generally quite sound, even in today’s complex information environment.
- The distinction between intuitive beliefs that guide our actions versus reflective beliefs that don’t.
- Why fake news and conspiracy theories actually have less impact than most people assume.
- False beliefs that have persisted across cultures and generations — like bloodletting and vaccine hesitancy — and theories about why.
- And plenty more.
Chapters:
- The view that humans are really gullible (00:04:26)
- The evolutionary argument against humans being gullible (00:07:46)
- Open vigilance (00:18:56)
- Intuitive and reflective beliefs (00:32:25)
- How people decide who to trust (00:41:15)
- Redefining beliefs (00:51:57)
- Bloodletting (01:00:38)
- Vaccine hesitancy and creationism (01:06:38)
- False beliefs without skin in the game (01:12:36)
- One consistent weakness in human judgement (01:22:57)
- Trying to explain harmful financial decisions (01:27:15)
- Astrology (01:40:40)
- Medical treatments that don’t work (01:45:47)
- Generative AI, LLMs, and persuasion (01:54:50)
- Ways AI could improve the information environment (02:29:59)
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore