The prevalence of misleading content might not lead people to change their opinions more often. Instead, individuals might become less willing to alter their views, feeling incapable of discerning the truth behind complex arguments or manipulated content. If people come to recognise how easily they can be deceived by false information, they may stick with their existing beliefs rather than update them. Belief typically rests more on the reputation of sources than on what is technically possible to fake, with trust in reputable sources shaping opinions. In a world where trustworthy sources lose credibility, people may become indifferent or stubborn, resulting in minimal persuasion and a stagnant mindset.
The World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years, placing it ahead of war, environmental problems, and other threats from AI.
And the discussion around misinformation and disinformation has shifted to focus on how generative AI or a future super-persuasive AI might change the game, making it extremely hard to figure out what’s going on in the world — or alternatively, extremely easy to mislead people into believing convenient lies.
But this week’s guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust — one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people.
Links to learn more, summary, and full transcript.
In this interview, host Rob Wiblin and Hugo discuss:
- How our reasoning mechanisms evolved to facilitate beneficial communication, not blind gullibility.
- How Hugo makes sense of our apparent gullibility in many cases — like falling for financial scams, astrology, or bogus medical treatments, and voting for policies that aren’t actually beneficial for us.
- Rob and Hugo’s ideas about whether AI might make misinformation radically worse, and which mass persuasion approaches we should be most worried about.
- Why Hugo thinks our intuitions about who to trust are generally quite sound, even in today’s complex information environment.
- The distinction between intuitive beliefs that guide our actions versus reflective beliefs that don’t.
- Why fake news and conspiracy theories actually have less impact than most people assume.
- False beliefs that have persisted across cultures and generations — like bloodletting and vaccine hesitancy — and theories about why they endure.
- And plenty more.
Chapters:
- The view that humans are really gullible (00:04:26)
- The evolutionary argument against humans being gullible (00:07:46)
- Open vigilance (00:18:56)
- Intuitive and reflective beliefs (00:32:25)
- How people decide who to trust (00:41:15)
- Redefining beliefs (00:51:57)
- Bloodletting (01:00:38)
- Vaccine hesitancy and creationism (01:06:38)
- False beliefs without skin in the game (01:12:36)
- One consistent weakness in human judgement (01:22:57)
- Trying to explain harmful financial decisions (01:27:15)
- Astrology (01:40:40)
- Medical treatments that don’t work (01:45:47)
- Generative AI, LLMs, and persuasion (01:54:50)
- Ways AI could improve the information environment (02:29:59)
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore