
Amplifying Cognition

Jason Burton on LLMs and collective intelligence, algorithmic amplification, AI in deliberative processes, and decentralized networks (AC Ep68)

Oct 30, 2024
36:12

“When you get a response from a language model, it’s a bit like a response from a crowd of people, shaped by the preferences of countless individuals.”

– Jason Burton

About Jason Burton

Jason Burton is an assistant professor at Copenhagen Business School and an Alexander von Humboldt Research fellow at the Max Planck Institute for Human Development. His research applies computational methods to studying human behavior in a digital society, including reasoning in online information environments and collective intelligence.

LinkedIn: Jason William Burton

Google Scholar page: Jason Burton

University Profile (Copenhagen Business School): Jason Burton

What you will learn

  • Exploring AI’s role in collective intelligence
  • How large language models simulate crowd wisdom
  • Benefits and risks of AI-driven decision-making
  • Using language models to streamline collaboration
  • Addressing the homogenization of thought in AI
  • Civic tech and AI’s potential in public discourse
  • Future visions for AI in enhancing group intelligence

Episode Resources

Transcript

Ross: Jason, it is wonderful to have you on the show.

Jason Burton: Hi, Ross. Thanks for having me.

Ross: So you and 27 co-authors recently published in Nature Human Behaviour a wonderful article called How Large Language Models Can Reshape Collective Intelligence. I’d love to hear the backstory of how this paper came into being with 28 co-authors.

Jason: It started in May 2023. There was a research retreat at the Max Planck Institute for Human Development in Berlin, about six months or so after ChatGPT had really come into the world, at least for the average person. We convened a sort of working group around this idea of the intersection between language models and collective intelligence, something interesting that we thought was worth discussing.

At that time, there were just about five or six of us thinking about the different ways to view language models intersecting with collective intelligence: one where language models are a manifestation of collective intelligence, another where they can be a tool to help collective intelligence, and another where they could potentially threaten collective intelligence in some ways. On the back of that working group, we thought, well, there are lots of smart people out there working on similar things. Let’s try to get in touch with them and bring it all together into one paper. That’s how we arrived at the paper we have today.

Ross: So, a paper being the manifestation of collective intelligence itself?

Jason: Yes, absolutely.

Ross: You mentioned an interesting part of the paper—that LLMs themselves are an expression of collective intelligence, which I think not everyone realizes. How does that work? In what way are LLMs a type of collective intelligence?

Jason: Sure, yeah. The most obvious way to think about it is these are machine learning systems trained on massive amounts of text. Where are the companies developing language models getting this text? They’re looking to the internet, scraping the open web. And what’s on the open web? Natural language that encapsulates the collective knowledge of countless individuals.

By training a machine learning system to predict text based on this collective knowledge they’ve scraped from the internet, querying a language model becomes a kind of distilled form of crowdsourcing. When you get a response from a language model, you’re not necessarily getting a direct answer from a relational database. Instead, you’re getting a response that resembles the answer many people have given to similar queries.

On top of that, once you have the pre-trained language model, a common next step is training through a process called reinforcement learning from human feedback. This involves presenting different responses and asking users, “Did you like this response or that one better?” Over time, this system learns the preferences of many individuals. So, when you get a response from a language model, it’s shaped by the preferences of countless individuals, almost like a response from a crowd of people.
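As a rough, toy-level illustration of the point Jason makes here (not any lab’s actual RLHF pipeline), the sketch below tallies pairwise preferences from many annotators and shows how aggregated votes can shape which kind of response “wins”:

```python
from collections import Counter

# Toy illustration only: many annotators compare pairs of candidate
# responses, and the tallied preferences act like votes from a crowd
# shaping which style of answer comes to dominate.
pairwise_preferences = [
    ("concise answer", "rambling answer"),   # each tuple: (preferred, rejected)
    ("concise answer", "evasive answer"),
    ("polite answer", "rambling answer"),
    ("concise answer", "polite answer"),
    ("polite answer", "evasive answer"),
]

wins = Counter(preferred for preferred, _ in pairwise_preferences)
losses = Counter(rejected for _, rejected in pairwise_preferences)

# A crude "reward" per response style: the share of comparisons it won.
for style in set(wins) | set(losses):
    total = wins[style] + losses[style]
    print(style, round(wins[style] / total, 2))
```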

Ross: This speaks to the mechanisms of collective intelligence that you write about in the paper, like the mechanisms of aggregation. We have things like markets, voting, and other fairly crude mechanisms for aggregating human intelligence, insight, or perspective. This seems like a more complex and higher-order aggregation mechanism.

Jason: Yeah. I think at their core, language models are performing a form of compression, taking vast amounts of text and forming a statistical representation that can generate human-like text. So, in a way, a language model is just a new aggregation mechanism.

In an analog sense, maybe taking a vote or deliberating as a group leads to a decision. You could use a language model to summarize text and compress knowledge down into something more digestible.
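As a minimal sketch of that aggregation step, the snippet below compresses a group’s contributions into a digest. Note that query_llm is a hypothetical placeholder for whichever model API you would actually call:

```python
# Sketch of using a language model as an aggregation mechanism:
# compress many individual contributions into one digestible summary.
# `query_llm` is a hypothetical stand-in for your model provider's API.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider of choice")

def summarize_discussion(contributions: list[str]) -> str:
    joined = "\n".join(f"- {c}" for c in contributions)
    prompt = (
        "Summarize the following contributions into a short digest, "
        "noting points of agreement and disagreement:\n" + joined
    )
    return query_llm(prompt)
```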

Ross: One core part of your article discusses how LLMs help collective intelligence. We’ve had several mechanisms before, and LLMs can assist in existing aggregation structures. What are the primary ways that LLMs assist collective intelligence?

Jason: A lot of it boils down to the realization of how easy it is to query and generate text with a language model. It’s fast and frictionless. What can we do with that? One straightforward use is that, if you think of a language model as a kind of crowd in itself, you can use it to replace traditional crowdsourcing.

If you’re crowdsourcing ideas for a new product or marketing campaign, you could instead query a language model and get results almost instantaneously. Crowdsourcing taps into crowd diversity, producing high-quality, diverse responses. However, it requires setting up a crowd and a mechanism for querying, which can be time and resource-intensive. Now, we have these models at our fingertips, making it much quicker.
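Here is one way that “LLM as crowd” idea might look in practice, again with a hypothetical query_llm helper standing in for a real model API; the repeated high-temperature sampling is an assumption about how you might approximate crowd diversity:

```python
# Sketch of replacing a traditional crowdsourcing round with repeated,
# high-temperature queries to a language model. `query_llm` is a
# hypothetical helper standing in for whichever model API you use.
def query_llm(prompt: str, temperature: float = 1.0) -> str:
    raise NotImplementedError("wire this to your model provider of choice")

def crowdsource_ideas(brief: str, n_samples: int = 20) -> list[str]:
    ideas = []
    for _ in range(n_samples):
        # Higher temperature trades reliability for diversity,
        # mimicking the variety you'd hope for from a human crowd.
        idea = query_llm(f"Suggest one idea for: {brief}", temperature=1.2)
        ideas.append(idea.strip())
    # Crude de-duplication so near-identical suggestions don't dominate.
    return sorted(set(ideas))
```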

Another potential use that excites me is using language models to mediate deliberative processes. Deliberation is beneficial because individuals exchange information, allowing them to become more knowledgeable about a task. I have some knowledge, and you have some knowledge. By communicating, we learn from each other.

Ross: Yeah, and there have been some other researchers looking at nudges for encouraging participation or useful contributions. I think another point in your paper is around aggregating group discussions so that other groups or individuals can effectively take those in, allowing for scaled participation and discussion.

Jason: Yeah, absolutely. There’s a well-documented trade-off. Ideally, in a democratic sense, you want to involve everybody in every discussion, as everyone has knowledge to share. By bringing more people into the conversation, you establish a shared responsibility for the outcome. But as you add more people to the room, it becomes louder and noisier, making progress challenging.

If we can use technological tools, whether through traditional algorithms or language models, we could manage this trade-off. Our goal is to bring more people into the room while still producing high-quality outputs. That’s the ideal outcome.

Ross: So, one of the outcomes of bringing people together is decisions. There are other ways in which collective intelligence manifests, though. Are there specific ways, outside of what we’ve discussed, where LLMs can facilitate better decision-making?

Jason: Yes, much of my research focuses on collective estimations and predictions, where each individual submits a number, which can then be averaged across the group. This works in contexts with a concrete decision point or where there’s an objective answer, though we often debate subjective issues with no clear-cut answers.

In those cases, what we want is consensus rather than just an average estimate. For instance, we need a document that people with different perspectives can agree on for better coordination. One of my co-authors, Michiel Bakker, has shown that language models fine-tuned for consensus can be quite effective. These models don’t just repeat existing information but generate statements that identify points of agreement and disagreement—documents that diverse groups can look at and discuss further. That’s a direction I’d love to see more of.
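For the estimation case Jason mentions, aggregation really can be this simple:

```python
import statistics

# Wisdom-of-the-crowd aggregation in its simplest form: each person
# submits a numeric estimate and the group answer is a central tendency.
estimates = [920, 1050, 870, 1500, 990, 1010, 880]  # e.g. "how many jellybeans?"

print("mean:  ", statistics.mean(estimates))
print("median:", statistics.median(estimates))  # more robust to outliers
```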

Ross: That may be a little off track, but it brings up the idea of hierarchy. Implicitly, in collective intelligence, you assume there’s equal participation. However, in real-world decision-making, there’s typically a hierarchy—a board, an executive team, managers. You don’t want just one person making the decision, but you still want effective input from various groups. Can these collective intelligence structures apply to create more participatory decision-making within hierarchical structures?

Jason: Yeah, I think that’s one of the unique aspects of what’s called the civic technology space. There are platforms like Polis, for example, which level the playing field. In an analog room, certain power structures can discourage some people from speaking up while encouraging others to dominate, which might not be ideal because it undermines the benefits of diversity in a group.

Using language models to build more civic technology platforms can make it more attractive for everyday people to engage in deliberation. It could help reduce hierarchies where they may not be necessary.

Ross: Your paper also discusses some downsides of LLMs and collective intelligence. One concern people raise is that LLMs may homogenize perspectives, mashing everything together so that outlier views get lost. There’s also the risk that interacting too much with LLMs could homogenize individuals’ thinking. What are the potential downsides, and how might we mitigate them?

Jason: There’s definitely something to unpack there. One issue is that if everyone starts turning to the same language model, it’s like consulting the same person for every question. If we all rely on one source for answers, we risk homogenizing our beliefs.

Mitigating this effect is an open question. People may prompt models differently, leading to varied advice, but experiments have shown that even with different prompts, groups using language models often produce more homogeneous outputs than those that don’t. This is concerning, especially given that only a few tech companies currently dominate the model landscape. The limited diversity of big players and the bottlenecks around hardware and compute resources make this even more worrisome.
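One crude way to check for the homogenization Jason describes is to measure how similar a group’s outputs are to one another. The sketch below is an illustrative measure (not the one used in the experiments he cites), based on simple word-overlap similarity:

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two texts (0 = disjoint, 1 = identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def mean_pairwise_similarity(texts: list[str]) -> float:
    pairs = list(combinations(texts, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Toy comparison: if LLM-assisted answers overlap more than unaided ones,
# the group's output has become more homogeneous.
unaided = ["tax incentives for retrofits", "ban gas boilers", "expand cycle lanes"]
assisted = ["invest in renewable energy", "invest in renewable infrastructure",
            "invest in renewables and efficiency"]
print(mean_pairwise_similarity(unaided), mean_pairwise_similarity(assisted))
```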

Ross: Yes, and there’s evidence suggesting models may converge over time on certain responses, which is concerning. One potential remedy could be prompting models to challenge our thinking or offer critiques to stimulate independent thought rather than always providing direct answers.

Jason: Absolutely. That’s one of the applications I’m most excited about. A recent study by Dave Rand and colleagues used a language model to challenge conspiracy theorists, getting them to update their beliefs on topics like flat-Earth theory. It’s incredibly useful to use language models as devil’s advocates.

In my experience, I often ask language models to critique my arguments or help me respond to reviewers. However, you sometimes need to prompt it specifically to provide honest feedback because, by default, it tends to agree with you.

Ross: Yes, sometimes you have to explicitly tell it, “Properly critique me; don’t hold back,” or whatever words encourage it to give real feedback, because they can lean toward being “yes people” if you don’t direct them otherwise.
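Concretely, that kind of instruction might be packaged as a system prompt. The wording below is purely illustrative, and query_chat is a hypothetical wrapper for whichever API you use:

```python
# Illustrative "devil's advocate" setup, not any vendor's recommended
# defaults: a system prompt that pushes the model to critique rather than
# agree, passed to a hypothetical query_chat helper for your chosen API.
devils_advocate_system_prompt = (
    "You are a rigorous devil's advocate. Do not agree by default. "
    "Identify the weakest assumptions in the user's argument, offer the "
    "strongest counterarguments, and only concede points that survive scrutiny."
)

def query_chat(system: str, user: str) -> str:
    raise NotImplementedError("replace with a call to your model provider")

# Example usage:
# feedback = query_chat(devils_advocate_system_prompt,
#                       "Here is my draft argument: ...")
```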

Jason: Exactly, and I think this ties into our previous discussion on reinforcement learning from human feedback. If people generally prefer responses that confirm their existing beliefs, the utility of language models as devil’s advocates could decrease over time. We may need to start differentiating language models by specific use cases, rather than expecting a single model to fulfill every role.

Ross: Yes, and you can set up system prompts or custom instructions that encourage models to be challenging, obstinate, or difficult if that’s the kind of interaction you need. Moving on, some of your other work relates to algorithmic amplification of intelligence in various forms. I’d love to hear more about that, especially since this is the Amplifying Cognition podcast.

Jason: Sure, so this work actually started before language models became widely discussed. I was thinking, along with my then PhD advisor, Ulrike Hahn, about the “wisdom of the crowd” effect and how to enhance it. One well-documented observation in the literature is that communication can improve crowd wisdom because it allows knowledge sharing. However, it can also be detrimental if it leads to homogenization or groupthink.

Research shows this can depend on network structure. In a highly centralized network where one person has a lot of influence, communication can reduce diversity. However, if communication is more decentralized and spreads peer-to-peer without a central influencer, it can spread knowledge effectively without compromising diversity.

We did an experiment on this, providing a proof of concept for how algorithms could dynamically alter network structures during communication to enhance crowd wisdom. While it’s early days, it shows promise.
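To make the idea concrete, here is a minimal sketch of what a rewiring step could look like. This is a toy version using networkx under my own assumptions, not the algorithm from the published experiment:

```python
import networkx as nx

# Toy rewiring step: repeatedly detach an edge from the most central node
# and reattach it between two low-centrality nodes, so that no single
# voice dominates the communication network.
def rewire_step(G: nx.Graph) -> None:
    centrality = nx.degree_centrality(G)
    hub = max(centrality, key=centrality.get)          # most influential node
    low = sorted(centrality, key=centrality.get)[:2]   # two least connected nodes
    neighbors = list(G.neighbors(hub))
    if neighbors and not G.has_edge(*low):
        G.remove_edge(hub, neighbors[0])  # weaken the hub's influence
        G.add_edge(*low)                  # strengthen the periphery

G = nx.barabasi_albert_graph(20, 2, seed=1)  # a centralised, hub-heavy network
for _ in range(5):
    rewire_step(G)
print(max(nx.degree_centrality(G).values()))  # top centrality should fall
```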

Ross: Interesting! And you used the term “rewiring algorithm,” which suggests dynamically altering these connections. This concept could be impactful in other areas, like decentralized autonomous organizations (DAOs). DAOs aim to manifest collective intelligence, but often rely on basic voting structures. Algorithmic amplification could help rebalance input from participants.

Jason: Absolutely. I’m not deeply familiar with blockchain literature, but when I present this work, people often draw parallels with DAOs and blockchain governance. I may need to explore that connection further.

Ross: Definitely! There’s research potential in rebalancing structures for a fairer redistribution of influence. Also, one of this year’s hottest topics is multi-agent systems, often involving both human and AI agents. What excites you about human-plus-AI multi-agent systems?

Jason: As I see it, there are two aspects to multi-agent systems. One is very speculative: thinking about language models as digital twins interacting on our behalf, which is futuristic and still far from today’s capabilities. The other, more immediate side is that we’re already in multi-agent systems.

Think of Wikipedia, social media, and other online environments. We interact daily with algorithms, bots, and other people. We’re already embedded in multi-agent systems without always realizing it. Conceptualizing this intersection is difficult, but it’s similar to how early AI discussions once seemed speculative and are now reality.

For me, a focus on civic applications is crucial. We need more civic technology platforms like Polis that encourage public engagement in discussions. Unfortunately, there aren’t many platforms widely recognized or competing in this space. My hope is that researchers in multi-agent systems will start building in that direction.

Ross: Do you think there’s potential to create a democracy that integrates these systems in a substantial way?

Jason: Yes, but it depends on the form it takes. I conceptualize it through a framework discussed by political scientist Hélène Landemore, who references Jürgen Habermas. He describes two tracks of the public sphere. One is a bureaucratic, formal track where elected officials debate in government. The other is an open, free-for-all public sphere, like discussions in coffee shops or online. The idea was that the best arguments from the free-for-all sphere would influence the formal sphere, but that bridge seems weakened today.

Civic technologies and algorithmic communication could create a “third track” to connect the open public sphere more effectively with bureaucratic decision-making.

Ross: Rounding things out, collective intelligence has to be the future of humanity. We face bigger and more complex challenges, and we need to be intelligent beyond our individual capacities to address these issues and create a better world. What do you see as the next phase or frontiers for building more effective collective intelligence?

Jason: The next frontier will be collective intelligence that isn’t just human. We’ve already seen that emerging over the past decade, and I think we’ve almost taken it for granted. There’s substantial research on the “wisdom of the crowd” and deliberative democracy, often focusing on groups of people debating in a room. But now, we have more access to information and the ability to communicate faster and more easily than ever.

The problem now is mitigating information overload. In a way, we’ve already built the perfect collective intelligence system—the internet, social media. Yet, despite having more information, we don’t seem to be a more informed society. Issues like misinformation, echo chambers, and “post-truth” have become part of our daily vocabulary.

I think the next phase will involve developing AI systems and algorithms to help us handle information overload in a socially beneficial way, rather than just catering to advertising or engagement metrics. That’s my hope.

Ross: Amen to that. Thanks so much for your time and your work, Jason. I look forward to following your research as you continue.

Jason: Thank you, Ross.

