

Humans + AI
Ross Dawson
Exploring and unlocking the potential of AI for individuals, organizations, and humanity

Feb 5, 2025
Rita McGrath on inflection points, AI-enhanced strategy, memories of the future, and the future of professional services (AC Ep76)
Rita McGrath, a top expert on strategy and innovation and Professor at Columbia Business School, shares her insights on the intersection of human creativity and AI. She discusses how AI can enhance strategic decision-making and navigate transient competitive advantages. McGrath highlights the significance of inflection points in business evolution and their effects on consumer behavior. Furthermore, she reimagines the future of work, advocating for a human-centric approach in an AI-driven landscape, emphasizing the importance of continuous learning and collaboration.

Jan 29, 2025
Christian Stadler on AI in strategy, open strategy, AI in the boardroom, and capabilities for strategy (AC Ep75)
Christian Stadler, a strategic management professor at Warwick Business School and author of 'Open Strategy,' dives into the transformative role of AI in decision-making. He emphasizes AI as a co-strategist that enhances boardroom discussions rather than replaces human judgment. The conversation covers the shift toward open strategy, highlighting how diverse perspectives drive innovation and improve execution. Stadler also discusses the need for political awareness in leadership and engaging employees to foster a culture of innovation.

Dec 18, 2024
Valentina Contini on AI in innovation, multi-potentiality, AI-augmented foresight, and personas from the future (AC Ep74)
Valentina Contini, an innovation strategist and technofuturist, dives into the fascinating intersection of AI and creativity. She shares insights on being a 'professional black sheep' and how generative AI can enhance human innovation. Valentina emphasizes the role of AI in freeing up cognitive resources, fostering critical thinking, and generating immersive future scenarios through AI personas. The conversation also touches on the importance of embracing technology and lifelong learning to harness AI's potential for a positive future.

Dec 11, 2024
Anthea Roberts on dragonfly thinking, integrating multiple perspectives, human-AI metacognition, and cognitive renaissance (AC Ep73)
Anthea Roberts, a leading authority in international law and founder of Dragonfly Thinking, dives deep into the art of 'dragonfly thinking'—a method for examining complex issues from multiple angles. She discusses the shift in human roles in AI collaboration, emphasizing metacognition in decision-making. Roberts also tackles the biases in AI and the importance of integrating diverse knowledge systems, advocating for a cognitive renaissance to navigate the challenges and opportunities of AI advancements and enhance our collective problem-solving capabilities.

Dec 4, 2024
Kevin Eikenberry on flexible leadership, both/and thinking, flexor spectrums, and skills for flexibility (AC Ep72)
“To be a flexible leader is to make sense of the world in a way that allows you to intentionally ask, ‘How do I need to lead in this moment to get the best results for my team and the outcomes we need?’”
– Kevin Eikenberry
About Kevin Eikenberry
Kevin Eikenberry is Chief Potential Officer of leadership and learning consulting company The Kevin Eikenberry Group. He is the bestselling author or co-author of 7 books, including the forthcoming Flexible Leadership. He has been named to many lists of top leaders, including twice to Inc. magazine’s Top 100 Leadership and Management Experts in the World. His podcast, The Remarkable Leadership Podcast, has listeners in over 90 countries.
Website:
The Kevin Eikenberry Group
LinkedIn Profiles
Kevin Eikenberry
The Kevin Eikenberry Group
Book
Flexible Leadership: Navigate Uncertainty and Lead with Confidence
What you will learn
Understanding the essence of flexible leadership
Balancing consistency and adaptability in decision-making
Embracing “both/and thinking” to navigate complexity
Exploring the power of context in leadership strategies
Mastering the art of asking vs. telling
Building habits of reflection and intentionality
Developing mental fitness for effective leadership
Episode Resources
People
Carl Jung
F. Scott Fitzgerald
David Snowden
Book
Flexible Leadership: Navigate Uncertainty and Lead with Confidence
Frameworks/Concepts
Myers-Briggs
Cynefin framework
Confidence-competence loop
Organizations/Companies
The Kevin Eikenberry Group
Technical Terms
Leadership style
“Both/and thinking”
Compliance vs. commitment
Ask vs. tell
Command and control
Sense-making
Plausible cause analysis
Transcript
Ross Dawson: Kevin, it is wonderful to have you on the show.
Kevin Eikenberry: Ross, it’s a pleasure to be with you. I’ve had conversations about this book for podcasts. This is the first one that’s going to go live to the world, so I’m excited about that.
Ross: Fantastic. So the book is Flexible Leadership: Navigate Uncertainty and Lead with Confidence. What does flexible leadership mean?
Kevin: Well, that’s a pretty good starting question. Here’s the big idea, Ross: so many people have come up in leadership and taken assessments of one sort or another. They’ve done Strengths Finder or a leadership style assessment, and it’s determined that they are a certain style or type.
That’s useful to a point, but it becomes problematic beyond that. Humans are pattern recognizers, so once we label ourselves as a certain type of leader, we tend to stick to that label. We start thinking, “This is how I’m supposed to lead.”
To be a flexible leader means we need to start by understanding the context of the situation. Context determines how we ought to lead in a given moment rather than relying solely on what comes naturally to us.
Being a flexible leader involves making sense of the world intentionally and asking, “How do I need to lead in this moment to get the best results for my team and the outcomes we’re working towards?”
Ross: I was once told that Carl Jung, who wrote the typology of personalities that forms the foundation of Myers-Briggs, said something similar. I’ve never found the original source, but apparently, he believed the goal was not to fix ourselves at one point on a spectrum but to be as flexible as possible across it.
So, we’re all extroverts and introverts, sensors and intuitors, thinkers and feelers.
Kevin: Exactly. None of us are entirely one or the other on these spectrums. They’re more like continuums.
Take introvert vs. extrovert. Some people are at one extreme or the other, but no one is a zero on either side. The problem arises when we label ourselves and think, “This is who I am.” That may reflect your natural tendency, but it doesn’t mean that’s the only way you can or should lead.
Ross: One of the themes in your book is “both/and thinking,” which echoes what I wrote in Thriving on Overload. You can be both extroverted and introverted. I see that in myself.
Kevin: Me too. Our world is so focused on “either/or” thinking, but to navigate complexity and uncertainty as leaders, we must embrace “both/and” thinking.
F. Scott Fitzgerald once said something along the lines of, “The test of a first-rate intelligence is the ability to hold two opposing ideas in your mind at the same time and still function.” I’d say the same applies to leadership.
To be highly effective, leaders must consider seemingly opposite approaches and determine what works best given the context.
Ross: That makes sense. Most people would agree that flexible leadership is a sound idea. But how do we actually get there? How does someone become a more flexible leader?
Kevin: The first step is recognizing the value of flexibility. Many leaders get stuck on the idea of consistency. They think, “To be effective, I need to be consistent so people know what to expect from me.”
But flexibility isn’t the opposite of consistency. We can be consistent in our foundational principles—our values, mission, and core beliefs—while being adaptable in how we approach different situations.
Becoming a flexible leader requires three things:
Intention – Recognizing the value of flexibility.
Sense-making – Understanding the context and what it requires of us.
Flexors – Knowing the options available to us and deciding how to adapt in a given situation.
Ross: This aligns with my work on real-time strategy. A fixed strategy might have worked in the past, but in today’s world, we need to adapt. At the same time, being completely flexible can lead to chaos.
Kevin: Exactly. Leaders need to balance consistency and flexibility, knowing when to lean toward one or the other.
Leadership is about achieving valuable outcomes with and through others. This creates an inherent tension—outcomes vs. people. The answer isn’t one or the other; it’s both.
For every “flexor” in the book, the goal isn’t to be at one extreme of the spectrum but to find the balance that best serves the team and the context.
Ross: You’ve mentioned the word “flexor” a few times now. I think this is one of the real strengths of the book. It’s a really useful concept. So, what is a flexor?
Kevin: A flexor is a continuum between two opposing approaches on something that matters. Let’s use an example.
On one end, we have achieving valuable outcomes. On the other end, we have taking care of people. Some leaders lean toward focusing on outcomes—getting the work done no matter what. Others lean toward prioritizing their people—ensuring their well-being and development so outcomes follow.
The reality is that leadership requires balancing both. Sometimes the context calls for one approach more than the other. For instance, in moments of chaos, compliance might be necessary to maintain safety or order. In other situations, you’ll need to inspire commitment for long-term success.
A leader must constantly assess the context and decide where to lean on the spectrum.
Ross: That’s a great example. Another one might be between “ask” and “tell.”
Kevin: Yes, exactly! Leaders often believe they need to have all the answers, so they default to telling—giving directives and expecting people to follow.
But sometimes, asking is far more effective. Your team members often have perspectives and information you don’t. By asking rather than telling, you gain insights, foster collaboration, and build trust.
Of course, it’s not about always asking or always telling. It’s about understanding when to lean toward one and when the other might be more effective.
Ross: That makes sense. In today’s world, consultative leadership is highly valued, especially in certain industries. Many great leaders lean heavily on asking rather than telling.
Kevin: Absolutely, but even consultative leaders need to recognize when the situation calls for decisiveness. If there’s urgency or a crisis, sometimes the team just needs clear instructions: “Here’s what we need to do.”
Being a flexible leader means being intentional—understanding the context and adjusting your approach, even if it doesn’t align with your natural tendencies.
Ross: That brings us to the concept of sense-making. Leaders need to make sense of their context to decide where they stand on a particular flexor. How can leaders improve their sense-making capabilities?
Kevin: The first step is recognizing that context matters and that it changes.
Many leaders rely on best practices, but those only work in clear, predictable situations. Our world is increasingly complex and uncertain. In such situations, we need to adopt “good enough” practices or experiment to find what works.
To improve sense-making, leaders must build a mental map of their world. Is the situation clear, complicated, complex, or chaotic? This aligns with David Snowden’s Cynefin framework, which I reference in the book.
By identifying the nature of the situation, leaders can adjust their approach accordingly.
Ross: The Cynefin framework is a fantastic tool, often used in group settings. You’re applying it here to individual leadership.
Kevin: Exactly. It’s not just about guiding group processes. It’s about helping leaders see the situation clearly so they can flex their approach.
Ross: That’s insightful. Leaders don’t operate in isolation—they’re part of an organizational context. How does a leader navigate their role while considering the expectations of their peers, colleagues, and supervisors?
Kevin: Relationships play a critical role. The better your relationships with peers and supervisors, the more you understand their styles and perspectives. This helps you navigate the context effectively.
Sometimes, though, you may need to challenge others’ perspectives—respectfully, of course. If someone is treating a situation as chaotic when it’s actually complex, your role as a leader may be to ask questions or provide a different perspective.
Being intentional is key. Leadership often involves breaking habitual responses, pausing to assess the context, and deciding if a different approach is needed.
Ross: That’s a journey. Leadership habits are deeply ingrained. How do leaders move from their current state to becoming more flexible and adaptive?
Kevin: That’s the focus of the third part of the book—how to change your habits.
First, leaders need to recognize that their natural tendencies might not always serve them best. Without this realization, no progress is possible.
Next, they must build new habits, starting with regularly asking questions like:
What’s the context here?
What does this situation require of me?
How did that approach work?
Reflection is crucial. Leaders should consistently ask, “What worked, what didn’t, and what can I learn from this?”
Another valuable practice is what I call “plausible cause analysis.” Instead of jumping to conclusions about why something happened, consider multiple plausible explanations. For example, if a team doesn’t respond to a question, don’t assume they’re disengaged. There could be several reasons—perhaps they need more time to think or the question wasn’t clear.
By exploring plausible causes, leaders can choose responses that address most potential issues.
Ross: That’s a great framework for reflection and improvement. It also ties into mental fitness, which is so important for leaders.
Kevin: Exactly. During the pandemic, we worked extensively with clients on mental fitness—not just mental health. Mental fitness involves proactively building resilience, much like physical fitness.
Reflection, gratitude, and self-awareness are all part of maintaining mental fitness. Leaders who invest in their mental fitness are better equipped to handle challenges and make sound decisions.
Ross: Let’s circle back to the book. What would you say is its ultimate goal?
Kevin: The goal of Flexible Leadership is to help leaders navigate uncertainty and complexity with confidence.
For 70 years, leadership models have tried to simplify the real world. While those models are helpful, they’re inherently oversimplified. The ideas in the book aim to help leaders embrace the complexity of the real world, equipping them with tools to become more effective and, ultimately, wiser.
Ross: Fantastic. Where can people find your book?
Kevin: The book launches in March, but you can pre-order it now at kevineikenberry.com/flexible. That link will take you directly to Amazon. You can also learn more about our work at kevineikenberry.com.
Ross, it’s been an absolute pleasure. Thanks for having me.
Ross: Thank you so much, Kevin!

Nov 27, 2024
Alexandra Diening on Human-AI Symbiosis, cyberpsychology, human-centricity, and organizational leadership in AI (AC Ep71)
In this discussion, Alexandra Diening, Co-founder and Executive Chair of the Human-AI Symbiosis Alliance, delves into the vital concept of human-AI symbiosis. She emphasizes the importance of designing ethical frameworks to enhance human potential without causing harm. The conversation touches on the risks of parasitic AI, the balance between excitement and caution in AI deployment, and practical strategies for organizations to navigate this integration responsibly. With her background in cyberpsychology, Diening highlights the transformative impact of AI on human behavior.

Nov 20, 2024 • 41min
Kevin Clark & Kyle Shannon on collective intelligence, digital twin elicitation, data collaboratives, and the evolution of content (AC Ep70)
Join Kevin Clark, President of Content Evolution and author of Brandscendence, alongside Kyle Shannon, Founder of Storyvine and AI Salon. They dive into the transformative impact of digital twins and collective intelligence on creativity. Discover how generative AI can help overcome creative blocks and facilitate deep dialogue. The duo emphasizes the importance of asking meaningful questions to enhance interactions with AI. Together, they explore future content evolution and foster collaboration in an increasingly AI-driven world.

Nov 6, 2024
Samar Younes on pluridisciplinary art, AI as artisanal intelligence, future ancestors, and nomadic culture (AC Ep69)
“To me, envisioning a future should involve elements anchored in nature, modern materials, and sustainable practices, challenging Western-centric constructs of ‘futuristic.’ Artisanal intelligence is about understanding material culture, combining traditional craft with modern techniques, and redefining what feels ‘modern.’”
– Samar Younes
About Samar Younes
Samar Younes is a pluridisciplinary hybrid artist and futurist working across art, design, fashion, technology, experiential futures, culture, sustainability and education. She is founder of SAMARITUAL which produces the “Future Ancestors” series, proposing alternative visions for our planet’s next custodians. She has previously worked in senior roles for brands like Coach and Anthropologie and has won numerous awards for her work.
LinkedIn: Samar Younes
Website: www.samaritual.com
University Profile: Samar Younes
What you will learn
Exploring the intersection of art, AI, and cultural identity
Reimagining future aesthetics through artisanal intelligence
Blending traditional craftsmanship with digital innovation
Challenging Western-centric ideas of “modern” and “futuristic”
Using AI to amplify narratives from the Global South
Building a sustainable, nature-anchored digital future
Embracing imperfection and creativity in the age of AI
Episode Resources
Silk Road
Web3
Metaverse
Orientalist
AI (Artificial Intelligence)
Artisanal Intelligence
Dubai Future Forum
Neuroaesthetics
ChatGPT
Runway ML
Midjourney
Archives of the Future
Luma
Large Language Model (LLM)
GAN Model
Transcript
Ross Dawson: Samar, it’s awesome to have you on the show.
Samar Younes: Thank you so much. Thanks for having me.
Ross: So you describe yourself as a pluridisciplinary hybrid artist, futurist, and creative catalyst. That sounds wonderful. What does that mean? What do you do?
Samar: What does that mean? It means that I am many layers of the life that I’ve had. I started my training as an architect and worked as a scenographer and set designer. I’ve always been interested in bringing public art to the masses and fostering social discourse around public art and art in general.
I’ve also always been interested in communicating across cultures. Growing up as a child of war in Beirut, among various factions—religious and cultural—it was a diverse city, but it was also a place where knowledge and deep, meaningful discussions were vital to society. Having a mother who was an artist and a father who was a neurologist, I became interested in how the brain and art converge, using art and aesthetics to communicate culture and social change.
In my career, I began in brand retail because, at the time, public art narratives and opportunities to create what I wanted were limited. So I used brand experiences—store design, window displays, art installations, and sensory storytelling—as channels to engage people.
As the world shifted more towards digital, I led brands visually, aiming to bridge digital and physical sensory frameworks. But as Web3, the metaverse, and other digital realms emerged, I found that while exciting, they lacked the artisanal textures and layers that were important to me. Working across mediums—architecture, fashion, design, food—I saw artificial intelligence as akin to working with one’s hands, very similar to what artisans do. That’s how I got into AI, as a challenge to amplify narratives from the Global South, reclaiming aesthetics from my roots.
Ross: Fascinating. I’d love to dig into something specific you mentioned: AI as artisanal. What does that mean in practice if you’re using AI as a tool for creativity?
Samar: Often, when people use AI, specifically generative AI with prompts or images, they don’t realize the role of craftsmanship or the knowledge of craft required to create something that resonates. Much digital imagery has a clinical, dystopian aesthetic, often cold and disconnected from nature or biomorphic elements, which are part of the world crafted by hand.
To me, envisioning a future should involve elements anchored in nature, modern materials, and sustainable practices, challenging Western-centric constructs of “futuristic.” Ancient civilizations, like Egypt’s with the pyramids, exemplify timeless modernity. Similarly, the Global South has always been avant-garde in subversion and disruption, but this gets re-appropriated in Western narratives. Artisanal intelligence is about understanding material culture, combining traditional craft with modern techniques, and redefining what feels “modern.”
Ross: Right. AI offers a broad palette, not just in styles from history but also potentially in areas like material science and philosophy. It supports a pluridisciplinary approach, assisted by the diversity of AI training data.
Samar: Exactly. When I think of AI, I see data sets as materials, not just images. If data is a medium, I’m not interested in recreating a Picasso. I see each data set as a material, like paint on a palette—acrylic, oil, charcoal—with the AI system as my brush. Creating something unique requires understanding composition, culture, and global practices, then weaving them together into a new, personal perspective.
Ross: One key theme in your work is merging multiple cultural and generational frames using technology. How does technology enable this?
Samar: Many AI tools are biased and problematic. When I tried an exercise creating a “Hello Kitty” version in different cultural stereotypes, I found disturbing, inaccurate, or even racist results, especially for Global South or Middle Eastern cultures. To me, cultures are fluid and connected, shaped by historical nomadism rather than nationalistic borders.
My concept of the “future ancestor” explores sustainability and intergenerational, transcultural constructs. Cultures have always been fluid and adaptable, but modern consumerism and digital borders often force rigid identity constructs. In prompting AI, I describe culture fluidly, resisting prescribed stereotypes to create atypical, nuanced representations.
Ross: Agreed. We’re digital nomads today, traveling and exploring in new ways. But AI training data is often Western-biased, so artists can’t rely on defaults without reinforcing these biases.
Samar: The artist’s role is to subvert and hack the system. If you don’t have resources to train your own model, I believe there’s power in collectively hacking existing models by feeding them new, corrective data. The more people create diverse data, the more it influences these systems. Understanding how to manipulate AI systems to your needs helps shape their evolution.
Ross: Technology is advancing so quickly, transforming art, expression, and identity. What do you see as the implications of this acceleration?
Samar: I see two scenarios: one dystopian, one more constructive. Ideally, technology fosters nurturing, empathetic futures, which requires slower, thoughtful development. The current speed, however, is driven by profit and the extractive aims of industrialization—manipulating human needs for profit or even exploiting people without compensation. This dystopia is evident in algorithmic manipulation and censorship.
I wish the acceleration focused on health and well-being rather than extractive technologies. We should prioritize technologies that support work-life balance, health, and sustainable futures over those driven by profit.
Ross: Shifting gears, can you share more specifics on tools you use or projects you’re working on?
Samar: Sure. I use several tools like Claude, ChatGPT, Runway ML for animations, and Midjourney for visuals. I have an archive of 50,000+ images I’ve created, nurturing them over time, blending them across tools. Building a unique perspective is key—everyone has a distinct point of view rooted in their cultural and personal experiences. Recent projects include my “Future Ancestor” project and a piece called “Future Custodian,” which I co-wrote with futurist Geraldine Warri. It’s a speculative narrative about a tribe called the “KALEI Tribe,” where fashion serves as a tool of healing and self-expression.
Ross: What’s the process behind creating these?
Samar: The “KALEI Tribe” is a speculative piece set in 2034, where nomadic survival uses fashion as self-expression and well-being. Fashion is reframed as healing and sustainable, rather than for fast consumption. We explore a future where we co-exist with sentient beings beyond humans. This concept emerged from my archive and AI-created imagery, blending perspectives with Geraldine Warri for Spur Magazine in Japan.
I also recently did a food experience project that didn’t directly use AI but engaged with artisanal intelligence. It imagined ancestral foods, blending speculative thinking with our senses, rewilding how we think of food.
Ross: That’s brilliant—rewilding ourselves and pushing against domestication.
Samar: Exactly. The industrial era pushed repetition and perfection, taming our humanity’s wild, playful side. I hope to use AI to rewild our imaginations, embracing imperfections, chaos, and organic unpredictability. The system’s flaws inspire me, adding a serendipitous quality, much like working with hands-on materials like clay or fabric, where outcomes aren’t perfectly predictable.
Ross: Wonderful insights. Where can people find out more about your work?
Samar: They can visit my website at samaritual.com, where I share workshops and sessions. I’m also active on Instagram (@samaritual) and LinkedIn.
Ross: All links are in the show notes. Thanks for such inspiring, insightful work.
Samar: Thank you so much for having me. Hopefully, we’ll meet soon.

Oct 30, 2024
Jason Burton on LLMs and collective intelligence, algorithmic amplification, AI in deliberative processes, and decentralized networks (AC Ep68)
“When you get a response from a language model, it’s a bit like a response from a crowd of people, shaped by the preferences of countless individuals.”
– Jason Burton
About Jason Burton
Jason Burton is an assistant professor at Copenhagen Business School and an Alexander von Humboldt Research fellow at the Max Planck Institute for Human Development. His research applies computational methods to studying human behavior in a digital society, including reasoning in online information environments and collective intelligence.
LinkedIn: Jason William Burton
Google Scholar page: Jason Burton
University Profile (Copenhagen Business School): Jason Burton
What you will learn
Exploring AI’s role in collective intelligence
How large language models simulate crowd wisdom
Benefits and risks of AI-driven decision-making
Using language models to streamline collaboration
Addressing the homogenization of thought in AI
Civic tech and AI’s potential in public discourse
Future visions for AI in enhancing group intelligence
Episode Resources
Nature Human Behavior
How Large Language Models Can Reshape Collective Intelligence
ChatGPT
Max Planck Institute for Human Development
Reinforcement learning from human feedback
DeepMind
Digital twin
Wikipedia
Algorithmic Amplification and Society
Wisdom of the crowd
Recommender system
Decentralized autonomous organizations
Civic technology
Collective intelligence
Deliberative democracy
Echo chambers
Post-truth
People
Jürgen Habermas
Dave Rand
Ulrike Hahn
Hélène Landemore
Transcript
Ross: Jason, it is wonderful to have you on the show.
Jason Burton: Hi, Ross. Thanks for having me.
Ross: So you and 27 co-authors recently published in Nature Human Behavior a wonderful article called How Large Language Models Can Reshape Collective Intelligence. I’d love to hear the backstory of how this paper came into being with 28 co-authors.
Jason: It started in May 2023. There was a research retreat at the Max Planck Institute for Human Development in Berlin, about six months or so after ChatGPT had really come into the world, at least for the average person. We convened a sort of working group around this idea of the intersection between language models and collective intelligence, something interesting that we thought was worth discussing.
At that time, there were just about five or six of us thinking about the different ways to view language models intersecting with collective intelligence: one where language models are a manifestation of collective intelligence, another where they can be a tool to help collective intelligence, and another where they could potentially threaten collective intelligence in some ways. On the back of that working group, we thought, well, there are lots of smart people out there working on similar things. Let’s try to get in touch with them and bring it all together into one paper. That’s how we arrived at the paper we have today.
Ross: So, a paper being the manifestation of collective intelligence itself?
Jason: Yes, absolutely.
Ross: You mentioned an interesting part of the paper—that LLMs themselves are an expression of collective intelligence, which I think not everyone realizes. How does that work? In what way are LLMs a type of collective intelligence?
Jason: Sure, yeah. The most obvious way to think about it is these are machine learning systems trained on massive amounts of text. Where are the companies developing language models getting this text? They’re looking to the internet, scraping the open web. And what’s on the open web? Natural language that encapsulates the collective knowledge of countless individuals.
By training a machine learning system to predict text based on this collective knowledge they’ve scraped from the internet, querying a language model becomes a kind of distilled form of crowdsourcing. When you get a response from a language model, you’re not necessarily getting a direct answer from a relational database. Instead, you’re getting a response that resembles the answer many people have given to similar queries.
On top of that, once you have the pre-trained language model, a common next step is training through a process called reinforcement learning from human feedback. This involves presenting different responses and asking users, “Did you like this response or that one better?” Over time, this system learns the preferences of many individuals. So, when you get a response from a language model, it’s shaped by the preferences of countless individuals, almost like a response from a crowd of people.
Ross: This speaks to the mechanisms of collective intelligence that you write about in the paper, like the mechanisms of aggregation. We have things like markets, voting, and other fairly crude mechanisms for aggregating human intelligence, insight, or perspective. This seems like a more complex and higher-order aggregation mechanism.
Jason: Yeah. I think at its core, language models are performing a form of compression, taking vast amounts of text and forming a statistical representation that can generate human-like text. So, in a way, a language model is just a new aggregation mechanism.
In an analog sense, taking a vote or deliberating as a group compresses many views into a single decision. In the same way, you could use a language model to summarize text and compress knowledge down into something more digestible.
Ross: One core part of your article discusses how LLMs help collective intelligence. We’ve had several mechanisms before, and LLMs can assist in existing aggregation structures. What are the primary ways that LLMs assist collective intelligence?
Jason: A lot of it boils down to the realization of how easy it is to query and generate text with a language model. It’s fast and frictionless. What can we do with that? One straightforward use is that, if you think of a language model as a kind of crowd in itself, you can use it to replace traditional crowdsourcing.
If you’re crowdsourcing ideas for a new product or marketing campaign, you could instead query a language model and get results almost instantaneously. Crowdsourcing taps into crowd diversity, producing high-quality, diverse responses. However, it requires setting up a crowd and a mechanism for querying, which can be time and resource-intensive. Now, we have these models at our fingertips, making it much quicker.
Another potential use that excites me is using language models to mediate deliberative processes. Deliberation is beneficial because individuals exchange information, allowing them to become more knowledgeable about a task. I have some knowledge, and you have some knowledge. By communicating, we learn from each other.
Ross: Yeah, and there have been some other researchers looking at nudges for encouraging participation or useful contributions. I think another point in your paper is around aggregating group discussions so that other groups or individuals can effectively take those in, allowing for scaled participation and discussion.
Jason: Yeah, absolutely. There’s a well-documented trade-off. Ideally, in a democratic sense, you want to involve everybody in every discussion, as everyone has knowledge to share. By bringing more people into the conversation, you establish a shared sense of responsibility for the outcome. But as you add more people to the room, it becomes louder and noisier, making progress challenging.
If we can use technological tools, whether through traditional algorithms or language models, we could manage this trade-off. Our goal is to bring more people into the room while still producing high-quality outputs. That’s the ideal outcome.
Ross: So, one of the outcomes of bringing people together is decisions. There are other ways in which collective intelligence manifests, though. Are there specific ways, outside of what we’ve discussed, where LLMs can facilitate better decision-making?
Jason: Yes, much of my research focuses on collective estimations and predictions, where each individual submits a number, which can then be averaged across the group. This works in contexts with a concrete decision point or where there’s an objective answer, though we often debate subjective issues with no clear-cut answers.
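The averaging Jason describes is easy to demonstrate with a quick simulation: many noisy, independent estimates of the same quantity, aggregated by taking the mean. All numbers below are invented for illustration.

```python
import random
import statistics

# Simulated "wisdom of the crowd": 500 independent, noisy estimates
# of a true quantity, compared against the aggregate.
random.seed(42)
true_value = 100.0
estimates = [random.gauss(true_value, 20) for _ in range(500)]

crowd_mean = statistics.mean(estimates)
crowd_median = statistics.median(estimates)

# How wrong is a typical individual vs. the crowd's average?
avg_individual_error = statistics.mean(abs(e - true_value) for e in estimates)
aggregate_error = abs(crowd_mean - true_value)

print(f"mean individual error: {avg_individual_error:.2f}")
print(f"crowd mean error:      {aggregate_error:.2f}")
```

Because independent errors partly cancel when averaged, the aggregate lands much closer to the truth than the typical individual, which is exactly why this kind of collective estimation works when there is an objective answer to converge on.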
In those cases, what we want is consensus rather than just an average estimate. For instance, we need a document that people with different perspectives can agree on for better coordination. One of my co-authors, Michael Baker, has shown that language models fine-tuned for consensus can be quite effective. These models don’t just repeat existing information but generate statements that identify points of agreement and disagreement—documents that diverse groups can look at and discuss further. That’s a direction I’d love to see more of.
Ross: That may be a little off track, but it brings up the idea of hierarchy. Implicitly, in collective intelligence, you assume there’s equal participation. However, in real-world decision-making, there’s typically a hierarchy—a board, an executive team, managers. You don’t want just one person making the decision, but you still want effective input from various groups. Can these collective intelligence structures apply to create more participatory decision-making within hierarchical structures?
Jason: Yeah, I think that’s one of the unique aspects of what’s called the civic technology space. There are platforms like Polis, for example, which level the playing field. In an analog room, certain power structures can discourage some people from speaking up while encouraging others to dominate, which might not be ideal because it undermines the benefits of diversity in a group.
Using language models to build more civic technology platforms can make it more attractive for everyday people to engage in deliberation. It could help reduce hierarchies where they may not be necessary.
Ross: Your paper also discusses some downsides of LLMs and collective intelligence. One concern people raise is that LLMs may homogenize perspectives, mashing everything together so that outlier views get lost. There’s also the risk that interacting too much with LLMs could homogenize individuals’ thinking. What are the potential downsides, and how might we mitigate them?
Jason: There’s definitely something to unpack there. One issue is that if everyone starts turning to the same language model, it’s like consulting the same person for every question. If we all rely on one source for answers, we risk homogenizing our beliefs.
Mitigating this effect is an open question. People may prompt models differently, leading to varied advice, but experiments have shown that even with different prompts, groups using language models often produce more homogeneous outputs than those who don’t. This issue is concerning, especially given that only a few tech companies currently dominate the model landscape. The limited diversity of big players and the bottlenecks around hardware and compute resources make this even more worrisome.
Ross: Yes, and there’s evidence suggesting models may converge over time on certain responses, which is concerning. One potential remedy could be prompting models to challenge our thinking or offer critiques to stimulate independent thought rather than always providing direct answers.
Jason: Absolutely. That’s one of the applications I’m most excited about. A recent study by Dave Rand and colleagues used a language model to challenge conspiracy theorists, getting them to update their beliefs on topics like flat-Earth theory. It’s incredibly useful to use language models as devil’s advocates.
In my experience, I often ask language models to critique my arguments or help me respond to reviewers. However, you sometimes need to prompt it specifically to provide honest feedback because, by default, it tends to agree with you.
Ross: Yes, sometimes you have to explicitly tell it, “Properly critique me; don’t hold back,” or whatever words encourage it to give real feedback, because they can lean toward being “yes people” if you don’t direct them otherwise.
Jason: Exactly, and I think this ties into our previous discussion on reinforcement learning from human feedback. If people generally prefer responses that confirm their existing beliefs, the utility of language models as devil’s advocates could decrease over time. We may need to start differentiating language models by specific use cases, rather than expecting a single model to fulfill every role.
Ross: Yes, and you can set up system prompts or custom instructions that encourage models to be challenging, obstinate, or difficult if that’s the kind of interaction you need. Moving on, some of your other work relates to algorithmic amplification of intelligence in various forms. I’d love to hear more about that, especially since this is the Amplifying Cognition podcast.
Jason: Sure, so this work actually started before language models became widely discussed. I was thinking, along with my then PhD advisor, Ulrike Hahn, about the “wisdom of the crowd” effect and how to enhance it. One well-documented observation in the literature is that communication can improve crowd wisdom because it allows knowledge sharing. However, it can also be detrimental if it leads to homogenization or groupthink.
Research shows this can depend on network structure. In a highly centralized network where one person has a lot of influence, communication can reduce diversity. However, if communication is more decentralized and spreads peer-to-peer without a central influencer, it can spread knowledge effectively without compromising diversity.
We did an experiment on this, providing a proof of concept for how algorithms could dynamically alter network structures during communication to enhance crowd wisdom. While it’s early days, it shows promise.
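The effect of network structure described above can be illustrated with a toy DeGroot-style simulation. This is an invented sketch, not the rewiring algorithm from the actual experiment: it simply contrasts a centralized "star" network, where one influential (and here, deliberately wrong) agent dominates, with a decentralized ring where influence spreads peer-to-peer.

```python
import random
import statistics

# Agents repeatedly move their estimate partway toward their
# neighbors' mean. All parameters and numbers are made up.
random.seed(0)
TRUTH = 50.0
N = 40
estimates = [random.gauss(TRUTH, 10.0) for _ in range(N)]
estimates[0] = TRUTH + 30.0  # a confidently wrong, highly connected influencer

def simulate(neighbors, start, rounds=30, weight=0.5):
    """Each round, every agent moves `weight` of the way toward its neighbors' mean."""
    est = list(start)
    for _ in range(rounds):
        est = [
            (1 - weight) * est[i]
            + weight * statistics.mean(est[j] for j in neighbors[i])
            for i in range(N)
        ]
    return est

# Centralized "star": everyone listens only to agent 0.
star = {i: [0] for i in range(1, N)}
star[0] = list(range(1, N))

# Decentralized ring: each agent listens to its two immediate neighbors.
ring = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}

star_error = abs(statistics.mean(simulate(star, estimates)) - TRUTH)
ring_error = abs(statistics.mean(simulate(ring, estimates)) - TRUTH)

print(f"crowd error, star network: {star_error:.2f}")
print(f"crowd error, ring network: {ring_error:.2f}")
```

In the star, the whole crowd is pulled toward the hub's error; in the ring, peer-to-peer averaging preserves the crowd's diversity, so the collective estimate stays close to where independent averaging would put it. A rewiring algorithm in this spirit would dynamically adjust who talks to whom to keep influence from concentrating.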
Ross: Interesting! And you used the term “rewiring algorithm,” which suggests dynamically altering these connections. This concept could be impactful in other areas, like decentralized autonomous organizations (DAOs). DAOs aim to manifest collective intelligence, but often rely on basic voting structures. Algorithmic amplification could help rebalance input from participants.
Jason: Absolutely. I’m not deeply familiar with blockchain literature, but when I present this work, people often draw parallels with DAOs and blockchain governance. I may need to explore that connection further.
Ross: Definitely! There’s research potential in rebalancing structures for a fairer redistribution of influence. Also, one of this year’s hottest topics is multi-agent systems, often involving both human and AI agents. What excites you about human-plus-AI multi-agent systems?
Jason: There are two aspects to multi-agent systems as I see it. One is very speculative—thinking about language models as digital twins interacting on our behalf, which is futuristic and still far from today’s capabilities. The other, more immediate side, is that we’re already in multi-agent systems.
Think of Wikipedia, social media, and other online environments. We interact daily with algorithms, bots, and other people. We’re already embedded in multi-agent systems without always realizing it. Conceptualizing this intersection is difficult, but it’s much like how early AI discussions once seemed speculative and are now reality.
For me, a focus on civic applications is crucial. We need more civic technology platforms like Polis that encourage public engagement in discussions. Unfortunately, there aren’t many platforms widely recognized or competing in this space. My hope is that researchers in multi-agent systems will start building in that direction.
Ross: Do you think there’s potential to create a democracy that integrates these systems in a substantial way?
Jason: Yes, but it depends on the form it takes. I conceptualize it through a framework discussed by political scientist Hélène Landemore, who references Jürgen Habermas. Habermas describes two tracks of the public sphere. One is a bureaucratic, formal track where elected officials debate in government. The other is an open, free-for-all public sphere, like discussions in coffee shops or online. The idea was that the best arguments from the free-for-all sphere would influence the formal sphere, but that bridge seems weakened today.
Civic technologies and algorithmic communication could create a “third track” to connect the open public sphere more effectively with bureaucratic decision-making.
Ross: Rounding things out, collective intelligence has to be the future of humanity. We face bigger and more complex challenges, and we need to be intelligent beyond our individual capacities to address these issues and create a better world. What do you see as the next phase or frontiers for building more effective collective intelligence?
Jason: The next frontier will be not just human collective intelligence. We’ve already seen that over the past decade, and I think we’ve almost taken it for granted. There’s substantial research on the “wisdom of the crowd” and deliberative democracy, often focusing on groups of people debating in a room. But now, we have more access to information and the ability to communicate faster and more easily than ever.
The problem now is mitigating information overload. In a way, we’ve already built the perfect collective intelligence system—the internet, social media. Yet, despite having more information, we don’t seem to be a more informed society. Issues like misinformation, echo chambers, and “post-truth” have become part of our daily vocabulary.
I think the next phase will involve developing AI systems and algorithms to help us handle information overload in a socially beneficial way, rather than just catering to advertising or engagement metrics. That’s my hope.
Ross: Amen to that. Thanks so much for your time and your work, Jason. I look forward to following your research as you continue.
Jason: Thank you, Ross.
The post Jason Burton on LLMs and collective intelligence, algorithmic amplification, AI in deliberative processes, and decentralized networks (AC Ep68) appeared first on Humans + AI.

Oct 23, 2024 • 33min
Kai Riemer on AI as non-judgmental coach, AI fluency, GenAI as style engines, and organizational redesign (AC Ep67)
Kai Riemer is a Professor at the University of Sydney Business School, specializing in AI’s impact on organizations. He discusses how AI serves as a non-judgmental coach, boosting decision-making and personal productivity for leaders. Riemer explores generative AI as a catalyst for creativity and its role in enhancing group dynamics through structured facilitation. He emphasizes the necessity for upskilling in AI fluency and rethinking organizational frameworks to harness AI's potential effectively, amplifying both challenges and opportunities.


