
Amplifying Cognition

Latest episodes

Dec 18, 2024 • 35min

Valentina Contini on AI in innovation, multi-potentiality, AI-augmented foresight, and personas from the future (AC Ep74)

Valentina Contini, an innovation strategist and technofuturist, dives into the fascinating intersection of AI and creativity. She shares insights on being a 'professional black sheep' and how generative AI can enhance human innovation. Valentina emphasizes the role of AI in freeing up cognitive resources, fostering critical thinking, and generating immersive future scenarios through AI personas. The conversation also touches on the importance of embracing technology and lifelong learning to harness AI's potential for a positive future.
Dec 11, 2024 • 34min

Anthea Roberts on dragonfly thinking, integrating multiple perspectives, human-AI metacognition, and cognitive renaissance (AC Ep73)

Anthea Roberts, a leading authority in international law and founder of Dragonfly Thinking, dives deep into the art of 'dragonfly thinking'—a method for examining complex issues from multiple angles. She discusses the shift in human roles in AI collaboration, emphasizing metacognition in decision-making. Roberts also tackles the biases in AI and the importance of integrating diverse knowledge systems, advocating for a cognitive renaissance to navigate the challenges and opportunities of AI advancements and enhance our collective problem-solving capabilities.
Dec 4, 2024 • 35min

Kevin Eikenberry on flexible leadership, both/and thinking, flexor spectrums, and skills for flexibility (AC Ep72)

“To be a flexible leader is to make sense of the world in a way that allows you to intentionally ask, ‘How do I need to lead in this moment to get the best results for my team and the outcomes we need?’” – Kevin Eikenberry

About Kevin Eikenberry

Kevin Eikenberry is Chief Potential Officer of leadership and learning consulting company The Kevin Eikenberry Group. He is the bestselling author or co-author of 7 books, including the forthcoming Flexible Leadership. He has been named to many lists of top leaders, including twice to Inc. magazine’s Top 100 Leadership and Management Experts in the World. His podcast, The Remarkable Leadership Podcast, has listeners in over 90 countries.

Website: The Kevin Eikenberry Group

LinkedIn Profiles: Kevin Eikenberry | The Kevin Eikenberry Group

Book: Flexible Leadership: Navigate Uncertainty and Lead with Confidence

What you will learn

Understanding the essence of flexible leadership
Balancing consistency and adaptability in decision-making
Embracing “both/and thinking” to navigate complexity
Exploring the power of context in leadership strategies
Mastering the art of asking vs. telling
Building habits of reflection and intentionality
Developing mental fitness for effective leadership

Episode Resources

People: Carl Jung, F. Scott Fitzgerald, David Snowden
Book: Flexible Leadership: Navigate Uncertainty and Lead with Confidence
Frameworks/Concepts: Myers-Briggs, Cynefin framework, Confidence-competence loop
Organizations/Companies: The Kevin Eikenberry Group
Technical Terms: Leadership style, “both/and thinking”, Compliance vs. commitment, Ask vs. tell, Command and control, Sense-making, Plausible cause analysis

Transcript

Ross Dawson: Kevin, it is wonderful to have you on the show.

Kevin Eikenberry: Ross, it’s a pleasure to be with you. I’ve had conversations about this book for podcasts. This is the first one that’s going to go live to the world, so I’m excited about that.

Ross: Fantastic.
So the book is Flexible Leadership: Navigate Uncertainty and Lead with Confidence. What does flexible leadership mean?

Kevin: Well, that’s a pretty good starting question. Here’s the big idea, Ross: so many people have come up in leadership and taken assessments of one sort or another. They’ve done StrengthsFinder or a leadership style assessment, and it’s determined that they are a certain style or type. That’s useful to a point, but it becomes problematic beyond that. Humans are pattern recognizers, so once we label ourselves as a certain type of leader, we tend to stick to that label. We start thinking, “This is how I’m supposed to lead.” To be a flexible leader means we need to start by understanding the context of the situation. Context determines how we ought to lead in a given moment rather than relying solely on what comes naturally to us. Being a flexible leader involves making sense of the world intentionally and asking, “How do I need to lead in this moment to get the best results for my team and the outcomes we’re working towards?”

Ross: I was once told that Carl Jung, who wrote the typology of personalities that forms the foundation of Myers-Briggs, said something similar. I’ve never found the original source, but apparently, he believed the goal was not to fix ourselves at one point on a spectrum but to be as flexible as possible across it. So, we’re all extroverts and introverts, sensors and intuitors, thinkers and feelers.

Kevin: Exactly. None of us are entirely one or the other on these spectrums. They’re more like continuums. Take introvert vs. extrovert. Some people are at one extreme or the other, but no one is a zero on either side. The problem arises when we label ourselves and think, “This is who I am.” That may reflect your natural tendency, but it doesn’t mean that’s the only way you can or should lead.

Ross: One of the themes in your book is “both/and thinking,” which echoes what I wrote in Thriving on Overload.
You can be both extroverted and introverted. I see that in myself.

Kevin: Me too. Our world is so focused on “either/or” thinking, but to navigate complexity and uncertainty as leaders, we must embrace “both/and” thinking. F. Scott Fitzgerald once said something along the lines of, “The test of a first-rate intelligence is the ability to hold two opposing ideas in your mind at the same time and still function.” I’d say the same applies to leadership. To be highly effective, leaders must consider seemingly opposite approaches and determine what works best given the context.

Ross: That makes sense. Most people would agree that flexible leadership is a sound idea. But how do we actually get there? How does someone become a more flexible leader?

Kevin: The first step is recognizing the value of flexibility. Many leaders get stuck on the idea of consistency. They think, “To be effective, I need to be consistent so people know what to expect from me.” But flexibility isn’t the opposite of consistency. We can be consistent in our foundational principles—our values, mission, and core beliefs—while being adaptable in how we approach different situations. Becoming a flexible leader requires three things:

Intention – Recognizing the value of flexibility.
Sense-making – Understanding the context and what it requires of us.
Flexors – Knowing the options available to us and deciding how to adapt in a given situation.

Ross: This aligns with my work on real-time strategy. A fixed strategy might have worked in the past, but in today’s world, we need to adapt. At the same time, being completely flexible can lead to chaos.

Kevin: Exactly. Leaders need to balance consistency and flexibility, knowing when to lean toward one or the other. Leadership is about achieving valuable outcomes with and through others. This creates an inherent tension—outcomes vs. people. The answer isn’t one or the other; it’s both.
For every “flexor” in the book, the goal isn’t to be at one extreme of the spectrum but to find the balance that best serves the team and the context.

Ross: You’ve mentioned the word “flexor” a few times now. I think this is one of the real strengths of the book. It’s a really useful concept. So, what is a flexor?

Kevin: A flexor is the two ends of a continuum on something that matters. Let’s use an example. On one end, we have achieving valuable outcomes. On the other end, we have taking care of people. Some leaders lean toward focusing on outcomes—getting the work done no matter what. Others lean toward prioritizing their people—ensuring their well-being and development so outcomes follow. The reality is that leadership requires balancing both. Sometimes the context calls for one approach more than the other. For instance, in moments of chaos, compliance might be necessary to maintain safety or order. In other situations, you’ll need to inspire commitment for long-term success. A leader must constantly assess the context and decide where to lean on the spectrum.

Ross: That’s a great example. Another one might be between “ask” and “tell.”

Kevin: Yes, exactly! Leaders often believe they need to have all the answers, so they default to telling—giving directives and expecting people to follow. But sometimes, asking is far more effective. Your team members often have perspectives and information you don’t. By asking rather than telling, you gain insights, foster collaboration, and build trust. Of course, it’s not about always asking or always telling. It’s about understanding when to lean toward one and when the other might be more effective.

Ross: That makes sense. In today’s world, consultative leadership is highly valued, especially in certain industries. Many great leaders lean heavily on asking rather than telling.

Kevin: Absolutely, but even consultative leaders need to recognize when the situation calls for decisiveness.
If there’s urgency or a crisis, sometimes the team just needs clear instructions: “Here’s what we need to do.” Being a flexible leader means being intentional—understanding the context and adjusting your approach, even if it doesn’t align with your natural tendencies.

Ross: That brings us to the concept of sense-making. Leaders need to make sense of their context to decide where they stand on a particular flexor. How can leaders improve their sense-making capabilities?

Kevin: The first step is recognizing that context matters and that it changes. Many leaders rely on best practices, but those only work in clear, predictable situations. Our world is increasingly complex and uncertain. In such situations, we need to adopt “good enough” practices or experiment to find what works. To improve sense-making, leaders must build a mental map of their world. Is the situation clear, complicated, complex, or chaotic? This aligns with David Snowden’s Cynefin framework, which I reference in the book. By identifying the nature of the situation, leaders can adjust their approach accordingly.

Ross: The Cynefin framework is a fantastic tool, often used in group settings. You’re applying it here to individual leadership.

Kevin: Exactly. It’s not just about guiding group processes. It’s about helping leaders see the situation clearly so they can flex their approach.

Ross: That’s insightful. Leaders don’t operate in isolation—they’re part of an organizational context. How does a leader navigate their role while considering the expectations of their peers, colleagues, and supervisors?

Kevin: Relationships play a critical role. The better your relationships with peers and supervisors, the more you understand their styles and perspectives. This helps you navigate the context effectively. Sometimes, though, you may need to challenge others’ perspectives—respectfully, of course.
If someone is treating a situation as chaotic when it’s actually complex, your role as a leader may be to ask questions or provide a different perspective. Being intentional is key. Leadership often involves breaking habitual responses, pausing to assess the context, and deciding if a different approach is needed.

Ross: That’s a journey. Leadership habits are deeply ingrained. How do leaders move from their current state to becoming more flexible and adaptive?

Kevin: That’s the focus of the third part of the book—how to change your habits. First, leaders need to recognize that their natural tendencies might not always serve them best. Without this realization, no progress is possible. Next, they must build new habits, starting with regularly asking questions like: What’s the context here? What does this situation require of me? How did that approach work? Reflection is crucial. Leaders should consistently ask, “What worked, what didn’t, and what can I learn from this?” Another valuable practice is what I call “plausible cause analysis.” Instead of jumping to conclusions about why something happened, consider multiple plausible explanations. For example, if a team doesn’t respond to a question, don’t assume they’re disengaged. There could be several reasons—perhaps they need more time to think or the question wasn’t clear. By exploring plausible causes, leaders can choose responses that address most potential issues.

Ross: That’s a great framework for reflection and improvement. It also ties into mental fitness, which is so important for leaders.

Kevin: Exactly. During the pandemic, we worked extensively with clients on mental fitness—not just mental health. Mental fitness involves proactively building resilience, much like physical fitness. Reflection, gratitude, and self-awareness are all part of maintaining mental fitness. Leaders who invest in their mental fitness are better equipped to handle challenges and make sound decisions.
Ross: Let’s circle back to the book. What would you say is its ultimate goal?

Kevin: The goal of Flexible Leadership is to help leaders navigate uncertainty and complexity with confidence. For 70 years, leadership models have tried to simplify the real world. While those models are helpful, they’re inherently oversimplified. The ideas in the book aim to help leaders embrace the complexity of the real world, equipping them with tools to become more effective and, ultimately, wiser.

Ross: Fantastic. Where can people find your book?

Kevin: The book launches in March, but you can pre-order it now at kevineikenberry.com/flexible. That link will take you directly to Amazon. You can also learn more about our work at kevineikenberry.com. Ross, it’s been an absolute pleasure. Thanks for having me.

Ross: Thank you so much, Kevin!

The post Kevin Eikenberry on flexible leadership, both/and thinking, flexor spectrums, and skills for flexibility (AC Ep72) appeared first on amplifyingcognition.
Nov 27, 2024 • 35min

Alexandra Diening on Human-AI Symbiosis, cyberpsychology, human-centricity, and organizational leadership in AI (AC Ep71)

“It’s not just about the AI itself; it’s about the way we deploy it. We need to focus on human-centric practices to ensure AI enhances human potential rather than harming it.” – Alexandra Diening

About Alexandra Diening

Alexandra Diening is Co-founder & Executive Chair of the Human-AI Symbiosis Alliance. She has held a range of senior executive roles, including as Global Head of Research & Insights at EPAM Systems. Through her career she has helped transform over 150 digital innovation ideas into products, brands, and business models that have attracted $120 million in funding. She holds a PhD in cyberpsychology, and is author of Decoding Empathy: An Executive’s Blueprint for Building Human-Centric AI and A Strategy for Human-AI Symbiosis.

Website: Human-AI Symbiosis

LinkedIn Profiles: Alexandra Diening | Human-AI Symbiosis Alliance

Book: A Strategy for Human-AI Symbiosis

What you will learn

Exploring the concept of human-AI symbiosis
Recognizing the risks of parasitic AI
Bridging neuroscience and artificial intelligence
Designing ethical frameworks for AI deployment
Balancing excitement and caution in AI adoption
Understanding AI’s impact on individuals and organizations
Leveraging practical strategies for mutualistic AI development

Episode Resources

Organizations and Alliances: Human AI Symbiosis Alliance, Fortune 500 companies
Books: A Strategy for Human AI Symbiosis
Technical Terms: Human-AI symbiosis, Generative AI, Cognitive sciences, Cyberpsychology, Neuroscience, AI avatars, Algorithmic bias, Responsible AI, Symbiotic AI

Transcript

Ross Dawson: Alexandra, it’s a delight to have you on the show.

Alexandra Diening: Thank you for having me, Ross. Very happy to be here.

Ross: So you’ve recently established the Human AI Symbiosis Alliance, and that sounds very, very interesting. But before we dig into that, I’d like to hear a bit of the backstory. How did you come to be on this journey?

Alexandra: It’s a long journey, but I’ll try to make it short and quite interesting.
I entered the world of AI almost two decades ago, and it was through a very unconventional path—neuroscience. I’m a neuroscientist by training, and my focus was on understanding how the brain works. Of course, if you want to process all the neuroscience data, you can’t do it alone. Inevitably, you need to incorporate AI. This was my gateway to AI through neuroscience. At the time, there weren’t many people working on this type of AI, so the industry naturally pulled me in. I transitioned to working on business applications of AI, progressively moving from neuroscience to AI deployment within business contexts. I worked with Fortune 500 companies across life sciences, retail, finance, and more. That was the first chapter of my entry into the world of AI.

When deploying AI in real business scenarios, patterns start to emerge. Sometimes you succeed; sometimes you fail. What I noticed was that when we succeeded and delivered long-term tangible business value, it was often due to a strong emphasis on human-centricity. This focus came naturally to me, given my background in cognitive sciences.

This emphasis became even more critical with the emergence of generative AI. Suddenly, AI was no longer just a background technology crunching data and influencing decisions behind the scenes. It became something we could interact with using natural language. AI started capturing emotions, building relationships, and augmenting our capabilities, emerging as a kind of social, technological actor. This led to our hypothesis that generative AI is the first technology with a natural propensity to build symbiotic relationships with humans. Unlike traditional technologies, there is mutual interaction. While “symbiosis” may sound romantic, it can manifest across a spectrum of outcomes, from positive (mutualistic) to negative (parasitic). In business, I started to see the emergence of parasitic AI—AI that benefits at the expense of humans or organizations.
This realization began to trouble me deeply. While I was working for multi-billion-dollar tech companies, I advocated for Responsible AI and human-centric practices. However, I realized the impact I could have was limited if this remained a secondary concern in corporate agendas. This led to the establishment of the Human AI Symbiosis Alliance. Its mission is to educate people about the risks of parasitic AI and to guide organizations in steering AI development toward mutualistic outcomes.

Ross: That’s… well, there’s a lot to dig into there. I look forward to delving into it. You referred to being human-centric, and I think you seem to be a very human-centric person. One point that stood out was the idea of generative AI’s propensity for symbiosis. Hopefully, we can return to that. But first, you did your PhD in cyberpsychology, I believe. What is cyberpsychology, and what did you learn?

Alexandra: Cyberpsychology, when I started, was quite unconventional and still is to some degree. It combines psychology, medical neuroscience, human-computer interaction, marketing science, and technology. The focus is on how human interaction and behavior change within digital environments. In my case, it was AI-powered digital environments, like social media and AI avatars. Part of my research examined how long-term exposure to these environments impacts behavior, emotions, and even biology. For example, interacting with AI-powered technologies over time can alter brain connectivity and structure. The goal was to identify patterns and, most importantly, help tech companies design technologies that uplift human potential rather than harm it.

Ross: Today, we are deeply immersed in digital environments and interacting with human-like systems. You mentioned the importance of fostering positive symbiosis. This involves designing both the systems and human behavior. What are the leverage points to achieve a constructive symbiosis between humans and AI?
Alexandra: The most important realization is that AI itself isn’t a living entity. It lacks consciousness, intent, and agency. The focus should be on our actions—how we design and deploy AI. While it’s vital to address biases in AI data and ensure proper guardrails, the real danger lies in how AI is deployed. Deployment literacy is key. Many tech companies treat AI like traditional software, but AI requires a completely different lifecycle, expertise, and processes. Awareness and education about this distinction are essential.

Beyond education, we need frameworks to guide deployment. Companies must not only enhance employee efficiency but also ensure that skills aren’t eroded over time, turning employees into efficient yet unskilled workers.

Measurement is another critical aspect. Traditional success metrics like productivity and efficiency are insufficient for AI. Companies must consider innovation indices, employee well-being, and brand relationships. AI’s impact needs to be evaluated with a long-term perspective.

Finally, there are unprecedented risks with AI. For example, recent events, like a teenager tragically taking their life after interacting with an AI chatbot, highlight the dangers. Companies must be aware of these risks and prioritize expertise, architecture, and metrics that steer AI deployment away from parasitism.

Ross: One of the things I understand you’re launching is the Human AI Symbiosis Bible. What is it, what does it look like, and how can people use it to put these ideas into practice?

Alexandra: The “Human AI Symbiosis Bible” is officially titled A Strategy for Human AI Symbiosis. It’s already available on Amazon, and we’re actively promoting it. The book acts as a guide for stakeholders in the AI space, transitioning them from traditional software development practices to AI-specific strategies. The content is practical and hands-on, tailored to leaders, designers, engineers, and regulators.
It starts with foundational concepts about human-AI symbiosis and its importance. Then it provides frameworks and processes for avoiding common pitfalls. What sets it apart is its practicality. It’s not a theoretical book that simply outlines risks and concepts. We include over 70 case studies from Fortune 500 companies, showcasing real-world examples of AI failures and successes. These case studies highlight lessons learned so readers can avoid repeating the same mistakes. We also had 150 contributors, including 120 industry practitioners directly involved in building and deploying AI. The book synthesizes their insights and experiences, offering actionable guidance rather than prescribing a single “correct” way to develop and deploy AI. It’s a resource to help leaders ask the right questions, make informed decisions, and prepare for what we call the AI game.

Ross: Of course, everything you’re describing is around a corporate or organizational context—how AI is applied in organizations. You suggest that every aspect of AI adoption should align with the human-AI symbiosis framework.

Alexandra: Absolutely. The message is clear: organizations must go beyond viewing AI as merely a technological or data exercise. They need to understand its profound effects on the human factor—both employees and customers. As we’ve discussed, generative AI inherently influences human behavior. Organizations must decide how they want this symbiosis to manifest. Do they want AI to augment human potential and drive mutual benefits, or allow parasitic patterns to emerge, harming individuals and the organization in the long term?

Ross: You and I might immediately grasp the concept of human-AI symbiosis, but when you present this in a corporate boardroom, some people might be puzzled or even resistant. How do you communicate these ideas effectively to business leaders?

Alexandra: It’s essential to avoid letting the conversation become too fluffy or esoteric.
When introducing human-AI symbiosis, we frame the discussion around a tangible enemy: parasitic AI. No company wants to invest time, money, and resources into deploying AI only to have it harm their organization. We start by defining parasitic AI and sharing quantified use cases, including financial costs and operational impacts. This approach grounds the conversation in real-world stakes. From there, we guide leaders through identifying parasitic patterns in their organization and preventing them. By addressing the risks, we create space for mutualistic AI to thrive. This framing—focusing on preventing harm—proves very effective in getting leaders engaged and invested.

Ross: What you’re describing seems to extend beyond individual human-AI interactions to an organizational level—symbiosis between AI and the entire organization. Is it one or the other, or both?

Alexandra: It’s both. On the individual level, if you enhance an employee’s productivity but they become disengaged or leave the organization, it ultimately harms the company. Similarly, if employees become more efficient but lose critical skills over time, the company’s ability to innovate is compromised. The connection between individual outcomes and organizational success is inseparable. Organizations must consider how AI impacts employees on a personal level and translate those effects into broader business objectives like resilience, innovation, and long-term sustainability.

Ross: It’s been almost two years since the “ChatGPT moment” that changed how many view AI. As AI capabilities continue to evolve rapidly, what are the most critical leverage points to drive the shift toward human-AI symbiosis?

Alexandra: It starts with literacy and awareness. Leaders, innovators, and engineers must understand that AI is fundamentally different from traditional software. The old ways of working don’t apply anymore, and clinging to them will lead to mistakes.
Education is the first pillar, but it must be followed by practical tools and frameworks. People need guidance on what to do and how to do it. Case studies are crucial here—they provide real-world examples of both successes and failures, demonstrating what works and what doesn’t. Lastly, we need regulatory guardrails. I often use the analogy of a driving license. You wouldn’t let someone drive a car without proper training and certification, yet we have people deploying AI systems without sufficient expertise. Regulation must define minimum requirements for AI deployment to prevent harm.

Ross: That ties into people’s attitudes toward AI. Surveys often show mixed feelings—excitement and nervousness. In an organizational context, how do you navigate this spectrum of emotions to foster transformation?

Alexandra: The key is to meet people where they are, whether they’re excited or scared. Listen to their concerns and validate their perspectives. Neuroscience tells us that most decisions are driven by emotion, so understanding emotional responses is critical. The goal is to balance excitement and caution. Pure excitement can lead to reckless adoption of AI for its own sake, while excessive fear can result in resistance or harmful practices, like shadow AI usage by employees. Encouraging a middle ground—both excited and cautious—creates a productive mindset for decision-making.

Ross: That’s a great way to frame it—balancing excitement with due caution. So, as a final thought, what advice would you give to leaders implementing AI?

Alexandra: First, educate your teams. Don’t pursue AI just because it’s trendy or looks good. Many AI proofs of concept never reach production, and some shouldn’t even get that far. Understand what you’re getting into and why. Second, ensure you have the right expertise. There are many self-proclaimed AI experts, but true expertise comes from long-term experience. Verify credentials and include at least one seasoned expert in your team.
Third, go beyond technology and data. Focus on human factors, ethics, and responsible AI. Consider how AI will impact employees, customers, and society at large. Fourth, establish meaningful metrics. Productivity and efficiency are important, but so are innovation, employee well-being, and long-term brand value. Measure what truly matters for your organization. Finally, get a third-party review. Independent assessments can spot parasitic patterns early and help course-correct. It’s a small investment for significant protection.

Ross: That’s excellent advice. Identifying parasitic AI requires awareness and understanding, and your framing is incredibly valuable. How can people learn more about your work?

Alexandra: Visit our website at h-aisa.com. We publish resources, case studies, expert interviews, and event details. You can also find our book, A Strategy for Human AI Symbiosis, on Amazon or through our site. We’re actively engaging with universities, conferences, NGOs, and media to spread awareness. We’ll also host an event in Q1 2025. For updates, follow us on LinkedIn and join the Human AI Symbiosis Alliance group.

Ross: Fantastic. We’ll include links to your resources in the show notes. Thank you for sharing your insights and for your work in advancing human-AI symbiosis. It’s an essential and positive framework for organizations to adopt.

Alexandra: Thank you, Ross. It was a pleasure.
Nov 20, 2024 • 41min

Kevin Clark & Kyle Shannon on collective intelligence, digital twin elicitation, data collaboratives, and the evolution of content (AC Ep70)

“What these tools allow you to do is very, very quickly go from an idea to sort of an 80% manifestation of it. It’s not just about the technology—it’s about understanding how, when, and why to use it to unlock collective intelligence.” – Kyle Shannon

“We’ve discovered you can externalize the voice in your head into something you can have a dialogue with, creating reflective moments that result in documentation, not fleeting thoughts. That’s transformative.” – Kevin Clark

About Kevin Clark & Kyle Shannon

Kevin Clark is the President and Federation Leader of Content Evolution, a global consulting ecosystem working in brand, customer experience, business strategy and transformation. He previously worked for IBM as Program Director, Brand & Values Experience. He is on the board of numerous companies and is the author of numerous articles, book chapters, and books including Brandscendence.

Kyle Shannon is Founder & CEO of video production company Storyvine, Founder of collaborative community the AI Salon, and Chief Generative Officer of Content Evolution. Previous roles include EVP Creative Strategy at The Distillery and Co-Founder of Agency.com.
Websites: www.contentevolution.net | www.thesalon.ai

LinkedIn Profiles: Kevin Clark | Kyle Shannon

Book: Collective Intelligence in the Age of AI

What you will learn

Exploring the power of digital twins in collaboration
Overcoming creative blocks with generative AI tools
Asking better questions to unlock AI’s potential
Designing structured interviews for personalized AI
Understanding collective intelligence in the digital age
Rapid prototyping to test and refine ideas quickly
Reshaping industries with untapped organizational data

Episode Resources

People: Emily Shaw, Aristotle, Steve Jobs
Organizations: Content Evolution, CoLab, Storyvine, AI Salon, Fortune 500, Gartner
Technical Terms: Digital twins, Generative AI, Large Language Models (LLMs), GPT, Notebook LM, Transformer architecture, Data collaboratives
Books, Shows, and Titles: Collective Intelligence and AI, Candy Ears, The Hitchhiker’s Guide to the Galaxy

Transcript

Ross Dawson: Kevin and Kyle, wonderful to have you on the show.

Kevin Clark: Pleasure to be here.

Kyle Shannon: Ross, great to be here.

Ross: So, you created a book recently called Collective Intelligence and AI. I’d like to pull back to the big picture of where this fits into what you’re doing. This organization is called Content Evolution. How did you get to this place of creating this book and the other things you are doing using AI to assist in your work?

Kevin: Well, Content Evolution itself is a federation of companies that are aligned. We’re all thoughtful leaders and innovators and have been at it for 23 years now. This technology is helping us pull the thread forward a lot faster. As Kyle will describe in a moment, we have almost 30 digital agents—or what we call digital advisors—of ourselves. As a result, we have a collective of those, and we can all write together. We’ve published articles and done all kinds of things. This book is a particular expression between the two of us because we’ve been talking to each other for over a decade.
It’s the residue of a decade’s worth of weekly conversations. There’s more to it—Kyle, say more. Kyle: When we started, we put together a group within Content Evolution called CoLab. The initial idea was, “Hey, this AI stuff is happening.” We started this probably a year and a half ago, almost two years ago. Generative AI was clearly evolving rapidly, so it felt important to explore. Like with all new technologies, you start with the tools, but very quickly, you ask, “Why? What are we trying to accomplish?” Content Evolution is an organization that’s a couple of decades old. One challenge was figuring out who’s in it and what talents exist within it. Initially, we asked, “Could we create a tool using generative AI to help someone discover the right person for a business problem?” That’s how it started. Over time, we realized we could create digital representations of ourselves—digital twins or digital advisors—that people could interact with 24/7. Even if Kevin wasn’t available, you could get his point of view. We’ve built 30 of these digital twins. They’re all in a single entity, a single GPT, where we can query them for the Content Evolution perspective on a topic. Individuals within that group can also comment on outputs. A big part of what we’re exploring now is understanding how, when, and why to use these tools. That’s far more fascinating than just the technology itself. Kevin: By the way, Kyle is the world’s first Chief Generative Officer. We didn’t put AI in the title because being generative is more important than the specific technologies you use. It’s about the practices, methodologies, and discernment of when to apply them—and sometimes, when to set them aside. We’ve discovered you can overcome writer’s block quickly by having a prompted start for something you’re thinking about. We’re also learning to externalize the voice in our heads into something we can have a dialogue with. 
This creates reflective moments and produces documentation rather than fleeting thoughts. Fascinating, isn’t it? Ross: Absolutely. The title Chief Generative Officer feels more appropriate, given the context. AI is just a set of tools. Kyle: Exactly. You can generate content with the tools or on your own. It could even be a hybrid. You can also generate revenue or other outcomes. The generative aspect goes beyond just the tools. Ross: The questions you raised are exactly the kinds of questions I wanted to ask. Starting with the basics, how are these digital twins set up? Are they based on system prompts or custom instructions for commercial LLMs? Kyle: Right now, they’re custom GPTs, but we’ve experimented with other platforms like Poe and Claude. Initially, we wanted to scrape LinkedIn profiles to discover expertise within Content Evolution. But we realized a LinkedIn profile is a very thin, historical slice of who someone is. It doesn’t reflect how they talk, think, or solve problems. We designed a structured interview with 27 questions across various categories. This interview digs into who someone is today, their inspirations, problem-solving approaches, worldview, and more. The answers to these questions form the foundational data for a custom GPT with a tailored prompt. Ross: So, for someone in your network, do you conduct a voice or text interview for these questions? Kevin: That’s a great question because there’s a difference. Kyle: We learned that when people wrote their responses in text, their digital twins turned out horrible—just bad. People don’t write the same way they talk. We now conduct video interviews where we go through the structured questions interactively. As the interviewer, if I notice someone hasn’t gone deep enough or gets excited about something but cuts themselves off, I’ll ask them to expand. Once we made this interactive, the digital twins came to life. Kevin: It takes about 45 minutes to complete the interview. 
The questions are designed to be unusual, going beyond superficial answers. People are often surprised by the depth of the questions. Kyle: One of my favorites, which was developed by Joke Gamble, is: “Describe your career in three acts.” It frames the career as a journey or drama, putting you in a different mental space. The quality of the questions is everything. Kevin: Exactly. The quality of the question determines the quality of the answer from a large language model. At Content Evolution, our original tagline was “Be Intentional.” For 20 years, we’ve challenged our clients to ask better questions. That’s what we’ve been practicing all along. Kyle: Asking better questions is the core of being a good prompt engineer. It’s about having expertise but also being able to communicate across disciplines. Our team members have this cross-disciplinary ability, which makes us well-suited to leverage this technology. Ross: That’s a key point. Even though the answers from LLMs are improving, the most important thing remains the question. It reminds me of The Hitchhiker’s Guide to the Galaxy—you may know the answer, but asking the right question is crucial. Kyle: Exactly. Inside the Heart of Gold with the improbability engine, you never know what’ll come up. Kevin: Right. I’d also argue that this technology is redeeming the liberal arts degree. It enables specialization across disciplines, encouraging lifelong learners to embrace a generalist perspective. It’s about knowing how to organize and synthesize human knowledge. Ross: Absolutely. Humans excel at synthesis, and now we have access to diverse ideas that nurture that capability. From the structured interviews, how do you translate the data into a GPT? Kyle: We made strategic decisions for our official Content Evolution digital advisors. All of them share the same structural data: the interview forms the core, and every twin has the same system prompt. If we update the core prompt, it applies to all of them. 
The collection of 30 twins also has its own prompt. Some members have created duplicates of their twins and added their writings, articles, books, and papers. These are different types of GPTs—one captures the person’s essence, and the other their body of work. It’s fascinating because the core data makes the twins inherit the personalities of the people behind them. Kevin: Here’s a fun example. Kyle met a podcaster, Emily Shaw, who has a show called Candy Ears. She experimented with our digital twins, taking voice samples to mimic how we sound. Then she asked our twins questions and recorded their answers. Kyle: We first answered the questions ourselves. Then she played the twins’ responses, and we rated them. Kevin: I rated my twin a 7.5 out of 10. My wife, Heidi, said it sounded just like me and thought it deserved a 9 or 10. She’s lived with me for almost 50 years, so I’ll take her word for it! The question was something broad, like, “What is the meaning of life?” The alignment between my response and my twin’s was striking. Kyle: For me, the text responses were spot on. However, the voice delivery didn’t match my dynamic range—I talk loudly, softly, quickly, and slowly. For someone with a monotone style, the twins are nearly identical. Ross: Voice rendition is a challenge, but we’re on the verge of improving it. Kevin, you mentioned earlier that you use this group of 30 digital twins collectively. How does that work? Kevin: All the individual twins are in a common folder labeled “CE GPT Profile Complete.” When I write an article for LinkedIn, I can query the folder: “Who in the community would have something to say about this?” It pulls relevant quotes and drafts an article, complete with an executive summary and attributions like, “Kyle says this,” or “Cindy Coon says that.” Before publishing, I share the draft for feedback to ensure accuracy. Even if people don’t actively use this technology, engaging with it leaves a residue that makes them better. 
For instance, I couldn’t spell well growing up, but using spell check gave me immediate feedback and improved my skills. Similarly, interacting with this tech enhances capabilities over time. Ross: So these are custom GPTs fine-tuned with your methodology? Kyle: Yes, that’s correct. They’re private but also available in the GPT store for public interaction as part of our marketing. People can experience what Kyle Shannon or the collective might say on various topics. Kevin: We also host a weekly program called Content Evolution: New World, where people can call in. Sometimes, we feed the transcripts into the GPT profile to generate LinkedIn posts summarizing the discussion. It does a decent job turning an hour-long conversation into a seven-paragraph post. Ross: Kyle, you mentioned the book Collective Intelligence and AI. What’s the process from idea to a finished, shippable product? Kyle: Kevin often says the book reflects a decade of our conversations. We meet weekly, and I’m the CEO of Storyvine, where Kevin is our senior advisor. This collaboration has been ongoing for years. Personally, when I get excited about new technology, I dive in. Large language models initially felt counterintuitive—simple probability calculators, yet producing outputs that felt human. One day, I saw a tweet: “Artificial intelligence is the collective intelligence of humanity.” That hit me. The magic isn’t in how the tool works; it’s in what it’s trained on. I realized it allows us to collaborate with everyone who’s contributed to the internet. I shared this insight with Kevin, and it sparked deeper discussions about collective intelligence—not just in machines but also in our CoLab. The idea evolved, and tools helped us quickly go from concept to an 80% draft. Kevin: After that conversation, I went into the tools, wrote some prompts, and told Kyle, “I just outlined this as a book. 
What do you think?” He mentioned a tool that could write the whole thing, but I wasn’t interested in going that route. I’m more of a policy person, while Kyle dives into current trends. He also has his community, the AI Salon, which is very popular with lots of opt-ins. We fed our manuscript into Notebook LM. It provided an interesting summary, but it also generated profound insights we hadn’t written. One example was: “The authors are saying it’s like being given access only to the children’s section of the library, without reading the adult books.” That was exactly the point. Much of human knowledge—especially advanced knowledge—is inaccessible because it’s behind firewalls, paywalls, or hasn’t been digitized. We’re only at the beginning of this journey. Ross: That’s such a compelling metaphor—children’s versus adult sections. There’s so much knowledge that remains untapped because it hasn’t been captured or digitized. It’s an important insight. Kyle: Agreed. One of the things we’ve written about is data collaboratives. Creating shared data lakes is crucial for organizations to think about and act on. Ross: What are some examples of data collaboratives you’ve seen or worked with? Kyle: The concept isn’t new—trade associations are a simple example. They bring together organizations with common interests, enabling them to share best practices without crossing legal boundaries. Large consulting firms also facilitate sharing across industries while respecting confidentiality. AI accelerates this process because it doesn’t care about your industry—it can recognize parallels, analogize, and bring insights to bear faster than ever before. It just needs a prompt to get started. Kevin: What amazes me about AI, particularly transformer architecture, is how it can hoover up enormous amounts of data and derive value with enough compute power. My organization has been around for over a decade. 
If I think about all the knowledge trapped in PowerPoint presentations, sales documents, and more, it’s substantial. We could plop all of it into an AI model and instantly gain insights. Now imagine a Fortune 500 company or a trade association pooling their data. The value trapped in unstructured formats is immense. With just a little organization, they could unlock incredible potential. Kyle: Often, this data sits on individual hard drives, disconnected from the cloud. Gartner predicts that in the next five to seven years, employment agreements will include clauses allowing companies to replicate your work processes and contributions. This will become part of the terms and conditions for employment. Ross: That’s a fascinating point. To wrap up, what’s the generative roadmap for Content Evolution? What’s next for Kevin and Kyle? Kyle: One thing I’m excited about is using the collection of digital twins to explore ideas in unique ways. For instance, if we have a new piece of legislation or an article, we can query the twins for 10 different perspectives—some close to my thinking, others wildly different. We’re now working on a system that allows us to collaborate with people based on how they think and solve problems, rather than just their professional expertise. I can have a brainstorming session with people similar to me or choose those who think completely differently to challenge my ideas. This could even extend to historical figures—where would Aristotle or Steve Jobs sit on that spectrum? That’s what excites me. Kevin: Let me add to that. On Tuesday, Kyle and I had a conversation that ended at 10:55 AM. By noon, Kyle had already prototyped and demoed the idea we discussed. That’s the power of rapid prototyping—there are no bad ideas because you can quickly test them. Another key aspect is transcending limitations like time zones or language barriers. Right now, you can’t always get on someone’s calendar. 
But with digital twins, people can access our knowledge anytime, in their preferred language, and then decide if they need to speak to us directly. This approach transforms business and how we engage with the world. Our challenge is often being so far ahead of the curve that people initially don’t understand what we’re talking about. That’s part of the innovator’s dilemma. But we’re excited to keep pushing forward. Ross: That’s fantastic. We’ll include links to everything in the show notes. Where can people learn more about what you’re doing? Kyle: Visit contentevolution.net. One of the first tools we built there is the Challenge Engine. You input a business challenge, and instead of giving answers, it generates questions to guide your thinking. You can also find us on the GPT Store by searching for “CE Profiles.” Kevin: For those interested in staying updated on this space, I highly recommend Kyle’s AI Salon. It’s a vibrant community discussing AI and its implications. Kyle, where can people find it? Kyle: The URL is thesalon.ai. We host bi-monthly meetings featuring speakers and discussions. The focus is on exploring what we can do with AI now that it’s accessible to everyone—not just engineers and mathematicians. Ross: Great. Thank you so much for your time and insights, Kevin and Kyle. It’s been wonderful hearing about your work. Kyle: Thank you, Ross. It’s been great to be here. Kevin: Absolutely. Thanks, Ross.   The post Kevin Clark & Kyle Shannon on collective intelligence, digital twin elicitation, data collaboratives, and the evolution of content (AC Ep70) appeared first on amplifyingcognition.
Nov 6, 2024 • 38min

Samar Younes on pluridisciplinary art, AI as artisanal intelligence, future ancestors, and nomadic culture (AC Ep69)

“To me, envisioning a future should involve elements anchored in nature, modern materials, and sustainable practices, challenging Western-centric constructs of ‘futuristic.’ Artisanal intelligence is about understanding material culture, combining traditional craft with modern techniques, and redefining what feels ‘modern.’” – Samar Younes About Samar Younes Samar Younes is a pluridisciplinary hybrid artist and futurist working across art, design, fashion, technology, experiential futures, culture, sustainability and education. She is founder of SAMARITUAL which produces the “Future Ancestors” series, proposing alternative visions for our planet’s next custodians. She has previously worked in senior roles for brands like Coach and Anthropologie and has won numerous awards for her work. LinkedIn: Samar Younes Website: www.samaritual.com University Profile: Samar Younes What you will learn Exploring the intersection of art, AI, and cultural identity Reimagining future aesthetics through artisanal intelligence Blending traditional craftsmanship with digital innovation Challenging Western-centric ideas of “modern” and “futuristic” Using AI to amplify narratives from the Global South Building a sustainable, nature-anchored digital future Embracing imperfection and creativity in the age of AI Episode Resources Silk Road Web3 Metaverse Orientalist AI (Artificial Intelligence) Artisanal Intelligence Dubai Future Forum Neuroaesthetics ChatGPT Runway ML Midjourney Archives of the Future Luma Large Language Model (LLM) GAN Model Transcript Ross Dawson: Samar, it’s awesome to have you on the show. Samar Younes: Thank you so much. Thanks for having me. Ross: So you describe yourself as a pluridisciplinary hybrid artist, futurist, and creative catalyst. That sounds wonderful. What does that mean? What do you do? Samar: What does that mean? It means that I am many layers of the life that I’ve had.
I started my training as an architect and worked as a scenographer and set designer. I’ve always been interested in bringing public art to the masses and fostering social discourse around public art and art in general. I’ve also always been interested in communicating across cultures. Growing up as a child of war in Beirut, among various factions—religious and cultural—it was a diverse city, but it was also a place where knowledge and deep, meaningful discussions were vital to society. Having a mother who was an artist and a father who was a neurologist, I became interested in how the brain and art converge, using art and aesthetics to communicate culture and social change. In my career, I began in brand retail because, at the time, public art narratives and opportunities to create what I wanted were limited. So I used brand experiences—store design, window displays, art installations, and sensory storytelling—as channels to engage people. As the world shifted more towards digital, I led brands visually, aiming to bridge digital and physical sensory frameworks. But as Web3, the metaverse, and other digital realms emerged, I found that while exciting, they lacked the artisanal textures and layers that were important to me. Working across mediums—architecture, fashion, design, food—I saw artificial intelligence as akin to working with one’s hands, very similar to what artisans do. That’s how I got into AI, as a challenge to amplify narratives from the Global South, reclaiming aesthetics from my roots. Ross: Fascinating. I’d love to dig into something specific you mentioned: AI as artisanal. What does that mean in practice if you’re using AI as a tool for creativity? Samar: Often, when people use AI, specifically generative AI with prompts or images, they don’t realize the role of craftsmanship or the knowledge of craft required to create something that resonates. 
Much digital imagery has a clinical, dystopian aesthetic, often cold and disconnected from nature or biomorphic elements, which are part of the world crafted by hand. To me, envisioning a future should involve elements anchored in nature, modern materials, and sustainable practices, challenging Western-centric constructs of “futuristic.” Ancient civilizations, like Egypt’s with the pyramids, exemplify timeless modernity. Similarly, the Global South has always been avant-garde in subversion and disruption, but this gets re-appropriated in Western narratives. Artisanal intelligence is about understanding material culture, combining traditional craft with modern techniques, and redefining what feels “modern.” Ross: Right. AI offers a broad palette, not just in styles from history but also potentially in areas like material science and philosophy. It supports a pluridisciplinary approach, assisted by the diversity of AI training data. Samar: Exactly. When I think of AI, I see data sets as materials, not just images. If data is a medium, I’m not interested in recreating a Picasso. I see each data set as a material, like paint on a palette—acrylic, oil, charcoal—with the AI system as my brush. Creating something unique requires understanding composition, culture, and global practices, then weaving them together into a new, personal perspective. Ross: One key theme in your work is merging multiple cultural and generational frames using technology. How does technology enable this? Samar: Many AI tools are biased and problematic. When I tried an exercise creating a “Hello Kitty” version in different cultural stereotypes, I found disturbing, inaccurate, or even racist results, especially for Global South or Middle Eastern cultures. To me, cultures are fluid and connected, shaped by historical nomadism rather than nationalistic borders. My concept of the “future ancestor” explores sustainability and intergenerational, transcultural constructs.
Cultures have always been fluid and adaptable, but modern consumerism and digital borders often force rigid identity constructs. In prompting AI, I describe culture fluidly, resisting prescribed stereotypes to create atypical, nuanced representations. Ross: Agreed. We’re digital nomads today, traveling and exploring in new ways. But AI training data is often Western-biased, so artists can’t rely on defaults without reinforcing these biases. Samar: The artist’s role is to subvert and hack the system. If you don’t have resources to train your own model, I believe there’s power in collectively hacking existing models by feeding them new, corrective data. The more people create diverse data, the more it influences these systems. Understanding how to manipulate AI systems to your needs helps shape their evolution. Ross: Technology is advancing so quickly, transforming art, expression, and identity. What do you see as the implications of this acceleration? Samar: I see two scenarios: one dystopian, one more constructive. Ideally, technology fosters nurturing, empathetic futures, which requires slower, thoughtful development. The current speed, however, is driven by profit and the extractive aims of industrialization—manipulating human needs for profit or even exploiting people without compensation. This dystopia is evident in algorithmic manipulation and censorship. I wish the acceleration focused on health and well-being rather than extractive technologies. We should prioritize technologies that support work-life balance, health, and sustainable futures over those driven by profit. Ross: Shifting gears, can you share more specifics on tools you use or projects you’re working on? Samar: Sure. I use several tools like Claude, ChatGPT, Runway ML for animations, and Midjourney for visuals. I have an archive of 50,000+ images I’ve created, nurturing them over time, blending them across tools.
Building a unique perspective is key—everyone has a distinct point of view rooted in their cultural and personal experiences. Recent projects include my “Future Ancestor” project and a piece called “Future Custodian,” which I co-wrote with futurist Geraldine Wharry. It’s a speculative narrative about a tribe called the “KALEI Tribe,” where fashion serves as a tool of healing and self-expression. Ross: What’s the process behind creating these? Samar: The “KALEI Tribe” is a speculative piece set in 2034, where nomadic survival uses fashion as self-expression and well-being. Fashion is reframed as healing and sustainable, rather than for fast consumption. We explore a future where we co-exist with sentient beings beyond humans. This concept emerged from my archive and AI-created imagery, blending perspectives with Geraldine Wharry for Spur Magazine in Japan. I also recently did a food experience project that didn’t directly use AI but engaged with artisanal intelligence. It imagined ancestral foods, blending speculative thinking with our senses, rewilding how we think of food. Ross: That’s brilliant—rewilding ourselves and pushing against domestication. Samar: Exactly. The industrial era pushed repetition and perfection, taming our humanity’s wild, playful side. I hope to use AI to rewild our imaginations, embracing imperfections, chaos, and organic unpredictability. The system’s flaws inspire me, adding a serendipitous quality, much like working with hands-on materials like clay or fabric, where outcomes aren’t perfectly predictable. Ross: Wonderful insights. Where can people find out more about your work? Samar: They can visit my website at samaritual.com, where I share workshops and sessions. I’m also active on Instagram (@samaritual) and LinkedIn. Ross: All links are in the show notes. Thanks for such inspiring, insightful work. Samar: Thank you so much for having me. Hopefully, we’ll meet soon.
The post Samar Younes on pluridisciplinary art, AI as artisanal intelligence, future ancestors, and nomadic culture (AC Ep69) appeared first on amplifyingcognition.
Oct 30, 2024 • 36min

Jason Burton on LLMs and collective intelligence, algorithmic amplification, AI in deliberative processes, and decentralized networks (AC Ep68)

“When you get a response from a language model, it’s a bit like a response from a crowd of people, shaped by the preferences of countless individuals.” – Jason Burton About Jason Burton Jason Burton is an assistant professor at Copenhagen Business School and an Alexander von Humboldt Research fellow at the Max Planck Institute for Human Development. His research applies computational methods to studying human behavior in a digital society, including reasoning in online information environments and collective intelligence. LinkedIn: Jason William Burton Google Scholar page: Jason Burton University Profile (Copenhagen Business School): Jason Burton What you will learn Exploring AI’s role in collective intelligence How large language models simulate crowd wisdom Benefits and risks of AI-driven decision-making Using language models to streamline collaboration Addressing the homogenization of thought in AI Civic tech and AI’s potential in public discourse Future visions for AI in enhancing group intelligence Episode Resources Nature Human Behaviour How Large Language Models Can Reshape Collective Intelligence ChatGPT Max Planck Institute for Human Development Reinforcement learning from human feedback DeepMind Digital twin Wikipedia Algorithmic Amplification and Society Wisdom of the crowd Recommender system Decentralized autonomous organizations Civic technology Collective intelligence Deliberative democracy Echo chambers Post-truth People Jürgen Habermas Dave Rand Ulrike Hahn Hélène Landemore Transcript Ross Dawson: Jason, it is wonderful to have you on the show. Jason Burton: Hi, Ross. Thanks for having me. Ross: So you and 27 co-authors recently published in Nature Human Behaviour a wonderful article called How Large Language Models Can Reshape Collective Intelligence. I’d love to hear the backstory of how this paper came into being with 28 co-authors. Jason: It started in May 2023.
There was a research retreat at the Max Planck Institute for Human Development in Berlin, about six months or so after ChatGPT had really come into the world, at least for the average person. We convened a sort of working group around this idea of the intersection between language models and collective intelligence, something interesting that we thought was worth discussing. At that time, there were just about five or six of us thinking about the different ways to view language models intersecting with collective intelligence: one where language models are a manifestation of collective intelligence, another where they can be a tool to help collective intelligence, and another where they could potentially threaten collective intelligence in some ways. On the back of that working group, we thought, well, there are lots of smart people out there working on similar things. Let’s try to get in touch with them and bring it all together into one paper. That’s how we arrived at the paper we have today. Ross: So, a paper being the manifestation of collective intelligence itself? Jason: Yes, absolutely. Ross: You mentioned an interesting part of the paper—that LLMs themselves are an expression of collective intelligence, which I think not everyone realizes. How does that work? In what way are LLMs a type of collective intelligence? Jason: Sure, yeah. The most obvious way to think about it is these are machine learning systems trained on massive amounts of text. Where are the companies developing language models getting this text? They’re looking to the internet, scraping the open web. And what’s on the open web? Natural language that encapsulates the collective knowledge of countless individuals. By training a machine learning system to predict text based on this collective knowledge they’ve scraped from the internet, querying a language model becomes a kind of distilled form of crowdsourcing. 
When you get a response from a language model, you’re not necessarily getting a direct answer from a relational database. Instead, you’re getting a response that resembles the answer many people have given to similar queries. On top of that, once you have the pre-trained language model, a common next step is training through a process called reinforcement learning from human feedback. This involves presenting different responses and asking users, “Did you like this response or that one better?” Over time, this system learns the preferences of many individuals. So, when you get a response from a language model, it’s shaped by the preferences of countless individuals, almost like a response from a crowd of people. Ross: This speaks to the mechanisms of collective intelligence that you write about in the paper, like the mechanisms of aggregation. We have things like markets, voting, and other fairly crude mechanisms for aggregating human intelligence, insight, or perspective. This seems like a more complex and higher-order aggregation mechanism. Jason: Yeah. I think at its core, language models are performing a form of compression, taking vast amounts of text and forming a statistical representation that can generate human-like text. So, in a way, a language model is just a new aggregation mechanism. In an analog sense, maybe taking a vote or deliberating as a group leads to a decision. You could use a language model to summarize text and compress knowledge down into something more digestible. Ross: One core part of your article discusses how LLMs help collective intelligence. We’ve had several mechanisms before, and LLMs can assist in existing aggregation structures. What are the primary ways that LLMs assist collective intelligence? Jason: A lot of it boils down to the realization of how easy it is to query and generate text with a language model. It’s fast and frictionless. What can we do with that? 
One straightforward use is that, if you think of a language model as a kind of crowd in itself, you can use it to replace traditional crowdsourcing. If you’re crowdsourcing ideas for a new product or marketing campaign, you could instead query a language model and get results almost instantaneously. Crowdsourcing taps into crowd diversity, producing high-quality, diverse responses. However, it requires setting up a crowd and a mechanism for querying, which can be time and resource-intensive. Now, we have these models at our fingertips, making it much quicker. Another potential use that excites me is using language models to mediate deliberative processes. Deliberation is beneficial because individuals exchange information, allowing them to become more knowledgeable about a task. I have some knowledge, and you have some knowledge. By communicating, we learn from each other. Ross: Yeah, and there have been some other researchers looking at nudges for encouraging participation or useful contributions. I think another point in your paper is around aggregating group discussions so that other groups or individuals can effectively take those in, allowing for scaled participation and discussion. Jason: Yeah, absolutely. There’s a well-documented trade-off. Ideally, in a democratic sense, you want to involve everybody in every discussion, as everyone has knowledge to share. By bringing more people into the conversation, you establish a shared responsibility in the outcome. But as you add more people to the room, it becomes louder and noisier, making progress challenging. If we can use technological tools, whether through traditional algorithms or language models, we could manage this trade-off. Our goal is to bring more people into the room while still producing high-quality outputs. That’s the ideal outcome. Ross: So, one of the outcomes of bringing people together is decisions. There are other ways in which collective intelligence manifests, though. 
Are there specific ways, outside of what we’ve discussed, where LLMs can facilitate better decision-making? Jason: Yes, much of my research focuses on collective estimations and predictions, where each individual submits a number, which can then be averaged across the group. This works in contexts with a concrete decision point or where there’s an objective answer, though we often debate subjective issues with no clear-cut answers. In those cases, what we want is consensus rather than just an average estimate. For instance, we need a document that people with different perspectives can agree on for better coordination. One of my co-authors, Michael Baker, has shown that language models fine-tuned for consensus can be quite effective. These models don’t just repeat existing information but generate statements that identify points of agreement and disagreement—documents that diverse groups can look at and discuss further. That’s a direction I’d love to see more of. Ross: That may be a little off track, but it brings up the idea of hierarchy. Implicitly, in collective intelligence, you assume there’s equal participation. However, in real-world decision-making, there’s typically a hierarchy—a board, an executive team, managers. You don’t want just one person making the decision, but you still want effective input from various groups. Can these collective intelligence structures apply to create more participatory decision-making within hierarchical structures? Jason: Yeah, I think that’s one of the unique aspects of what’s called the civic technology space. There are platforms like Polis, for example, which level the playing field. In an analog room, certain power structures can discourage some people from speaking up while encouraging others to dominate, which might not be ideal because it undermines the benefits of diversity in a group. 
Using language models to build more civic technology platforms can make it more attractive for everyday people to engage in deliberation. It could help reduce hierarchies where they may not be necessary. Ross: Your paper also discusses some downsides of LLMs and collective intelligence. One concern people raise is that LLMs may homogenize perspectives, mashing everything together so that outlier views get lost. There’s also the risk that interacting too much with LLMs could homogenize individuals’ thinking. What are the potential downsides, and how might we mitigate them? Jason: There’s definitely something to unpack there. One issue is that if everyone starts turning to the same language model, it’s like consulting the same person for every question. If we all rely on one source for answers, we risk homogenizing our beliefs. Mitigating this effect is an open question. People may prompt models differently, leading to varied advice, but experiments have shown that even with different prompts, groups using language models often produce more homogenous outputs than those who don’t. This issue is concerning, especially given that only a few tech companies currently dominate the model landscape. The limited diversity of big players and the bottlenecks around hardware and compute resources make this even more worrisome. Ross: Yes, and there’s evidence suggesting models may converge over time on certain responses, which is concerning. One potential remedy could be prompting models to challenge our thinking or offer critiques to stimulate independent thought rather than always providing direct answers. Jason: Absolutely. That’s one of the applications I’m most excited about. A recent study by Dave Rand and colleagues used a language model to challenge conspiracy theorists, getting them to update their beliefs on topics like flat-Earth theory. It’s incredibly useful to use language models as devil’s advocates. 
In my experience, I often ask language models to critique my arguments or help me respond to reviewers. However, you sometimes need to prompt it specifically to provide honest feedback because, by default, it tends to agree with you. Ross: Yes, sometimes you have to explicitly tell it, “Properly critique me; don’t hold back,” or whatever words encourage it to give real feedback, because they can lean toward being “yes people” if you don’t direct them otherwise. Jason: Exactly, and I think this ties into our previous discussion on reinforcement learning from human feedback. If people generally prefer responses that confirm their existing beliefs, the utility of language models as devil’s advocates could decrease over time. We may need to start differentiating language models by specific use cases, rather than expecting a single model to fulfill every role. Ross: Yes, and you can set up system prompts or custom instructions that encourage models to be challenging, obstinate, or difficult if that’s the kind of interaction you need. Moving on, some of your other work relates to algorithmic amplification of intelligence in various forms. I’d love to hear more about that, especially since this is the Amplifying Cognition podcast. Jason: Sure, so this work actually started before language models became widely discussed. I was thinking, along with my then PhD advisor, Ulrike Hahn, about the “wisdom of the crowd” effect and how to enhance it. One well-documented observation in the literature is that communication can improve crowd wisdom because it allows knowledge sharing. However, it can also be detrimental if it leads to homogenization or groupthink. Research shows this can depend on network structure. In a highly centralized network where one person has a lot of influence, communication can reduce diversity. 
However, if communication is more decentralized and spreads peer-to-peer without a central influencer, it can spread knowledge effectively without compromising diversity. We did an experiment on this, providing a proof of concept for how algorithms could dynamically alter network structures during communication to enhance crowd wisdom. While it’s early days, it shows promise. Ross: Interesting! And you used the term “rewiring algorithm,” which suggests dynamically altering these connections. This concept could be impactful in other areas, like decentralized autonomous organizations (DAOs). DAOs aim to manifest collective intelligence, but often rely on basic voting structures. Algorithmic amplification could help rebalance input from participants. Jason: Absolutely. I’m not deeply familiar with blockchain literature, but when I present this work, people often draw parallels with DAOs and blockchain governance. I may need to explore that connection further. Ross: Definitely! There’s research potential in rebalancing structures for a fairer redistribution of influence. Also, one of this year’s hottest topics is multi-agent systems, often involving both human and AI agents. What excites you about human-plus-AI multi-agent systems? Jason: There are two aspects to multi-agent systems as I see it. One is very speculative—thinking about language models as digital twins interacting on our behalf, which is futuristic and still far from today’s capabilities. The other, more immediate side, is that we’re already in multi-agent systems. Think of Wikipedia, social media, and other online environments. We interact daily with algorithms, bots, and other people. We’re already embedded in multi-agent systems without always realizing it. Trying to conceptualize this intersection is difficult, but similar to how early AI discussions seemed speculative and are now real. For me, a focus on civic applications is crucial. 
We need more civic technology platforms like Polis that encourage public engagement in discussions. Unfortunately, there aren’t many platforms widely recognized or competing in this space. My hope is that researchers in multi-agent systems will start building in that direction. Ross: Do you think there’s potential to create a democracy that integrates these systems in a substantial way? Jason: Yes, but it depends on the form it takes. I conceptualize it through a framework discussed by political scientist Hélène Landemore, who references Jürgen Habermas. He describes two tracks of the public sphere. One is a bureaucratic, formal track where elected officials debate in government. The other is an open, free-for-all public sphere, like discussions in coffee shops or online. The idea was that the best arguments from the free-for-all sphere would influence the formal sphere, but that bridge seems weakened today. Civic technologies and algorithmic communication could create a “third track” to connect the open public sphere more effectively with bureaucratic decision-making. Ross: Rounding things out, collective intelligence has to be the future of humanity. We face bigger and more complex challenges, and we need to be intelligent beyond our individual capacities to address these issues and create a better world. What do you see as the next phase or frontiers for building more effective collective intelligence? Jason: The next frontier will be not just human collective intelligence. We’ve already seen that over the past decade, and I think we’ve almost taken it for granted. There’s substantial research on the “wisdom of the crowd” and deliberative democracy, often focusing on groups of people debating in a room. But now, we have more access to information and the ability to communicate faster and more easily than ever. The problem now is mitigating information overload. In a way, we’ve already built the perfect collective intelligence system—the internet, social media. 
Yet, despite having more information, we don’t seem to be a more informed society. Issues like misinformation, echo chambers, and “post-truth” have become part of our daily vocabulary. I think the next phase will involve developing AI systems and algorithms to help us handle information overload in a socially beneficial way, rather than just catering to advertising or engagement metrics. That’s my hope. Ross: Amen to that. Thanks so much for your time and your work, Jason. I look forward to following your research as you continue. Jason: Thank you, Ross. The post Jason Burton on LLMs and collective intelligence, algorithmic amplification, AI in deliberative processes, and decentralized networks (AC Ep68) appeared first on amplifyingcognition.
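The collective estimation Jason describes, where each individual submits a numeric estimate that is then aggregated across the group, can be sketched minimally. This is a toy illustration only, not code from the research discussed; the function name and the example figures are invented:

```python
# Toy sketch of "wisdom of the crowd" aggregation: each person submits a
# number, and the group answer is the mean (or the median, which is more
# robust to extreme outliers). Invented names/values, for illustration only.
from statistics import mean, median

def crowd_estimate(estimates, robust=False):
    """Aggregate individual numeric estimates into one group answer:
    the mean by default, or the median if robust=True."""
    if not estimates:
        raise ValueError("need at least one estimate")
    return median(estimates) if robust else mean(estimates)

# A crowd guessing the number of jellybeans in a jar:
guesses = [850, 1200, 990, 1100, 4000, 940]
print(crowd_estimate(guesses))               # mean, pulled up by the 4000 outlier
print(crowd_estimate(guesses, robust=True))  # median resists the outlier
```

Switching between mean and median is one simple way to see the trade-off Jason mentions: averaging uses every voice, while robust aggregation damps the influence of any single extreme contributor.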
Oct 23, 2024 • 33min

Kai Riemer on AI as non-judgmental coach, AI fluency, GenAI as style engines, and organizational redesign (AC Ep67)

Kai Riemer is a Professor at the University of Sydney Business School, specializing in AI’s impact on organizations. He discusses how AI serves as a non-judgmental coach, boosting decision-making and personal productivity for leaders. Riemer explores generative AI as a catalyst for creativity and its role in enhancing group dynamics through structured facilitation. He emphasizes the necessity for upskilling in AI fluency and rethinking organizational frameworks to harness AI's potential effectively, amplifying both challenges and opportunities.
Oct 16, 2024 • 37min

Marc Ramos on organic learning, personalized education, L&D as the new R&D, and top learning case studies (AC Ep66)

“The craft of corporate development and training has always been very specialized in providing the right skills for workers, but that provision of support is being totally transformed by AI. It’s both an incredible opportunity and a challenge because AI is exposing whether we’ve been doing things right all along.” – Marc Steven Ramos About Marc Steven Ramos Marc Ramos is a highly experienced Chief Learning Officer, having worked in senior global roles with Google, Microsoft, Accenture, Novartis, Oracle, and other leading organizations. He is a Fellow at Harvard’s Learning Innovation Lab, with his publications including the recent Harvard Business Review article, A Framework for Picking the Right Generative AI Project. LinkedIn: Marc Steven Ramos Harvard Business Review Profile: Marc Steven Ramos  What you will learn Navigating the post-pandemic shift in corporate learning Balancing scalable learning with maintaining quality Leveraging AI to transform workforce development Addressing the imposter syndrome in learning and development teams Embedding learning into the organizational culture Utilizing data and AI to demonstrate training ROI Rethinking the role of L&D as a driver of innovation Episode Resources AI (Artificial Intelligence) L&D (Learning and Development) Workforce Development Learning Management System (LMS) Change Management Learning Analytics Corporate Learning Blended Learning DHL Ernst & Young (EY) Microsoft Salesforce.com ServiceNow Accenture ERP (Enterprise Resource Planning) CRM (Customer Relationship Management) Large Language Models (LLMs) GPT (Generative Pretrained Transformer) RAG (Retrieval-Augmented Generation) Movie Sideways Transcript Ross: Marc, it is wonderful to have you on the show. Marc Steven Ramos: It is great to be here, Ross. Ross: Your illustrious career has been framed around learning, and I think today it’s pretty safe to say that we need to learn faster and better than ever before. So where do you think we’re at today? 
Marc Steven: I think from the lens of corporate learning or workforce development (not the academic, K-12, higher-ed stuff, even though there’s a nice bridging that I think is necessary and occurring), it is a tough world. I think if you’re running any size learning and development function in any region or country and in any sector or vertical, these are tough times. And the times are tough in particular because we’re still coming out of the pandemic, and what was in the past live, in-person, instructor-led training has got to move into this new world of all virtual or maybe blended or whatever. But I think in terms of the adaptation of learning teams to move into this new world post-pandemic, and thinking about different ways to provide ideally the same level of instruction or training or knowledge gain or behavior change, whatever, it’s just a little tough. So I think a lot of people are having a hard time adjusting to the proper modality or the proper blends of formats. I think that’s one area where it’s tough. I think the other area that is tough is related to the macroeconomics of things, whether it’s inflation. I’m calling in from the US, and the US inflation story is its own interesting animal. But whether it’s inflation or tighter budgets and so forth, the impact on the learning functions and other support functions in general is that it’s tighter, it’s leaner, and I think for many good reasons, because if you’re a support function in legal or finance or HR or learning, the time has come for us to really, really demonstrate value and provide that value in different forms of insights and so forth. So the second point, in terms of where I think it is right now, the temperature, the climate, and how tough it is: I think the macroeconomic piece is one, and then clearly there’s this buzzy, brand new character called AI, and I’m being a little sarcastic, but not when you look at it from a learning lens. 
I think a lot of folks are trying to figure out, on the good side, right, how can I really make my courses faster and better and cooler and create videos faster? This text-to-XYZ media is cool, but it’s still kind of hypey, if that’s even a word. But what’s really interesting? And I’m framing this just as a person that’s managed a lot of L&D teams. It’s interesting because there’s this drama below the waterline of the iceberg of pressure, in the sense that, because AI can do all this stuff, it’s kind of exposing whether or not the human training person has been doing their stuff correctly all this time. So there’s this newfound-ish imposter syndrome that I think is occurring within a lot of support functions, again, whether it’s legal or HR, but I think it’s more acute in learning, because the craft of corporate development, of training, has always been very specialized in the sense of providing the right skills for workers, but that provisioning of stuff to support skills, man, it is totally benefiting from AI but also being challenged by AI. So there’s a whole new sense of pressure, I think, for the L&D community, and I’m just speaking from my own perspective, rather than representing, obviously, all these other folks. But those are some perspectives in terms of where I think the industry is right now, and again, I’m looking at it more from the human perspective rather than AI’s perspective. But we can go there as well. Ross: Yeah. Well, there’s lots to dig into there. The first point is, the do-more-with-less mantra has been in place for a very long time. And as I’ve always said, business is going to get tougher; you’re always going to have to do more. But the thing is, I don’t think of learning as a support function, or it shouldn’t be. So, okay, yes, legal has got its role. HR has got a role. 
But we are trying to create learning organizations, and we’ve been talking about that for 30 years or so, and now more than ever, the organization has to be a learning organization. I think that any leader who tries to delegate learning to the L&D function is entirely missing their role in transforming the organization into one where learning is embedded into everything. And I think there’s a real danger in separating out L&D as “all right, they’re doing their job, they’ve got all their training courses, and we’re all good now,” as opposed to a frame of transformation of the organization, where, as you’re alluding to, we’re trying to work out, well, what can AI do and what can humans do? And can humans be on the journey where they need to do what they need to do? So we need to think of this from a leadership frame, I’d say. Marc Steven: Yeah, I totally agree. I think you have three resonating points. The first one that you mentioned, you know, the need to get stuff out faster, more efficiently, and so forth, and make sure that you’re abiding by the corporate guidelines of scale, right? And that’s a very interesting dilemma, I think, just setting aside the whole kind of AI topic. But what’s interesting is, and I think a lot of L&D folks don’t talk about this, particularly at the strategy level: yes, it’s all about scale. Yes, it’s about removing duplication, redundancy. Yes, it’s about reach. Yes, it’s about making sure that you’re efficiently spending the money in ways where your learning units can reach as many people as possible. The dilemma is, the more you scale a course, a program, with the intention of reaching as many people as possible, frankly, the more you have to dumb down the integrity of that course to reach that many people. That’s the concern I’ve had about scale, and you need scale, there’s no doubt. 
But the flip side of the scale coin, if I can say that, is: how do you still get that reach at scale, the efficiencies at scale, but in such a way that you’re not providing vanilla training for everyone? Because what happens is, when you provide too much scaled learning, you do have to, forgive the term, dumb it down to a lowest-common-denominator reach. And when that happens, all you’re basically doing is building average workers in bulk. And I don’t really think that’s the goal of scalable learning. Ross: But that’s also not going to give you competitive advantage; it gives you competitive disadvantage, if you’re just churning out people that have defined skill sets, even if you’re doing that well, or at scale. The point is, for competitive advantage, you need a bunch of diverse people that think, that have different skills, and you bring them together in interesting ways. That’s where competitive advantage comes from. It’s not from L&D churning out a bunch of people with skill sets X, Y, and Z. Marc Steven: Yeah, and I think you’re so right. The dilemma might not be in terms of, you know, the internal requirements of the training team’s strategic approach, whatever; it’s just getting hit from different angles. I mean, when you’re looking at a lot of large learning content and course providers, you know, without naming names, they’re in a big, big, big dilemma, because AI is threatening their wares, their stuff, and so they’re trying to get out of that. There’s something, as you mentioned too, and this is not verbatim, Ross, but something about making sure that building the right knowledge and skills and capabilities for a company is everyone’s responsibility, and if anything, what is L&D’s role to kind of make that happen? 
The way I’ve been kind of framing this with some folks, and this is maybe not the best metaphor, analogy, example, whatever: within the L&D function and the support functions, talent, HR, whatever, we’ve been striving to gain the seat at the table for years, right? And what’s interesting now is, because of some of the factors that I mentioned beforehand, coming out of COVID, macroeconomics, there’s a lot more pressure on the L&D team to make sure that they are providing value. What’s happening now is that the expectation of more duty, responsibility, showing the return has peaked, and I think in good ways, so much so that, you know, I don’t think we are striving to get the seat at the table. I think the responsibilities have been raised so high that L&D is the table. I think, you know, we are a new center of gravity. I’m not saying we’re the be-all, end-all, but there’s so much, and I think necessary, responsible scrutiny of learning, particularly related to cultural aspects, because everyone is responsible to contribute, to share what you learn. What was the old statement? Teaching is learning twice. And so everyone has that responsibility to kind of unleash their own expertise and help lift each other, without it getting all soft and corporate mushy. But that’s just the basic truth. The other thing is this whole kind of transformation piece. You know, whether we are the table, whether we are a new center of gravity, we have that responsibility. And my concern is, as I speak with a lot of other learning leaders and so forth, and just kind of get a general temperament of the economic play of learning, in other words, how much monetary support are you actually receiving, it is tough. But now’s the time, actually, where some companies are super smart, because they are enabling the learning function to find new mechanisms and ways to actually show the return, because the learning analytics, learning insights, learning reporting and dashboards, back to the executives. 
It’s been fairly immature until now, whether it’s AI or not, but now it’s actually getting a lot more sophisticated and correct. The evidence is finally there, and I think a lot of companies get that, where they’re basically saying, wow, I’ve always believed in the training team, the training function, and training our team, our employees, but I’ve never really figured out a way for those folks to actually show the return, right? I don’t mind giving them the money, because I can tell. But now there are, like, really justified, evidence-based ways to show: yeah, this program that cost $75,000, I know now that I can take the learner data from the learning management system, correlate that with the ERP or CRM system, extract the data related to learning that did have an impact on sellers being able to sell faster or bigger or whatever, and use that as a correlation, so to speak, it’s not real causation, but use that as evidence, maybe with a small “e”, back to the people managing your budgets. And that’s the cool part, but that’s what I was saying beforehand. It’s time that, collectively, we step up. And part of that stepping up means that we have the right evidence of efficacy, that the stuff we’re building is actually working. Ross: I think that is very valuable. And you want to support appropriate investment in learning, absolutely. Though actually, when was it? It was 27 years ago or something, I did a certificate in workplace training, and I was getting very frustrated, because the whole course was saying, okay, these are your outcomes from the learning, and this is how you then set all your learning objectives to achieve that outcome. I was saying, well, but what happens if you want to get beyond what you’ve defined as the outcome, to have open-ended learning, as opposed to having a specific bar and getting to that bar? 
And I think today, again, we have this idea of a person in a box, as in, that’s the organization of the past: this is the person, this is the job function, these are all the definitions of this thing. That person fits in that box, and they’ve got all this learning to be able to do that. So now we’ve got to create people who can respond on the fly to a different situation where the world is different, and where we can not just reach bars but go beyond them, to people who hunger to learn and to create and to innovate. And so I think we absolutely want to show ROI to justify investment in learning, but we also need to have an open-endedness to it, because we’re going into areas where we don’t even know what the metrics are, because we don’t know what we’re creating. I mean, this obviously requires leaders who are prepared to go there. But I have similar conversations with technology functions, where the sense is that, if you’re a CIO, you have to go to the board and the executive team, and you have to say, this is why you should be investing in technology: it’s partly because we are part of the transformation of the organization. We’re not just a function to be subsumed. And same thing with learning. It’s like saying learning has to be part of what the organization is becoming. And so that goes beyond anything you can necessarily quantify completely. This, I think, takes us a little bit to the AI piece. I’d love to get your thoughts on that. We’ve kind of kept saying, let’s keep that out of the conversation; now let’s bring that in, because you’ve been heavily involved in it, and I’d love to hear your big-picture thoughts to start. We can dig in from there. What’s the role of AI in organizational learning? Marc Steven: That’s a big question. 
Yeah, it’s a big question, and it’s an important question, but it’s also a question that’s kind of flavored with, I think, some incredible levels of ambiguity and vagueness, for lack of better words. So maybe a good way to frame it is actually circling back to your prior comment about people in a box, to a certain degree, right? I mean, you have the job architecture of a role, right? Here are the things that the guy or gal, the individual, has got to do. I get it. It’s really interesting in the sense that this whole kind of metaphorical concept of a box, of a container, is super fascinating to me. And there’s an AI play here I’ll share in a second, in the way I’m going to kind of think about this as an old instructional-designer fella. We’ve always been trained, conditioned, whatever, to build courses that could be awesome. But in general, the training event is still bound by a duration. Here’s your two-hour class, here’s your two-day event, here’s your 20-week certification program. I don’t know, but it’s always contained by duration. It’s always contained by fixed learning objectives. It’s typically contained by a fixed set of use cases. In other words, by the time you exit this training, you’ll be able to do XYZ things a lot better. This whole kind of container thing just boggles me, and maybe I’m thinking too much about this.  There’s a great movie, one of my favorite movies, called Sideways. It’s a couple of guys who go to wine country in California, and they’re drinking a lot of wine, and they’re meeting some people. There’s one great scene where one of these characters is talking to someone else, and this other person is trying to figure out, why did you get so enticed by and in love with wine? What she says is just really, really remarkable to me. 
What she basically says is that the reason she loves wine is that she always felt that when you open up a bottle of wine, you’re opening up something that’s living, that’s alive. When you open up a wine and really think about it from that perspective, you think about the people who were actually tending the grapes when they were gathered. You might be thinking about: what was the humidity? What was the sunshine? So I’m going to come back to the whole kind of container thing, but in AI, I just think that’s a really interesting way to look at learning now, in the sense that what has been in that container, in truth, has been alive. It’s an organic, living thing that becomes alive once the interaction with the learner occurs. What you want to do is think about extending the learning outside of the box, outside of the container. So getting back to your question, Ross, about the intersection, so to speak, of AI and learning, that’s one way I kind of think about it sometimes: how can we recreate the actual learning event so it’s constantly alive, where, if you take a course, the course is something that is everlasting, is prolonged, and is also unique to the amount of time you might have, the context in which you’re working, blah, blah, blah. I’m not going to talk about learning styles. I think it’s fascinating, particularly with what large language models are doing now, and the whole agentic AI piece, where these agents can go off and do multiple tasks against multiple use cases and against multiple systems, and then you’ve got the RAG piece here too. That’s really interesting now, right? Because if somebody wants to learn something on XYZ subject, and let’s just say that you work for a company that has 50,000 people, and let’s just say that, I don’t know, half of those folks probably know something related to the course that you’re taking. 
But it’s not in the learning management system; it’s in a whole bunch of Excel spreadsheets, or it’s in your Outlook emails, it’s in the terabytes of stuff. Well, if AI and its siblings, GPTs, LLMs, agents, whatever, can now tap into that missing information on an ongoing, dynamic basis to feed it back to Ross or to Marc or whomever, you’re literally tapping into this living organism of information. AI is becoming smart enough to shift that living, breathing information into instruction, to give it shape, to give it structure, to give it its own kind of appeal, and then tailor it and personalize it and adapt it for the individual. So if that occurs, I don’t know if it’s 2024 or 2034, but if that occurs, you get this whole concept of learning where the true benefits are organic: it’s alive, and it’s constantly being produced in the beautiful sunshine of everyone else’s unleashed expertise. That’s a really, really fun kind of dream state to think about, because there’s a significant AI play. What it really does is change, frankly, the whole philosophy of how corporate learning is supposed to operate. If we see some companies heading in that direction, which is probably going to happen, that’s going to be super, super fascinating. Ross: Yeah, that’s fantastic. It goes back to Arie de Geus and his living-company metaphor, in the sense that it is self-feeding; that’s autopoiesis, the definition of life where it feeds on itself, in a way. I think that’s a beautiful evocation of the organization as alive, because it is dynamic. It’s taking its own essence and using it to feed itself. Is there anything in the public domain around organizations that are truly on this path? Because, I mean, what you describe is compelling. But I’m sure that there are plenty of organizations thinking about this; you’re not the only person to think of something like this. 
But are there any companies that are showing the way on this, that have been able to put it into place? Marc Steven: Definitely, it’s interesting. I’m trying to finish a book on AI, but I’m not talking about AI. Frankly, I’m talking about the importance of change management. But my slant is, is there any greater function or team that can drive the accelerated adoption of AI in your company other than the L&D team? The clickbaity title that I think about is, is L&D the new R&D? Is learning and development the new research and development? That’s just one kind of crazy perspective. The way I’m kind of thinking about that is when I’ve been interviewing some folks for a piece that I’m doing, these are CLOs of major, major, major companies. With that change management framing, there are so many incredibly awesome stories I’m hearing related to how to really drive adoption, and what is L&D’s role. To your question, related to is anybody doing it? Some of these companies that really, really get it, they totally see the value of human-driven change management. By that, I mean the more successful deployments that at least I’ve come across are ones where you’re not thinking about, well, identify those 24 use cases that have a higher probability of AI doing X, Y and Z. The smarter companies, I think, my own take, no, they don’t even ask that question. They kind of go a level higher. They basically say, can we put together a dedicated, I didn’t say senior, a dedicated, cross-functional group of folks to figure out question number one. Question number one is, what the heck do we do with this? They’re not talking about use cases. They’re not talking about the technology, so to speak. They were just trying to figure out, okay, what’s the plan here, people? That’s an interesting way to kind of do this. You’re not hiring Accenture, you’re not hiring whatever to bring in the bazillions of billable hours to kind of figure that out. 
They want a grassroots way of figuring out how to deal with AI, what does it mean to us? Good, bad, right or wrong? That’s one thing that I see a lot of companies are doing. They’re really taking a much more forward, people-first perspective of figuring out the ball game, and then if the ball game says, hey, we understand that, thinking about risk, thinking about responsibility, whatever. Yeah, here’s the three places we got to start. I think that’s just a really, really smart way to do it. On the vendor side, there’s a lot of really, really cool vendors now thinking about enabling companies for the betterment of AI. The ones that I think are really sharp, they’re getting it. They’re not like the really big, content course providers that say, hey, this is AI 101, this is the, here’s your list of acronyms. We’re going to talk through every single dang acronym and blah, blah, blah. That’s necessary. That’s great stuff. Some of the vendors that are really cool are the ones that are not really focusing on those basics, so to speak. They’ll go into an enterprise, name your company anywhere, and they’ll say, what are your concerns? What are your needs? What are your requirements related to this, this AI thing? Have you, oh, customer identified the areas where you think AI can best benefit yourselves and the company? Then they shape the instruction to blend in those clients’ needs very specifically. They literally customize the instruction to do that. That way, when the learner goes through the learning, they’re talking about the stuff they really focus on, on a day-in and day-out basis. It’s not this generic stuff off the shelf. The other thing that they’re doing is they’re actually embedding, no surprise, but they’re embedding agents, LLM processes, proper prompting into the instruction itself. If you want to know Gemini, then use Gemini to learn Gemini. They really, really go deep. 
That blending of it’s a different instructional design method as well, but that kind of blending is really, really super smart, just on the companies, the corporates. Ross: Is there any companies you can name? Would you say these are companies doing a good job? Marc Steven: I mean, yeah, I mean, so some of the folks I’ve interviewed and some companies I’m aware of, I think what DHL is doing is just remarkable because what they’re doing is, I was just using my prior example. Let’s have a people-first approach about what do we do about this? It’s kind of a given, you kind of know there’s an efficiencies play, there’s a speed play, there’s a, you know, building stuff more efficiently, play, whatever. But I think DHL is really smart about looking at it from that grassroots perspective, but still at the same time having this balanced approach, again, related to responsibility and risk. I think what Ernst and Young is doing, EY, they’re really, really super sharp too because they’re focusing a lot on, making sure that we’re providing the basics and following, I think, the basic corporate capability guidance of give them the one-on-one training, make sure they’re tested, make sure that people have the opportunity to become certified in the right ways. Maybe the higher level of certification can affect their level hours, which affects their compensation, yada yada yada. So I think that’s really, really great. What’s really cool is, what they’re also doing is, they’ve created kind of a, it’s kind of a Slack, it is Slack, but kind of a Slack collection point for people to contribute what they think are just phenomenal prompts. They’re creating, it’s not gamification, but they’re creating a mechanism because Slack is very social, right? People can now chime in to say, wow, that prompt was so great. If I just changed this and added three adjectives, this is my result, and then somebody else can chime and go, whoa. That’s great. 
What’s interesting is, you’re building this bottom-up collection of super valuable prompts without the corporate telling you to do it. Again, it really tells you something about the culture of the company, which I think is just fantastic as well. Then obviously there are the big, big provider players, you know, the Microsofts, Salesforce.com, ServiceNow. What ServiceNow is doing is just phenomenal. I’m really glad to see this. It’s just a matter of keeping track of what’s truly working. It’s not all about data. Data is there to inform; ultimately, it’s the combination of AI’s data provisioning and a human being, the Johnny and Jane, the Ross and the Marc, saying, “well, yeah, but…”, which I think is, again, super important. Ross: So, Marc, you mentioned in passing that you’re writing a book. Can you tell us anything about that? What’s the thesis, and is there a title and launch date? Marc Steven: The book is, as I was highlighting beforehand, really thinking about change management, and what is the learning function’s role in driving more accelerated adoption of AI. That’s why I’ve been interviewing a whole bunch of these folks. I want to give a perspective of what’s really happening, rather than this observational, theoretical stuff. I’m interviewing a ton of folks, and my dilemma right now, to be honest with you, and maybe you can help me, Ross, because I know you’re a phenomenal author, is that I don’t know if this is going to be a collection of case studies versus some sort of blue book, or whether a playbook is a better description. I’m still on the fence, and maybe, in good ways, it should be a combination. 
How do you take some of these really cool things that people are doing, the quote unquote case studies or whatever, and, wait a second, is there a way to operationalize that in a very sensible way that might align to certain processes or procedures you might already have, but with maybe a different spin, thinking about this socially minded intelligence, where you have to work with an agent to make sure that you’re following the guidelines of the playbook correctly? I don’t know. Maybe the agent is the coach of all your plays. Maybe that’s not the best, well, maybe it is a good example. Depends on what the person’s coaching, but yeah, that’s the book. I don’t know, I don’t have a good title. It could be the real campy, L&D is the new R&D. I get feedback from friends that that is a really great way to look at it because there’s so much truth in that. Then I get other buddies who say, oh, geez, Marc, that’s the worst thing I’ve ever heard. Ross: You’ll do some market testing. I’m very much looking forward to reading it because, frankly, it’s frustrating for me sitting on the outside; I want to know what the best people are doing. I see bits and pieces from my clients and various other work, but I think sharing as you are, obviously uncovering the real best of what’s happening, is going to be a real boon. So thank you so much for your work and your time and your insights. Marc, today has been a real treat. Marc Steven: No, the treat, Ross, has been mine. I really appreciate the invitation, and hopefully this has been helpful to our audience. Great. The post Marc Ramos on organic learning, personalized education, L&D as the new R&D, and top learning case studies (AC Ep66) appeared first on amplifyingcognition.
Oct 9, 2024 • 36min

Alex Richter on Computer Supported Collaborative Work, webs of participation, and human-AI collaboration in the metaverse (AC Ep65)

“Trust is a key ingredient when you look into Explainable AI; it’s about how can we build trust towards these systems.” – Alex Richter About Alex Richter Alexander Richter is Professor of Information Systems at Victoria University of Wellington in New Zealand, where he has also been Inaugural Director of the Executive MBA and Associate Dean. He specializes in the transformative impact of IT in the workplace. He has published more than 100 articles in leading academic journals and conferences, winning several best paper awards, and has been covered by many major news outlets. He also has extensive industry experience and has led over 25 projects funded by companies and organizations, including the European Union. Website: www.alexanderrichter.name University Website: people.wgtn.ac.nz/alex.richter LinkedIn: Alexander Richter Twitter: @arimue Publications (Google Scholar): Alexander Richter Publications (ResearchGate): Alexander Richter What you will learn The significance of CSCW in human-centered collaboration Trust as a cornerstone of explainable AI Emerging technologies enhancing human-AI teamwork The role of context in sense-making with AI tools Shifts in organizational structures due to AI integration The importance of inclusivity in AI applications Foresight and future thinking in the age of AI Episode Resources CSCW (Computer Supported Cooperative Work) AI (Artificial Intelligence) Explainable AI Web 2.0 Enterprise 2.0 Social software Human-AI teams Generative AI Ajax Meta (as in the company) Google Transcript Ross: Alex, it’s wonderful to have you on the show. Alex Richter: Thank you for having me, Ross. Ross: Your work is fascinating, and many strands of it are extremely relevant to amplifying cognition. So let’s dive in and see where we can get to. You were just saying to me a moment ago that the origins of a lot of your work are around what you call CSCW. So, what is that, and how has that provided a framework for your work? 
Alex: Yeah, CSCW (Computer-Supported Cooperative Work) or Computer-Supported Collaborative Work is the idea that we put the human at the center and want to understand how they work. And now, for quite a few years, we’ve had more and more emerging technologies that can support this collaboration. The idea of this research field is that we work together in an interdisciplinary way to support human collaboration, and now more and more, human-AI collaboration. What fascinates me about this is that you need to understand the IT part of it—what is possible—but more importantly, you need to understand humans from a psychological perspective, understanding individuals, but also how teams and groups of people work. So, from a sociological perspective, and then often embedded in organizational practices or communities. There are a lot of different perspectives that need to be shared to design meaningful collaboration. Ross: As you say, the technologies and potential are changing now, but taking a broader look at Computer-Supported Collaborative Work, are there any principles or foundations around this body of work that inform the studies that have been done? Alex: I think there are a couple of recurring themes. There are actually different traditions. For my own history, I’m part of the European tradition. When I was in Munich, Zurich, and especially Copenhagen, there’s a strong Scandinavian tradition. For me, the term “community” is quite important—what it means to be part of a community. That fits nicely with what I experienced during my time there with the culture. Another term that always comes back to me in various forms is “awareness.” The idea is that if we want to work successfully, we need to have a good understanding of what others are doing, maybe even what others think or feel. That leads to other important ingredients of successful collaboration, like trust, which is currently a very important topic in human-AI collaboration. 
A lot of what I see is that people are concerned about trust—how can we build it? For me, that’s a key ingredient. When you look into Explainable AI, it’s about how we can build trust toward these systems. But ultimately, originally, trust between humans is obviously very important. Being aware of what others are doing and why they’re doing it is always crucial. Ross: You were talking about Computer-Supported Collaborative Work, and I suppose that initial framing was around collaborative work between humans. Have you seen any technologies that support greater trust or awareness between humans, in order to facilitate trust and collaboration through computers? Alex: In my own research, an important upgrade was when we had Web 2.0 or social software, or social media—there are many terms for it, like Enterprise 2.0—but basically, these awareness streams and the simplicity of the platforms made it easy to post and share. I think there were great concepts before, but finally, thanks to Ajax and other technologies, these ideas were implemented. The technology wasn’t brand new, but it was finally accessible, and people could use the internet and participate. That got me excited to do a PhD and to share how this could facilitate better collaboration. Ross: I love that phrase, “web of participation.” Your work came to my attention because you and some of your students or colleagues did a literature review on human-AI teams and some of the success factors, challenges, and use cases. What stood out to you in that paper regarding current research in this space? Alex: I would say there’s a general trend in academia where more and more research is being published, and speed is very important. AI excites so many people, and many colleagues are working on it. One of the challenges is getting an overview of what has already been done. For a PhD student, especially the first author of the paper you mentioned—Chloe—it was important for her to understand the existing body of work. 
Her idea is to understand the emergence of human-AI teams and how AI is taking on some roles and responsibilities previously held by humans. This changes how we work and communicate, and it ultimately changes organizational structures, even if not formally right away. For example, communication structures are already changing. This isn’t surprising—it has happened before with social software and social media. But I find it interesting that there isn’t much research on the changes in these structures, likely due to the difficulty in accessing data. There’s a lot of research on the effects of AI—both positive and negative. I don’t have one specific study in mind, but what’s key is to be aware of the many different angles to look at. That was the purpose of the literature review—to get a broader, higher-level perspective of what’s happening and the emerging trends. Ross: Absolutely. We’ll share the link to that in the show notes. With that broader view, are there any particularly exciting directions we need to explore to advance human-AI teams? Alex: One pattern I noticed from my previous research in social media is that when people look at these tools, it’s not immediately clear how to use them. We call these “use cases,” but essentially, it’s about what you can do with the tool. Depending on what you do, you can assess the benefits, risks, and so on. What excites me is that it depends heavily on context—my experience, my organization, my department, and my way of working. A lot of sense-making is going on at an individual level: how can I use generative AI to be more productive or efficient, while maintaining balance and doing what feels right? These use cases are exciting because, when we conducted interviews, we saw a diverse range of perspectives based on the department people worked in and the use cases they were familiar with. Some heard about using AI for ideation and thought, “That’s exciting! 
Let’s try that.” Others heard about using chatbots for customer interactions, but they heard negative examples and were worried. They said, “We should be careful.” There are obviously concerns about ethics and privacy as well, but it really depends on the context. Ultimately, the use cases help us decide what is good for us and what to prioritize. Ross: So there’s kind of a discovery process, where at an organizational level, you can identify use cases to instruct people on and deploy, with safeguards in place. But it’s also a sense-making process at the individual level, where people are figuring out how to use these tools. Everyone talks about training and generative AI, but maybe it’s more about facilitating the sense-making process to discover how these tools can be used individually. Alex: Absolutely. You have to experience it for yourself and learn. It’s good to be aware of the risks, but you need to get involved. Otherwise, it’s hard to discuss it theoretically. It’s like it was before with social media—if you had a text input field, you could post something. For a long time, in our research domain, we tried to make sense of it based on functions, but especially with AI, the functions are not immediately clear. That’s why we invest so much effort into transparency—making it clearer what happens in the background, what you can do with the tool, and where the limitations lie. Ross: So, we’re talking about sense-making in terms of how we use these tools. But if we’re talking about amplifying cognition, can we use generative AI or other tools to assist our own sense-making across any domain? How can we support better human sense-making? Alex: I think one point is that generative AI obviously can create a lot for us—that’s where the term comes from—but it’s also a very easy-to-use interface for accessing a lot of what’s going on. 
From my personal experience with ChatGPT and others like Google Gemini, it provides a very easy-to-use way of accessing all this knowledge. So, when you think about the definition of generative AI, there may be a smaller definition—like it’s just for generating content—but for me, the more impactful effect is that you can use it to access many other AI tools and break down the knowledge in ways that are easier to use and consume. Ross: I think there are some people who are very skilled at that—they’re using generative AI very well to assist in their sense-making or learning. Others are probably unsure where to start, and there are probably tools that could facilitate that. Are there any approaches that can help people be better at sense-making, either generally or in a way that’s relevant to a particular learning style? Alex: I’m not sure if this is where you’re going, but when you said that, I thought about the fact that we all have individual learning styles. What I find interesting about generative AI is that it’s quite inclusive. I had feedback from Executive MBA students who, for example, are neurodivergent learners. They told me it’s helpful for them because they can control the speed of how they consume the information. Sometimes, they go through it quickly because they’re really into it, and other times, they need it broken down. So, you’re in the driver’s seat. You decide how to consume the information—whether that’s in terms of speed or complexity. I think that’s a very important aspect of learning and sense-making in general. So yeah, inclusivity is definitely a dimension worth considering. Ross: Well, to your point around consuming information, I like the term “assimilating” information because it suggests the information is becoming part of your knowledge structures. So, we’ve talked about individual sense-making. Is there a way we can frame this more broadly, to help facilitate organizational sense-making? 
Alex: Yeah, we’re working with several companies, and I have one specific example in mind where we tried to support the organizational sense-making process by first creating awareness. When we talk about AI, we might be discussing different things. The use cases can help us reach common ground. By the way, “common ground” is another key CSCW concept. For successful collaboration, you need to look in the same direction, right? And you need to know what that direction is. Defining a set of use cases can ensure you’re discussing the same types of AI usage. You can then discuss the specific benefits as an organization, and use cases help you prioritize. Of course, you also need to be aware of the risks. One insight I got from a focus group during the implementation of generative AI in this company was that they had some low-risk use cases, but the more exciting ones were higher-risk. They agreed to pursue both. They wanted to start with some low-key use cases they knew would go smoothly in terms of privacy and ethics, but they also wanted to push boundaries with higher-risk use cases while creating awareness of the risks. They got top-level support and made sure everyone, including the workers’ council, was on board. So, that’s one way of using use cases—to balance higher-risk but potentially more beneficial options with safer, low-risk use cases. Ross: Sense-making relates very much to foresight. Company leadership needs to make strategic decisions in a fast-changing world, and they need to make sense of their business environment—what are the shifts, what’s the competition, what are the opportunities? Foresight helps frame where you see things going. Effective foresight is fueled by sense-making. Does any of your work address how to facilitate useful foresight, whether individually or organizationally? Alex: Yes. 
Especially with my wife, Shahper—who is also an academic—and a few other colleagues, we thought, early last year when ChatGPT had a big impact, “Why was this such a surprise?” AI is not a new topic. When you look around, obviously now it’s more of a hype, but it’s been around for a long time. Some of the concepts we’re still discussing now come from the 1950s and 60s. So, why was it so surprising? I think it’s because the way we do research is mainly driven by trying to understand what has happened. There’s a good reason for that because we can learn a lot from the past. But if ChatGPT taught us one thing, it’s that we also need to look more into the future. In our domain—whether it’s CSCW or Information Systems Research—we have the tools to do that. Foresight or future thinking is about anticipating—not necessarily predicting—but preparing for different scenarios. That’s exciting, and I hope we’ll see more of this type of research. For example, we presented a study at a conference in June where we looked at human-AI collaboration in the metaverse, whatever that is. It’s not just sitting in front of a screen with ChatGPT but actually having avatars talking to us, interacting with us, and at some point, having virtual teams where it’s no longer a big difference whether I’m communicating with a human or an AI-based avatar. Ross: One of the first thoughts that comes to mind is if we have a metaverse where a team has some humans represented by avatars and some AI avatars, is it better for the AI avatars to be as human-like as possible, or would it be better for them to have distinct visual characteristics or communication styles that are not human-like? Alex: That’s a great question. One of my PhD students, Bayu, thought a bit about this. His topic is actually visibility in hybrid work, and he found that avatars will play a bigger role. Avatars have been around for a while, depending on how you define them. 
In a recent study we presented, we tried to understand how much fidelity you need for an avatar. Again, it depends on the use case—sorry to be repetitive—but understanding the context is essential. We’re extending this toward AI avatars. There’s a recent study from colleagues at the University of Sydney, led by Mike Seymour, and they found that the more human-like an AI avatar is, the more trustworthy it appears to people. That seems intuitive, but it contradicts earlier studies that suggested people don’t like AI that is too human-like because it feels like it’s imitating us. One term used in this context is the “uncanny valley.” But Mike Seymour’s study is worth watching. They present a paper using an avatar that is so human-like that people commented on how relatable it felt. As technology advances, and as we as humans adjust our perceptions, we may become more comfortable with human-like avatars. But again, this varies depending on the context. Do we want AI to make decisions about bank loans, or healthcare, for example? We’ll see many more studies in this area, and as perceptions change, so will our ideas about what we trust and how transparent AI needs to be. Already, some chatbots are so human-like that it’s not immediately clear whether you’re interacting with a human or a bot. Ross: A very interesting space. To wrap up, what excites you the most right now? Where will you focus your energy in exploring the possibilities we’ve been discussing? Alex: What excites me most right now is seeing how organizations—companies, governmental organizations, and communities—are making sense of what’s happening and trying to find their way. What I like is that there isn’t a one-size-fits-all approach, especially not in this context. Here in New Zealand, I love discussing cultural values with my Executive MBA students and how our society, which is very aware of values and community, can embrace AI differently from other cultures. 
Again, it comes back to context—cultural context, in this case. It’s exciting to see diverse case studies where sometimes we get counterintuitive or contradictory effects depending on the organization. We won’t be able to address biases in AI as long as we don’t address biases in society. How can we expect AI to get things right if we as a society don’t get things right? This ties back to the very beginning of our conversation about CSCW. It’s important for CSCW to also include sociologists to understand society, how we develop, and how this shapes technology. Maybe, in the long run, technology will also contribute to shaping society. That will keep me busy, I think. Ross: Absolutely. As you say, this is all about humanity—technology is just an aid. Thank you so much for your time and insights. I’m fascinated by your work and will definitely keep following it. Alex: Thank you very much, Ross. Thanks for having me. The post Alex Richter on Computer Supported Collaborative Work, webs of participation, and human-AI collaboration in the metaverse (AC Ep65) appeared first on amplifyingcognition.
