
Amplifying Cognition

Latest episodes

Apr 9, 2025 • 35min

Kieran Gilmurray on agentic AI, software labor, restructuring roles, and AI native intelligence businesses (AC Ep84)

“Let technology do the bits that technology is really good at. Offload to it. Then over-index and over-amplify the human skills we should have developed over the last 10, 15, or 20 years.” – Kieran Gilmurray

About Kieran Gilmurray

Kieran Gilmurray is CEO of Kieran Gilmurray and Company and Chief AI Innovator of Technology Transformation Group. He works as a keynote speaker and fractional CTO, delivering transformation programs for global businesses. He is the author of three books, most recently Agentic AI, and has been named a top thought leader on generative AI, agentic AI, and many other domains.

Website: Kieran Gilmurray
X Profile: Kieran Gilmurray
LinkedIn Profile: Kieran Gilmurray

BOOK: Free chapters from Agentic AI by Kieran Gilmurray
Chapter 1: The Rise of Self-Driving AI
Chapter 2: The Third Wave of AI
Chapter 3: Agentic AI – Mapping the Road to Autonomy
Chapter 4: Effective AI Agents

What you will learn

Understanding the leap from generative to agentic AI
Redefining work with autonomous digital labor
The disappearing need for traditional junior roles
Augmenting human cognition, not replacing it
Building emotionally intelligent, tech-savvy teams
Rethinking leadership in AI-powered organizations
Designing adaptive, intelligent businesses for the future

Episode Resources

People: John Hagel, Peter Senge, Ethan Mollick

Technical & Industry Terms: Agentic AI, Generative AI, Artificial intelligence, Digital labor, Robotic process automation (RPA), Large language models (LLMs), Autonomous systems, Cognitive offload, Human-in-the-loop, Cognitive augmentation, Digital transformation, Emotional intelligence, Recommendation engine, AI-native, Exponential technology, Intelligent workflows

Transcript

Ross Dawson: Hey, it’s fantastic to have you on the show.

Kieran Gilmurray: Absolutely delighted, Ross. Brilliant to be here. And thank you so much for the invitation, by the way.

Ross: So agentic AI is hot, hot, hot — these new levels of autonomous or semi-autonomous AI. I want to really dig into that: you’ve got a new book out on agentic AI, looking particularly at the future of work — and, in the spirit of this show, at amplifying cognition. So I want to start off by asking: what is different about agentic AI from generative AI, which we’ve had for the last two or three years, in terms of our ability to think better, to perform our work better, to make better decisions? What is distinctive about this layer of agentic AI?

Kieran: I was going to say, Ross, comically: nothing, if we don’t actually use it. Because it’s like all the technologies that have come over the last 10–15 years. We’ve had every technology we have ever needed to make work more efficient, more creative, more innovative, and to get teams working together a lot more effectively. But let’s be honest, technology’s dirty little secret is that we as humans very often resist it. So I’m hoping that we don’t resist this technology the way we have slowly resisted others in the past — though they’ve all eventually come around and we’ve ended up working with them. But this one is subtly different.

So look, agentic AI is another artificial intelligence system. To see the difference, take some of the recent technology I describe as digital workforce or digital labor — go back eight years to robotic process automation, which was very much about helping people perform what were meant to be end-to-end tasks.
So in other words, the robots took the bulky work, the horrible work, the repetitive work, the mundane work and so on — all vital stuff to do, but not where you really want to put your teams, not where you really want to spend your time. And usually, all of that mundaneness sucked creativity out of the room. You ended up doing it most of the day, got bored, and then never did the innovative, interesting stuff.

Agentic is still digital labor sitting on top of large language models. And the difference here, as I described, is that this is meant to be able to act autonomously. In other words, you give it a goal and off it goes with minimal or no human intervention — you can design it either way, or both. And the systems are meant to be more proactive than reactive. They plan, they adapt, they operate in more dynamic environments. They don’t really need human input. You give them a goal, they try and make some of the decisions. And the interesting bit is, there is — or should be — a human in the loop in this. A little bit of intervention. But the piece here, unlike RPA — that was RPA 1, I should say, not the later versions because it’s changed — is its ability to adapt and to reshape itself and to relearn with every interaction.

Or take it at the most basic level — look at a robot under the sea trying to navigate, to build pipelines. In the past, it would get stuck. A human intervention would need to happen. It would fix itself. Now it’s starting to work things out and determine what to do. If you take that into business, for example, you can now get a group of agentic agents to go out and do an analysis of your competitors. You can get another agentic agent to do deep research, like McKinsey, BCG or somebody else would. You can get another agent to bring that information back, distill it and assemble it, and get an agent to turn that into an article. Get another agent to proofread it. Get another agent to pop it up onto your social media channels and distribute it. And get another agent to basically SEO-optimize it, and check and reply to any comments that anyone’s making. You’re sort of going, “Here, but that feels quite human.” Well, that’s the idea of this.

Now we’ve got generative AI, which creates. The problem with generative AI is that it didn’t do. In other words, after you created something, the next step was, well, what am I going to do with my creation? Agentic AI is that layer on top where you’re now starting to go, “Okay, not only can I create — I can decide, I can do and act.” And I can now make up for some of the fragility that exists in existing processes where RPA would have broken. Now I can sort of go from A to B to D to F to C, and if suddenly G appears, I’ll work out what G is. If I can’t work it out, I’ll come and ask a person. Now I understand G, and I’ll keep going forever and a day.

Why is this exciting — or interesting, I should say? Well used, this can now make up for all the fragility of past automation systems, where they always got stuck and we needed lots of people and lots of teams to build them. Whereas now we can let them get on with things. Where it’s scary is that now we’re talking about potential human-level cognition. So therefore, what are teams going to look like in the future? Will I need as many people? Will I be managing — as a leader — agentic agents plus people? Agentic agents can work 24/7. So am I, as a manager, now going to be expected to do that?
Then there’s its impact on the types of skills we need — not just leadership, but digital and data and technical and everything else. There’s a whole host of questions here, Ross — as many questions as there is new technology.

Ross Dawson: Yeah, yeah, absolutely. And those are some of the questions I want to ask you — to get the best possible answers we have today. In your book, you do emphasize that this is about augmenting humans. It is about how we can work with the machines and how they can support us, with human creativity and oversight at the center. But the way you’ve just laid it out, there’s a lot of overlap between what you’ve described and what is human work. So as a first step, think about individuals — professionals, knowledge workers. There are a few layers here. You’ve had your tools, your Excels. You’ve had your assistants, which can go and do tasks when you ask them. And now you have agents, which can go through sequences and flows of work in knowledge processes. So what does that mean today for a knowledge worker, as the enterprise starts to bring these in, or says, “Well, this is going to support you”? What are the sorts of things that are manifest now for an individual professional as this agentic workforce comes into play? What are the examples? What are ways to see how this is changing work?

Kieran Gilmurray: Yeah, well, let’s dig into that a little bit, because there’s a couple of layers to this. If you look at what AI potentially can do through generative AI, all of a sudden the question becomes: why would I actually hire new trainees, new labor? If you look at any of the studies that have been produced recently, there are two setups. So let me do one, which is: actually, we don’t need junior labor, because junior labor takes a long time to learn something. Whereas now we’ve got generative AI and other technologies, and I can ask it any question that I want, and it’s going to give me a pretty darned good answer. And therefore, rather than taking three and four and five years to train someone to get them to a level of competency, why don’t I just put in agentic labor instead? It can do all that low-ish level work, and I don’t need to spend five years on learning — I immediately have an answer. Now that’s still under threat because the technology isn’t good enough yet. It’s like the first scientific calculator version — they didn’t quite work. Now we don’t even think about it. So there is a risk that all of a sudden, agentic AI can get me an answer, or generative AI can get me an answer, that previously would have taken six or eight weeks.

Let me give you an example. I was talking to a professor from Chicago Business School the other day, and he went to one of his global clients. Normally the global client will ask about a strategy item. He would go away — he and a team of his juniors and peers would research this topic over six or twelve weeks. And then they would come back with a detailed answer, where the juniors would have gone round, done all the grunt work, done all the searching and everything else, and the seniors would have distilled it. This time — he’s actually written a version of a GPT — he fed it past strategy documents, and he fed in the client details. Now he did this in a private GPT, so it was clean and clear, and in two and a half hours, he had an answer.
It literally — his words, not mine — he went back to the client and said, “There you go. What do you think? By the way, I did that with generative AI and agentics.” And they went, “No, you didn’t. That work’s too good. You must have had a team on this.” And he said, “Literally not.” And he’s being genuine, because I know the guy — he’d put his reputation on it. So all of a sudden, all of those roles that might have existed could be impacted. But where do we then get the next generation of labor to come through in five and six and ten years’ time? So there are a lot of decisions that need to be made. Look, we’ve got Gen AI, we’ve potentially got agentic AI. We normally bring in juniors over a period of time, they gain knowledge, and as a result of gaining knowledge, they gain expertise. And as a result of gaining expertise, we get better answers, and they get more and more money. But now Gen AI is resulting in knowledge costing next to nothing. Where you and I would have gone to university — let’s say we did a finance degree — that would have lasted us 30 years. Career done. Tick. Now, actually, Gen AI can pretty much understand, or will understand, everything that we can learn on a finance degree, plus a politics degree, plus an economics degree, plus, plus, plus — all out of the box for $20 a month. And that’s kind of scary. So when it comes to who we hire, that opens up the question: do we have Gen AI and agentic labor, and do we actually need as many juniors? Now, someone’s going to have to press the buttons for the next couple of years, and any foresighted firm is going to go, “This is great, but people plus technology actually makes a better answer. I just might not need as many.” So now, when it comes to the actual hiring and decision-making — as to how I am going to construct my labor force inside of an organization — that’s quite a tricky question, if and when this technology, Gen AI and agentics, really ramps through the roof.

Ross Dawson: I think these are fundamentally strategic choices to be made. Crudely, it’s automate or augment. You could say, “Okay, how do we automate as many of the current roles we have?” Or you could say, “I want to augment all of the current roles we have, junior through to senior.” And there are a lot more subtleties around those strategic decisions. In reality, organizations will sit at many points between those two extremes.

Kieran Gilmurray: 100%. And that’s the question. Or potentially, at the moment, it’s actually, “Why don’t we augment for now?” Because the technology isn’t good enough to replace. And it isn’t — it still isn’t. And I’m a fan of people, by the way — don’t get me wrong. So anyone listening to this should hear that. I believe great people plus great technology equals an even greater result. The research on the technology as it exists at the moment — from Harvard, Ethan Mollick, HBR, Microsoft, you name it, it’s all coming out at the moment — says that if you give people Gen AI technology, of which agentic AI is one component, they report: “I’m more creative. More productive. And, oddly enough, I’m actually happier.” It’s breaking down silos. It’s allowing me to produce more output — between 10 and 40% more — but also higher-quality output, and, and, and. So at the moment, it’s an augmentation tool. But we’re training, to a degree, our own replacements.
Every time we click a thumbs up, a thumbs down. Every time we redirect the agentics or the Gen AI to teach it to do better things — or the machine learning, or whatever else it is — then technically, we’re making it smarter. And every time we make it smarter, we have to decide, “Oh my goodness, what are we now going to do?” Because previously, we did all of that work. Now, that for me has never been a problem. Because for all of the technologies over the decades, everybody’s panicked that technology is going to replace us. We’ve grown the number of jobs. We’ve changed jobs. Now, this one — will it be any different? Potentially — because you and I never worried, and our audience never worried too much, when an EA was automated out of a job. When the taxi driver was augmented and automated out of a job. When the factory worker was augmented out of a job. Now we’ve got a decision, particularly when it comes to so-called knowledge work. Because remember, that’s the expensive bit inside of a business — the $200,000 salaries, the $1 million salaries. Now, as an organization, I’m looking at my cost base, going, “Well, I might actually bring in juniors and make them really efficient, because I can get a junior to be as productive as a two-year qualified person within six months, and I don’t need to pay them that amount of money.” And/or, actually, “Why don’t I get rid of my seniors over a period of time? Because I just don’t need any.”

Ross Dawson: Those are things that some leaders will do. But it comes back to the theme of amplifying cognition. The real nub of the question is: yes, you can say, “All right, now we are training the machine, and the machine gets better because it’s interacting — we’re giving it more work.” But it’s really about finding the ways in which how we interact also increases the skills of the humans. John Hagel talks about scalable learning. In fact, Peter Senge used to talk about organizational learning — and that’s no different today. We have to be learning. So, yes, as we engage with the AI — and as you rightly point out — we are teaching and helping the AI to learn. But we need to build the processes, systems, structures, and workflows where the humans in them are not static and stagnant as they use AI more, but become more competent and more capable.

Kieran Gilmurray: Well, that’s the thing we need to do, Ross. Otherwise, what we end up with is something called cognitive offload — where now, all of a sudden, I’ll get lazy, I’ll let AI make all of the decisions, and over time, I will forget and not be valuable. For me, this is a question of great potential with technology. But the real question comes down to: okay, how do we employ that technology? And to your point a second ago — what do we do as human beings to learn the skills that we need to learn to be highly employable? To create, to be more innovative, more creative using technology?

Ross: That’s the question I just asked you.

Kieran Gilmurray: 100%, and this is — this is literally the piece here, so—

Ross: That’s the question. So do you have any answers to that?

Kieran: Of course. Of course. Well, for me, AI is massive — and let me explain that, because everybody thinks it’s new. We’ve only had generative AI for the last couple of years, but AI has been around for 80-plus years. It’s what I call an 80-year-old overnight success story.
Everybody’s getting excited about it. Remember, the excitement is down to the fact that I — or you — can now interact with technology in a very natural way and get answers that we previously couldn’t. So now, all of a sudden, we’re experts in everything across the world. And if you use it on a daily basis, all of a sudden, our writing is better, our output’s better, our social media is better. So the first bit is: just learn how to use and how to interact with the technology.

Now, we touched on this a moment ago — but hold on a second here — what happens if everybody uses it all the time, the AI has been trained, and there’s a whole host of new skills? Well, what will I do? This, for me, has always been the case. Technology has always come. There are a lot fewer saddlers than there are software engineers. There might be a lot fewer software engineers in the future. So therefore, what do we do? Well, my answer is this, and it has been the same regardless of the technology: let technology do the bits that technology is really good at. Offload to it. You still need to understand and develop your digital, your AI, your automation, your data literacy skills — without a doubt. You might do a little bit of offloading, because we don’t actually think about scientific calculators anymore; we get on with it. We don’t go into Amazon and work out all of our product choices ourselves, because it’s got a recommendation engine. So let it keep doing all its stuff.

Whereas, as humans, I want to develop greater curiosity. I want to develop what I would describe as greater cognitive flexibility. I want to use the technology — now that I’ve got this — to ask: how can I produce even better, greater outputs, outcomes, better quality work, more innovative work? And part of that is now going, “Okay, let the technology do all of its stuff. Free up tons of hours,” because what used to take me weeks takes me days. Now I can do other stuff, like wider reading. I can partner with more organizations. I can attempt to do more things in the day — whereas in the past, I was just too busy trying to get the day job done.

The other thing I would say: companies need to develop emotional intelligence in people. Because if I can get the technology to do the stuff, I still need to engage with the tech — but more importantly, I’m now freed up to work across silos, to work across businesses, to bring in different partner organizations. And statistically, only 36% of us are actually emotionally intelligent. Now, AI is an answer for that as well — but emotional intelligence should be something I would be developing inside of an organization. A continuous innovation mindset. And I’d be teaching people how to communicate even better. Notice I’m letting the tech do all the stuff that tech should do regardless. Now I’m just over-indexing and over-amplifying the human skills that we should have developed over the last 10, 15, or 20 years.

Ross Dawson: Yeah. And to your point — this comes back to people working together. One of the interesting parts of your book is around team dynamics. So there’s a sense of: yes, we have agentic systems, and this starts to change the nature of workflows. Workflows involve multiple people, and they involve AI agents as well. So as we are thinking about teams — as in multiple humans assisted by technology — what are the things we need to put in place for effective team dynamics and teamwork?
Kieran Gilmurray: Yeah, so — so look, what you will potentially see moving forward is that mixture of agentic labor working with human labor. And therefore, from a leadership perspective, we need to teach people to lead in new ways. Like, how do I apply agentic labor and human labor? And in what proportion? What bits do I get agentic labor to do? What bits do I get human labor to do? Again, we can’t hand everything over to technology. When is it that I step in? Where do I apply humans in the loop? When you look at agentic labor, it’s going to be able to do things 24/7, but as people, we physically and humanly can’t. So, how — when am I going to work? What is the task that I’m going to perform? As a leader or as a business — what are the KPIs that I’m going to measure myself on, and my team on? Because now, all of a sudden, my outputs potentially could be greater, or I’m asking people to do different roles than they’ve done in the past, because we can get agentic labor to do it. So there’s a whole host of what I would describe as current management considerations. Because, let’s be honest — just as when we introduced ERP, CRM, factory automation, or something else — it just changed the nature of the tasks that we perform. So this is thinking through: where is the technology going to be used? Where should we not use it? Where should we put people? How am I going to manage it? How am I going to lead it? How am I going to measure it? These are just the latest questions that we need to answer inside of work. And again, from a skillset perspective — both for leadership and for getting my human team to do particular work — how do I onboard them? How do I develop them? What are the skills that I’m now looking for when I’m doing recruitment? What are the career paths that I’m going to put in place, now that we’ve got human plus agentic labor working together? Those are all conversations that managers, leaders, and team leaders need to have — and strategists need to have — inside of businesses. But it shouldn’t worry businesses, because again, we’ve had this same conversation for the last five decades. It’s just been different technology at different times, where we had to suddenly reinvent what we do, how we do it, how we measure it, and how we manage it.

Ross Dawson: So what are the specifics of how team dynamics might work using agentic AI in a particular industry or a particular situation? Any examples? Let’s ground this.

Kieran Gilmurray: Yeah, so let me ground it in physical robots before I come to software robots, because that is what this is: software labor, not anything else. Look at how factories have evolved over the years — take Cadbury’s factory in the UK. At one stage, Cadbury’s had thousands and thousands of workers, and everybody ended up engaging on a very human level — managing people, conversations every day, orchestration, organization. All of the division-of-labor stuff happened. Now, when you go into Cadbury’s factory, it’s hugely automated — like other factories around the world. So now we’re having to teach people almost to mind the robots. Now we have far fewer people inside of our organizations. And hopefully — to God — this won’t happen in what I’d describe as the knowledge worker space, but we’re going to teach people how to build logical, organized, sequential things. Because breaking something down into a process to build a machine — it’s the same thing when it comes to software labor.
How am I going to break down and deconstruct a process into something else? So the mindset needed to actually put software labor into place is different from anything else that we’ve done. Humans were messy. Robots can’t be. They have to be very logical pieces. In the past, we were used to dealing with each other. Now I’m going to have to communicate with a robot. That’s a very different conversation. It’s non-human. It’s silicon — not carbon. So how do I engage with a robot? Am I going to be very polite? And I see a lot of people saying, “Please, would you mind doing the following?” No — it’s a damn robot. Just tell it what to do. My mindset needs to change. In the past, when I was asking someone to do something, I might say, “Give me three things” or “Can you give me three ideas?” Now, I’ve got an exponential technology, and my expectations and requests of agentic labor are going to vary. But I need to remember — I’m asking a human one thing and a bot another. Let me give you an example. I might say to you, “Ross, give me three examples of…” Well, that’s not the mindset we need to adopt when it comes to generative AI. I should be going, “Give me 15, 50, 5,000,” because it’s a limitless vat of knowledge that we’re asking.

And then I need to practice and build human judgment — to say, “Actually, I’m not going to cognitively offload and let it think for me and just accept all the answers.” I’m now going to have to work with this technology and other people to develop that curiosity, develop that challenging mindset, to teach people how to do deeper research, to fact-check everything that I’m being told. To understand when I should use a particular piece of information that’s been given to me — and hope to God it’s not biased, not hallucinated, or anything else — and that it’s actually a valuable knowledge item that I should be putting into a workflow or a project or a particular document or something else. So again, it’s just working through: what is the technology in front of me? What’s it really good at? Where can I apply it? And understanding where I should put my people, and how I should manage both. What are the skills that I need to teach my people — and myself — to allow me to deal with all of this potentially fantastic, infinite amount of knowledge and activity that will hopefully autonomously deliver all the outcomes that I’ve ever wanted? But not unfettered. And not left to its own devices — ever. Otherwise, we have handed over human agency and team agency — and that’s not something or somewhere we should ever go. The day we hand everything to the robots, we might as well just go to the care home and give up.

Ross Dawson: We’ll be doing that soon. So now, let’s think about leadership. You’ve alluded to this already — a lot of what you’ve said has been about the questions, issues, and challenges that leaders at all levels need to engage with. But this changes, in a way, the nature of leadership. As you say, you’ve got digital labor as well as human labor. The organization has a different structure. It impacts the boundaries of organizations and the flows of information and processes across organizational boundaries. So what is the shift for leaders? And in particular, what are the things that leaders can do to develop their capabilities for a somewhat different world?

Kieran Gilmurray: Yeah, it’s interesting. So I think there’ll be a couple of different worlds here.
Number one is, we will do what we’ve always done, which is: we’ll put in a bit of agentic labor, and we’ll put in a bit of generative AI, and we’ll basically tweak how we actually operate. We’ll just make ourselves marginally more efficient. Because anything else could involve the redesign and restructure of the organization, which could involve the restructure and redesign of our roles. And as humans, we are very often very change-resistant. I don’t mind technology that I understand, and I don’t mind technology that makes me more productive, more creative. But I do mind technology that could actually disrupt how I lead, where I actually fit inside of the organization, and so on. So for those leaders, there’s going to be a minimal amount of change — and there’s nothing wrong with that. That’s what I call the “taker philosophy,” because you go: taker, maker, shaper — and I’ll walk through those in a second. The taker says: I’ll just take another great technology and I’ll be more productive, more creative, more innovative. And I recommend every business does that at this moment in time. Who wouldn’t want to be happier with technology doing greater things for you? So go — box number one. And therefore, the skills I’m going to have to learn — not a lot of difference. Just new skills around AI: understanding bias, hallucinations, understanding cognitive offloading, understanding where to apply the technology and where not to. And by “not,” I mean: very often people point technology at something that has no economic value. They waste time, waste money, waste energy, get staff frustrated — and so on. Those are just skills people have to learn. It could be any technology, as I’ve said.

The other method of doing this is almost what I describe as the COVID method — I need to explain that statement. When COVID came about, we all worked seamlessly. It didn’t matter. There were no boundaries inside of organizations. Our mission was to keep our customers happy. And therefore, it didn’t matter about the usual politics, the usual silos, or anything else. We made things work, and we made things work fast. What I would love to see organizations doing — and very few do it — is redesigning and re-disrupting how they actually work. It’s not that I keep doing what I’m doing and, now that I’ve got a technology, ask “Where do I add it on?” — as in two plus one equals three. What I’m saying is: how can I fundamentally reshape how I deliver value as an organization? Working back from the customer — who will pay a premium for this — how do I reconstruct my entire business in terms of leadership, in terms of people, in terms of agentic and human labor, in terms of open ecosystems and partnerships and everything else, to deliver in a way that excites and delights? Take the difference between a bookstore and Amazon — I never, or rarely, go into a bookstore anymore. I now buy on Amazon almost every time, without even thinking about it. If I look at AI-native labor — they’re what I describe as Uber’s children. Their experiences of the world and how they consume are very different from what you and I have constructed. Therefore, how do I create what you might call AI-native intelligent businesses that deliver in a way that is frictionless and intelligent?
And that means: intelligent processes, intelligent people using intelligent technology, intelligent leadership — forgetting about the silos and breakdowns and everything else that exists politically inside of organizations — and applying the best technology. Be it agentics, be it automation, be it digital, be it CRM, ERP — it doesn’t really matter what it is. Having worked back from the customer, design an organization to deliver on its promise to customers — to gain a competitive advantage. And those competitive advantages will get smaller and smaller, because technology can be copied ever more quickly. Therefore, my business strategy won’t be 10 years. It possibly won’t be five. It might be three — or even less. But my winning as a business will be my ability to construct great teams. And those great teams will be great people plus great technology — to allow me to deliver something digitally and intelligently to consumers who want to pay a premium, for as long as that advantage lasts. And it might be six months. It might be twelve months. It might be eighteen months. So now we’re getting to a phase of almost fast technology — just like we have fast fashion.

But the one thing we don’t want to do is play fast and loose with our teams. Because ultimately, I still come back to the core of the argument — that it rests on great people who are emotionally intelligent, who’ve been trained to question everything they’re given, who are curious, who enjoy working as part of a team in a culture — and that piece needs to be taken care of as well. Because if you just throw robots at everything and leave very few people, then what culture are you actually trying to deliver for your staff and for your customers? How do I get all of this to deliver in a way that is effective, affordable, operationally efficient, profitable — but with great people at the core, who want to continue being curious, creating new and better ways of delivering in a better organization? Not just in the short term — because we’re very short-termist — but how do I create a great organization that endures over the next five or ten years? By creating flexible labor and flexible mindsets, with flexible leaders organizing and orchestrating all this — to allow me to be a successful business. Change is happening too quickly these days, and change is going to get quicker. Therefore, how do I develop an adaptive mindset, an adaptive labor force, and an adaptive organization that’s going to survive six months, twelve months — and maybe, hopefully to God, eighteen months plus?

Ross Dawson: Fantastic. That’s a great way to round out. So where can people find out more about your work?

Kieran Gilmurray: Yeah, look, I’m on LinkedIn all the time — probably too much. I should get an agentic labor force to sort that out for me, but I’d much prefer authentic relationships to anything else. Find me on LinkedIn — Kieran Gilmurray. I think there are only two of me: one’s in Scotland, who is related some way back, and then there’s me, the Irish one. Or www.kierangilmurray.com is where I publish far too much stuff and give far too many things away for free. But I have a philosophy that a rising tide lifts all boats. The more we share, the more we give away, the more we benefit each other. So that’s going to continue for quite some time. I have a book out on agentic AI. Again, it’s being given away for free. Ross, if you want to share it, please go for it, sir, as well.
As I said, let’s continue this conversation — but in a way that isn’t about replacing people. It’s about great leadership, great people, and great businesses that have people at their core, with technology serving us — not us serving the technology.

Ross: Fabulous. Thanks so much, Kieran.

Kieran: My pleasure. Thanks for the invite.
Apr 2, 2025 • 32min

Jennifer Haase on human-AI co-creativity, uncommon ideas, creative synergy, and humans outperforming (AC Ep83)

“We humans often tend to be very restricted—even when we are world champions in a game. And I’m very optimistic that AI will surprise us, with very different ways of solving complex problems—and we can make use of that.” – Jennifer Haase

About Jennifer Haase

Dr. Jennifer Haase is a researcher at the Weizenbaum Institute and a lecturer at Humboldt University and the University of the Arts Berlin. Her work focuses on the intersection of creativity, artificial intelligence, and automation, including AI for enhancing creative processes. She was named one of the 100 most important minds in Berlin science.

Website: Jennifer Haase
LinkedIn Profile: Jennifer Haase

What you will learn

Stumbling into creativity through psychology and tech
Redefining creativity in the age of AI
The rise of co-creation between humans and machines
How divergent and reverse thinking fuel innovation
Designing AI tools that adapt to human thought
Balancing human motivation with machine efficiency
Challenging assumptions with AI’s unconventional solutions

Episode Resources

Websites & Platforms: jenniferhaase.com, ChatGPT

Concepts & Technical Terms: Artificial Intelligence (AI), Human-AI Co-Creativity, Generative AI, Large Language Models (LLMs), ChatGPT, GPT-4, GPT-3.5, GPT-4.5, Business Informatics, Psychology, Creativity, Divergent Thinking, Convergent Thinking, Mental Flexibility, Iterative Process, Everyday Creativity, Alternative Uses Test, Creativity Measures, Creative Performance

Transcript

Ross Dawson: Jennifer, it’s a delight to have you on the show.

Jennifer Haase: Thanks for inviting me.

Ross: So you are diving deep, deep, deep into AI and human co-creativity. Just to step back a little bit — I’d love to hear how you embarked on this journey. We can fill in more about what you’re doing now, but how did you come to be on this path?

Jennifer: I would say overall, it was me stumbling into tech more and more and more. So I started with creativity. My background is in psychology, and I learned about the concept of creativity in my Bachelor studies, and I got so confused, because what I was taught was nothing like what I thought creativity was — or how it felt to me. It took me years to understand that there are a bunch of different theories, and it was just one that we were taught. But that was the spark of curiosity for me to try to understand this concept of creativity. And I did that for years. Then, by pure luck, I started a PhD in Business Informatics, which is somewhat technical. The lens through which I looked at creativity shifted from the psychological perspective more into the technical realm, and I looked at business processes and how they are advanced by general technology — basic software, basically. Then I morphed — also by sheer luck — into computer science from a research perspective. And that coincided with ChatGPT coming around and the huge LLM boom that happened two, three years ago. And since then, I’ve been deep in there. I just fell into this rabbit hole.

Ross: Yeah, well, it’s one of the most marvelous things. The very first use case for most people, when they first use ChatGPT, is: write a poem in the style of whatever — essentially creative tasks. And it does those pretty decently to start off with — until you start to see the limitations.

Jennifer: Yeah, and I think it did so much, from so many different perspectives. As I said, I studied creativity for quite a while, but it was never as big of a deal, let’s say. It was just one concept of many.
But since AI came around, I think it really threatened, to some extent, what we understood about creativity, because creativity was always thought of as this pinnacle of humanness — right next to ethics. Intelligence had its bumps two or three decades ago, but for creativity, this was rather new. So the debate started about what it really means to be creative. I think a lot of people also try to make it even bigger than it is. But a lot of it is as simple as this: much of creativity — in poetry, for example — is language understanding, right? And LLMs are really good at that. And that’s just the case. It’s fine. I think we can still live happy lives as humans, even though technology takes a lot over.

Ross: Yes. So humans are creative in all sorts of dimensions. AI has complementary — let’s say, also different — capabilities in creativity. And in some of your research, you have pointed to different levels of how AI is supporting us in various guises — from being a tool and assistant, through to what you described as co-creation. So what does that look like? What are some of the manifestations of human-AI co-creativity, which implies peers with different, complementary capabilities?

Jennifer: Yeah, I think the easiest way to look at it is to imagine working creatively with another person who is really competent — except the person is a technical version of that, and usually we call that AI, right? Or generative AI these days. So the idea is that you can work with a technical tool at an eye-to-eye level. The tool would have — well, now we’re getting into the realm of using psychological terms, right — but the tool would have a decent enough understanding that it would appear competent in the field you want to create in. I think the biggest difference from the most common tools we have right now — which I would argue are not on this level yet — is that tools like ChatGPT and others follow your lead, right? If you type in something, they will answer, sometimes more or less creatively. You can take that as inspiration for your own creativity and your own creative process, and that really holds big potential. It’s great. But what we are envisioning — and are seeing in some parts already happening in research — and I think this is the direction we’re going in and really want to push further, is tools that can also come up with ideas, or important input for the creative problem. When I say “on their own,” I don’t mean that they are, I don’t know, entities that just do things. But they contribute a significant — really a significant — part of the creative process.

Ross: So, we’ll come back a little bit to how AI creativity contrasts with human creativity. But just thinking about this co-creative process — from your research or other research that you’re aware of — what are the success factors? What are the things that make that co-creation process more likely to be fruitful than not?

Jennifer: I think it starts really with competence. And this is something, in general, that generative AI just became extremely good at, right? They know, so to speak, a lot and hold a lot of knowledge, and that is very, very helpful — because we need broad associations, coming mostly from different fields, and to connect them to come up with something we consider new enough to call creative. That is a benefit that is beyond human capabilities, right? That is what we see those tools doing right now — that is one part. But that is not all.
What you also need is the spark of: why would something need to be connected? And I think that is where raising the creative questions — coming up with the goal that you want to achieve — is still the human part. But it doesn’t need to be; that’s all I’m saying. For now, though, it still is.

Ross: So there are some very crude workflows — as in, you get AI to ideate, then humans select from those and add other ideas, or you start with humans and then AI combines and recombines. Are there any particular sequences or flows that seem to be more effective?

Jennifer: It’s interesting. I think this is also an interesting question for human creative work alone, even without technology — like, how do you achieve the good stuff, right? What you just described, for me, would be kind of like the traditional way: oh, I have a need, or I have a want — I want to create something, or I want to solve something, or I need a solution for a certain problem. I describe that, and I iterate toward a best solution, right? This is part of what we call the divergent thinking process. And then, at a certain point, you choose a specific solution — so you converge. But I think where we mostly get the more interesting creative output — for humans, and now especially with AI — is when you kind of reverse the process. So let’s assume you have a solution and you need to find uses for it. For example, you have an invention. There’s this story told about the Post-its, you know, the yellow Post-its. They were kind of invented because someone came up with glue that does not stick at all — like, really bad glue. And they had this as the final product. Now it’s like, “Okay, where can you make use of it?” And then they came up with, “Oh, maybe, if you put it on paper, you can come up with these sticky notes that stick just enough.” So they hold onto surfaces, but they don’t stick forever, so you can easily remove them. They’re very practical in our brainstorming work, for example. And this kind of reverse thinking process — it’s much more random. And for many people, it’s much more difficult to open up to all the possibilities there can be. What I’ve seen is that if you try to poke LLMs with such very diverse, open questions, what comes out can be very interesting.

Ross: Though, to your point, this is the way it works — the human frames, the AI can respond. But the human needs to frame it, as in, “Here is a solution. What are ways to apply it?”

Jennifer: And all the examples I’m thinking of right now are about what works with the tools that we have — with LLMs. And I think what you were asking me before, about the fourth level that we described with this co-creation — those are tools that work a bit differently. They are tools that, for now, mostly exist in research, because you still need a high level of computational knowledge. The colleagues I work with are from computer science, or are mathematicians, who program tools that know some rules of the game, or some — let’s call them — boundary conditions of the creative problem we are dealing with. And then the magic — or the black-box magic — of AI happens, and something comes out. Sometimes we don’t really understand what was going on there; we just see the results. And then, with such results, we can iterate. Or maybe something goes in the direction we assume could be part of the solution.
So it becomes this iterative process: an LLM or AI tool does something, we see the results, say yes or no, nudge it in different directions, and so, overall, come up with a potentially proper solution. That is how it works, at least in the examples that we see. And if you have such a process and look over what was happening, often what we see is that LLMs or AI tools in general — with their, let’s call it, broad knowledge, or the very intense, broad computational capacities that they have — do stuff differently than we as humans tend to do it. And this is where it becomes interesting, right? Because now we are not bound by this common way of thinking and finding associations, or iterating smaller solutions. Now we have this interesting artificial entity that finds very different ways of solving complex problems — and we can make use of that. Of course, we can learn from that.

Ross: Absolutely. And I think you’ve pointed to some examples in your papers. We’ve been quite conceptual so far — are there examples you can give of what people have done, projects you’ve been involved with, or just types of challenges?

Jennifer: To explain the mechanism that I’m talking about — I think the first artificial example considered properly creative was when AlphaGo, the program developed to play Go — the game somewhat similar to chess, but not chess — was able to come up with play moves which were very uncommon. Still within the realm of possibilities, but very, very uncommon compared to how humans used to play. And this was new back in 2016, right? That was when DeepMind, from Google, built this tool and kind of revolutionized AI research. What it showed us is exactly this mechanism of these tools: although they are still within the realm of possibilities — still within what we consider the rules of the game — it produced some moves which were totally uncommon and surprising. And I think this shows us that we humans often tend to be very restricted. Even when we are world champions in a game, we are still restricted to what we commonly do — what is considered a good rule of thumb for success. And I’m very optimistic that AI will surprise us, in this direction — with this mechanism — quite a lot in the future.

Ross: Yeah, and certainly, related to what you’re describing, some similar algorithms have been applied to drug discovery and so on. Part of it is the number-crunching, machine learning piece, but part of it is also being able to find novel ways of folding proteins or other combinations which humans might not have envisaged.

Jennifer: Yeah, exactly. And it’s in part because these machines are just so much more advanced in how much information they can hold and combine. This is, in part, purely computational. It’s a bit unfair to compare that to our limited brains. But it’s not just that. It’s not just pure information, right? It’s also how this information is worked upon, or the processes — how information is combined, and so on. So I think there are different levels at which these machines can advance our thinking.

Ross: So one of the themes you’ve written about is designing for synergies — how we can design so that we are able to be complementary, as opposed to just delegating or substituting with AI. So what are those design factors, or design patterns, or mentalities we need?
Jennifer: Well, first up, I will say — I think it’s extremely complicated. Not complicated, exactly, but it will become a huge issue. Because if technology becomes so good — and we see that right now already with LLMs like ChatGPT — it’s so easy for us. And I mean that in a very neutral way. But lazy humans as we are — I think we are inherently lazy — it’s really tough for us to stay motivated to think on our own, to some degree at least, and not have all the processes taken over by AI. So, with that said, I think the most essential, most important part whenever we are working with LLMs is: we have to keep our motivation in the loop — and our thinking, to some degree, in the loop — within the process. And so, we need a design which engages us as humans. You can see it easily right now with LLMs: you have to take the first step — typing some kind of prompt, or, even in a conversation, you have to initiate it, right? You have to come up with, maybe even, your creative task at first. And I think this will always be true, because we humans control technology by developing it, right? But even when you’re more on the user end — being forced to stay in the loop, think it through, control the output, and so on — that is one part. But I think what it also needs, especially for the synergy, is for the technology to adapt to us — to serve us, so to speak. And I think this is an aspect that is a little bit underdeveloped right now. What do I mean by that? I want a tool that serves me in my thinking. It should be competent enough that I perceive it as a buddy — eye to eye. That is the vision that I have. But I still always want the control. And I want it to adapt to me, so that I don’t have to adapt too much to the tool. Right now, we’re mostly just provided with tools that we need to learn how to deal with. We need to understand how prompting works, etc., etc. And I want that reversed. I want tools which are competent enough to understand, “Okay, this is Jenny. She is socialized in this way. She usually speaks German” — whatever kind of information would be important to get me involved and understand me better. I think this is the vision for synergy that I’m thinking of.

Ross: No, I really like that — the idea of designing for engagement. What is going to make us want to be engaged, to continue the process, to want to be involved, as opposed to just doing the hard work of telling the AI, again and again, to do stuff?

Jennifer: Yes. And also — I mean, I work a lot with ChatGPT and other similar tools — and sometimes, I hope I don’t spoil too much, I find myself copy-pasting too much, because there’s nothing left for me to do. To some degree, it can happen that the tools are too good, right? Because they are meant to create the output as the output, but they are not meant to be part of this iterative thinking process. I think you could design them much better, so that they go hand in hand with what I’m thinking and what I want to advance. Maybe.

Ross: Yeah, yes — otherwise the onus is on the human to do it all. So in one of your papers, you used a number of the different models, and I believe you found that GPT-4 was the best for a variety of ideation tasks. But you’ve also done some more recent research. I’d love to hear about strengths, weaknesses, or different domains in which the different models are good, or—

Jennifer: Yeah, that’s quite interesting, right?
Because — okay, going back to the start of the big boom of LLMs, right? I think it was early ’23 when ChatGPT came around — well, the end of ’22. Okay, it took a while until it reached Germany — it was early ’23 for us. No, just joking. But around this time, what we found were intense debates arguing that, although these tools are generative, they cannot be creative. And that was the stance held most tightly — maybe especially by creativity researchers, and mostly psychologists, right? As I mentioned before, there’s a little bit of this fear that too much is being taken over by technology. I think that is a strong contributor — even among researchers. So what we went out to do was basically to give LLMs the same creativity measures as we would give humans. When you want to know if a person holds potential for creative thinking, you ask them creative questions, and they have to perform — if they want to. And that’s exactly what we did with LLMs. Back in the day, we did it with the LLMs that were easily reachable and free on the market — like ChatGPT. And now, we have redone it with the current LLMs, with the current versions. And — I don’t know if you’ve seen this — but when new versions of most LLMs come out, they are usually advertised as more competent and more creative. So we questioned that. Is that really true? Is ChatGPT 4.5, for example — the current version — more creative than 3.5 was back in the day? And what we find is — it’s so messy, actually. Because for some tools, yes, they are a bit more creative than they used to be two years ago. But the picture is really not clear. You cannot really argue that the current versions we have are more creative than those of two years ago — or even more creative than humans. It’s been interesting. We’re not really sure why. But what we can say is that, on average, these tools are good at coming up with everyday-like uses or ideas for everyday problems. They are, on average, as good as humans — random humans picked from surveys. And I think that is good news, right? Because LLMs are easier to ask than random humans most of the time. But the promise that they become more and more creative with every new release, from our perspective, does not hold up. So that is the bigger picture. Let’s start there.

Ross: That’s very interesting. So this is using some of the classic psychological creativity tests. You’re taking what has long been used for assessing creativity in humans, and simply applying exactly the same tests to LLMs?

Jennifer: Yes — and to be fair, within the creativity research community, we agree that those tests are not great. They’re really pragmatic. We totally agree on that, so we don’t have to fight over this point. But it’s commonly what we use to assess human potential for creative thinking — or, more precisely, for divergent thinking — which is an important, but only one, aspect of the whole creative journey, let’s say. And it basically just asks how good you are, on the spot, at coming up with alternative uses for everyday products like a shoe or a toothbrush or a newspaper. Of course, you can come up with obvious uses. But then there are the creative ones, which are not so easy to think of, right? And LLMs are good at that. They will deliver a lot of ideas, and quite a few of those are considered original compared to human answers.
We have also now used another test, which is even a little more arbitrary, but it has proved to be a reasonably good predictor of creative performance overall. In it, you are asked to come up with 10 words which are as different from each other as possible. Very pragmatic again. And these LLMs — as they, you know, know one thing, and that is language — are, again, quite good at that on average. But it’s not that you see they are above average, or that a specific LLM is above average. We see some variety, but the picture, I would say, is not too clear. Also worth mentioning — and this was a little bit surprising to us, actually — is that we asked those LLMs many, many times, and the variance in terms of originality is quite huge. So if you ask an LLM like ChatGPT for creative ideas, sometimes you get quite a creative output, and sometimes it’s just average.

Ross: You did say that you’re comparing them to random humans. So does that mean that humans generally perceived to be creative are significantly outperforming the LLMs on these tasks?

Jennifer: Yeah, yeah. But the thing is, there is usually no creative human per se. There’s nothing about a human that makes them creative per se. We tend to differ a little bit in how well we perform on such tasks — yes, we do differ in our mental flexibility, let’s say. But a creative individual is usually an individual who has found a very good fit between their thinking, their experience, and the kind of creative task they’re doing. And just think about it: creativity can be found in all sorts of domains, right? People can be good or less good in those domains, and that correlates highly with creativity. So when we ask about general ideas for everyday tasks, there is not really “the creative individual,” right? There are motivated individuals, which makes a huge difference for creativity measures. Being motivated and engaged is something we take for granted. For LLMs, if you want to compare, the motivation is always there. But what we see in terms of the best answers — the most original answers in our data sets — is that most of the time, not all, but most of the time, they come from humans.

Ross: Very interesting. So, this is the Amplifying Cognition podcast, so I want to round up by asking: what’s the state of the nation, or state of the world, and where are we moving in terms of being able to amplify and augment human cognition and human creativity? That could be simply improving human creativity, or collaborating, or this co-creativity.

Jennifer: I think the potential for significant improvements and amplifications has never been better. But at the same time, I think the risks have never been higher. And that is because, as I said, we are lazy people. That’s just part of what being human means — and that is fine — but it also means there is a great risk that we don’t use these technologies for us, but end up being used by them, basically, right? We can use ChatGPT and other tools to do the task for us, or we can use them to do the task more efficiently and better with them. This difference can be very gradual, very minor, but it makes the whole difference between success and big dependencies — and potentially failure.

Ross: Yeah, and I think you make a point — which I often also make — which is that over-reliance is potentially the biggest risk of all.
Where, if we start to just sort of say, “This is good, I’ll let the AI do the task, or the creativity, or whatever,” it’s dangerous on so many levels. Jennifer: Because it does good enough most of the time, right? Technology became so good for many tasks—not all, but many tasks—that it does it good enough. And I think that is exactly where we have the potential to become so much better, right? Because if you now take the time and effort that we usually would put into the task itself, we could just improve on all levels. And that is the potential I’m talking about. I think a lot is to be advanced, and a lot is to be gained—if we play it right. Ross: And so, what’s on your personal research agenda now? Jennifer: Oh, I fell into this agentic LLM hole. Yeah, no, no—it’s not just looking at individual LLMs, but to chain them and combine them into bigger, more complex systems to have—or work on—bigger and complex issues, mostly creative problems, and see where the thinking of me and the tool, yeah, excels, basically, right? And where do I, as a human, have to step in to fine-tune specific bits and pieces and really find the limits of this technology if you scale it up? That’s my agenda right now. Ross: I’m very much looking forward to reading the research as you publish it.  Jennifer: Thank you.  Ross: Is there anywhere people can go to find out more about your work? Jennifer: Yeah, I collect everything on jenniferhaase.com. That’s my web page. It’s hugely up to date there, and you can find talks and papers. Ross: Fabulous. Love the work you’re doing. Jennifer, thanks so much for being on the show and sharing. Jennifer: Thank you very much. It was—yeah, I love to talk about that, so thanks for inviting me. The post Jennifer Haase on human-AI co-creativity, uncommon ideas, creative synergy, and humans outperforming (AC Ep83) appeared first on Amplifying Cognition.
Mar 26, 2025 • 39min

Pat Pataranutaporn on human flourishing with AI, augmenting reasoning, enhancing motivation, and benchmarking human-AI interaction (AC Ep82)

“We should not make technology so that we can be stupid. We should make technology so we can be even smarter… not just make the machine more intelligent, but enhance the overall intelligence—especially human intelligence.” –Pat Pataranutaporn About Pat Pataranutaporn Pat Pataranutaporn is Co-Director of MIT Media Lab’s new Advancing Humans with AI (AHA) research program, alongside Pattie Maes. In addition to extensive academic publications, his research has been featured in Scientific American, MIT Tech Review, Washington Post, Wall Street Journal, and other leading publications. His work has been named in TIME’s “Best Inventions” lists and Fast Company’s “World Changing Ideas.” Websites: MIT Media Lab Advancing Humans with AI (AHA) LinkedIn Profile: Pat Pataranutaporn What you will learn Reimagining AI as a tool for human flourishing Exploring the Future You project and long-term thinking Boosting motivation through personalized AI learning Enhancing critical thinking with question-based AI prompts Designing agents that collaborate, not dominate Preventing collective intelligence from becoming uniform Launching AHA to measure AI’s real impact on people Episode Resources People Hal Hershfield Pattie Maes Elon Musk Organizations & Institutions MIT Media Lab KBTG ACM SIGCHI Center for Collective Intelligence Technical Terms & Concepts Human flourishing Human-AI interaction Digital twin Augmented reasoning Multi-agent systems Collective intelligence AI bias Socratic questioning Cognitive load Human general intelligence (HGI) Artificial general intelligence (AGI) Transcript Ross Dawson: Pat, it is wonderful to have you on the show. Pat Pataranutaporn: Thank you so much. It’s awesome to be here. Thanks for having me. Ross: There’s so much to dive into, but as a starting point: you focus on human flourishing with AI, exactly. So what does that mean? Paint the big picture of AI and how it can help us to flourish as who we are and our humanity. Pat: Yeah, that’s a great question. So I’m a researcher at MIT Media Lab. I’ve been working on human-AI interaction before it was cool—before ChatGPT took off, right? So we have been asking this question for a long time: when we focus on artificial intelligence, what does it mean for people? What does it mean for humanity? I think today, a lot of conversation is about how we can make models better, how we can make technology smarter and smarter. But does that mean that we can be stupid? Does it mean that we can just let the machine be the smart one and let it take over? That is not the vision that we have at MIT. We believe that technology should make humans better. So I think the idea of human flourishing is an umbrella term that we use to describe different areas where we think AI could enhance the human experience. For me in particular, I focus on three areas: how AI can enhance human wisdom, wonder, and well-being. So: 3 W’s—wisdom, wonder, and well-being. We work on many projects to look into these areas. For example, how AI could allow a person to talk to their future self, so that they can think in the longer term, to see that future more vividly. That’s about enhancing wonder and wisdom. We think a lot about how AI can help people think more critically and analyze information that they encounter on a daily basis in a more comprehensive way. And for well-being, we have many projects that look at how AI can improve human mental health, positive thinking, and things like that. 
But at the end, we also focus on AI that doesn’t lead to human flourishing, to balance it out. We study in what contexts human-AI interaction leads to negative outcomes—like people becoming lonelier, or experiencing false memories, misinformation, and things like that. As scientists, we’re not overly optimistic or pessimistic. We’re trying to understand what’s going on and how we can design a better future for everyone. That’s what we’re trying to focus on. Yeah? Ross: Fabulous. And as you say, there are many, many different projects and domains of research which you’re delving into. So I’d like to start to dive into some of those. One that you mentioned was the Future You project. So I’d love to hear about what that is, how you created it, and what the impact was on people being able to interact with their future selves. Pat: Totally. So, I mean, as I said, right, the idea of human flourishing is really exciting for us. And in order to flourish, like, you cannot think short term. You need to think long term and be able to sort of imagine: how would you get there, right? So as a kid, I was interested in sort of a time machine. Like, I loved dinosaurs. I wanted to go back into the past and also go into the future, see what would happen in the future, like the exciting future we might have. So I really love this idea of, like, having a time machine. And of course, we cannot build a real time machine yet, but we can make a simulation of a time machine that uses a person’s personal data and can extrapolate that, and use other data to kind of see, okay, if the person has this current behavior, things that they care about, what would happen down the road—like what would happen in the future. So we built an AI simulation that is a digital twin of a person. And we first ask people to kind of provide us with some basic information: their aspirations, things that they want to achieve in the future. And then we use the current behavior that they have to kind of create what we call a synthetic memory, or a memory that that person might have in the future, right? So normally, memory is something that you already experienced. But in this case, because we want to simulate the future self, we need to build memory that you did not experience yet but might actually experience in the future. So we use a language model combined with the information that the person gives us to create this sort of intermediary representation of the person’s experience, and then feed that into a model that then allows us to create a human-like conversation. And then we also age the image of the person. So when the person uploads the image, we also use a visual model that can kind of create an older representation of that person. And then, combining these together, we are creating an AI-simulated future self that people can have a conversation with. So we have been working with psychologists—Professor Hal Hershfield from UCLA—who looks at the concept of future self-continuity, which is a psychological concept that measures how well a person can vividly imagine their future self. And he has shown that if you can increase this future self-continuity, people tend to have better mental health, better financial savings, better decisions, because they can kind of think for the long term, right? So we did this experiment where we created this future self system and then tested it with people and compared it with a regular chatbot and with no intervention at all. 
And we have shown that this future self intervention can increase future self-continuity and also reduce people’s anxiety as well. So they become much more of a future thinker—not only think about today’s situation, but can see the possibility of the future and have better mental health overall. So I think this is really exciting for us, because we built a new type of system, but also really showed that it had a positive impact in the real world. Ross: What were the ranges of ages of people who were involved in this research? Pat: Yeah, so right now, the prototype that we developed is for younger population—people that just finished college or people that just finished high school, people that still need to think about what their future might look like, people that still would benefit from having ability to kind of think in the longer term. And right now, we actually have a public demo that everyone can use. So people can go to our website and then actually start to use it. You can also volunteer the data for research as well. So this is sort of in the wild, or in the real world study. That’s what we are doing right now. So if people like to volunteer the data, then we can also use the data to kind of do future research on this topic. But right now, the system has been used by people in over 190 countries, and we are really excited for this research to be in the real world and have people using it. Ross: Fabulous. We’ll have the link in the show notes. So, one of the other interesting aspects raised across your research is the potential positive impact of AI on motivation. I think that’s a really interesting point. Because, classically, if you think about the future of education, AI can have custom learning pathways and so on. But the role of the human teachers, of course, is to inspire and to motivate and to engage and so on. So I’d love to hear about how you’re using AI to develop people’s positive motivation. Pat: Yeah, that’s a really great question. And I totally agree with you that the role of the teacher is to inspire and create this sort of positive reinforcement or positive encouragement for the student, right? We are not trying to replace that. Our research is trying to see what kind of tools the teacher can use to improve student motivation, right? And I think today, a lot of people have been asking, like, well, we have AI that can do so many things—why do we need to learn, right? And we believe at MIT that learning is not just for the benefit of getting a job or for the benefit that you will have a good life, but it’s good for personal growth, and it’s also a fun process, right? Learning something allows you to feel excited about your life—like, oh, you can now do this, even though AI can do that. I mean, a car can also go from one place to another place, but that doesn’t mean we should stop walking, right? Or you can go to a restaurant and a professional chef can cook for you, but it’s also a very fun thing to cook at home, right? With your loved ones or with your family, right? So I think learning is a really important process of being human, and AI could make that process even more interesting and even more personal, right? We really emphasize a lot on the idea of personalized learning, which means that learning can be tailored to each individual. People are very different, right? We learn in different ways. We care about different things. And learning is also about connecting the dots—things that we already know and new things that we haven’t learned before. 
How do we connect that dot better? So we have built many AI systems that try to address these. The first project we looked at was what happens if we can create virtual characters that can work with teachers to help students learn new materials. They can be a guest lecturer, they could be a virtual tutor that students can interact with in addition to their real teacher, right? And we showed that by creating characters based on the people that students like and admire—like, at that time, I think people liked Elon Musk a lot (I don’t know about now; I think we would have a different story)—but at that time, Elon Musk was a hero to many people. So we showed that if you learn from virtual Elon Musk, people have a higher level of learning motivation, and they want to learn more advanced material compared to a generic AI. So personalization, in this case, really helped with enhancing personalized feeling and also learning motivation and positive learning experience. We have shown this across different educational measures. Another project we did was looking at examples, right? When you learn things, you want examples to help you understand the concept, right? Sometimes concepts can be very abstract, but when you have examples, that’s when you can start to connect it with the real world. Here we showed that if we use AI to create examples that resonate with the student’s interests—like if they love Harry Potter, or, I don’t know, like Kim Kardashian, or whatever—Minecraft or whatever things that people like these days, right? Well, I feel like an old person now, but yeah, things that people care about. If you create an example using elements that people care about, we can also make the lesson more accessible and exciting for people as well, right? So this is a way that AI could make learning more positive and more fun and engaging for students. Yeah. Ross: So one of the domains you’ve looked at is augmented reasoning. And so I think it’s a particularly interesting point now. In the last six months or so, we’ve all talked about reasoning models with large language models—or perhaps “reasoning” in quotation marks. And there are also studies that have shown in various guises that people do seem to be reducing their cognitive engagement sometimes, whether they’re overusing LLMs or using them in the wrong ways. So I’d love to hear about your research in how we can use AI to augment reasoning as well as critical thinking capabilities. Pat: That’s a great question. I mean, that’s going back to what I said, right? Like, what does it mean for humans to have smart models around us? Does it mean we can be stupid? I think that’s a degradation of humans, right? We should not make technology so that we can be stupid. We should make technology so we can be even smarter, right? So I think the end goal of having a machine or models that can do reasoning for us, rather than enhance our reasoning capability—I think that’s the wrong goal, right? And again, if you have the wrong outcome or the wrong measurement, you’re gonna get the wrong thing. So first of all, you need to align the goal in the right direction. That’s why, in my PhD research, I really want to focus on things that ultimately have positive impact on people. AI models continue to advance, but sometimes humans don’t advance with the AI models, right? So in this case, reasoning is something that’s very, very critical. You can trace it back to ancient Greek. 
Socrates talked a lot about the importance of questioning and asking the right question, and always using this critical thinking process—not taking things at face value, right? We have been working on systems—again, the outcome of human-AI interaction can be influenced by both human behavior and AI behavior, right? So we can design AI systems that engage people in critical thinking rather than doing the critical thinking for them. That could be very dangerous, right? These systems right now don’t really have real reasoning capability. They’re doing simulated reasoning. And sometimes they get it right because, on the internet, people have already expressed reasoning and thinking processes. If you repeat that, you can get to the right answer. I mean, the internet is bigger than we imagined. I think that’s what the language models show us—that there’s always something on the internet that allows you to get to the right answer. You have powerful models that can learn those patterns, right? So these models are doing simulated reasoning, which means they don’t have real understanding. Many people have shown that right now—that even though these systems perform very well on benchmarks, in the real world they still fail, especially with things that are very unique and very critical, right? So in that case, the model, instead of doing the reasoning for us, could make us have better reasoning by teaching us the critical thinking process. And there are many processes for that. Many schools of thought. We have looked at two processes. One of them is in a project called Wearable Reasoner. We made a wearable device—like wearable smart glasses—with an AI agent that runs the process of verifying statements that people listen to, and identifies and flags when a statement has no evidence to support it, right? This is really, really important—especially if you love political speeches, or you love watching advertisements or TikTok. Because right now, social media is filled with statements that sound so convincing but have no evidence whatsoever. So this type of system can help flag that. Because, as humans, we tend to go—or we tend to follow along—if things sound reasonable, sound correct, sound persuasive, we tend to go with them. But just because things sound persuasive or correct doesn’t mean they’re correct, right? They can use all sorts of heuristics and other fallacies to get you to fall into that trap. So our system—the AI—can be the one that follows along and helps flag that for us. We have shown that when people wear these glasses, when the AI helps them think through the statements they listen to, people tend to agree more with statements that are well-reasoned and have evidence to support them, right? So we can show that we can nudge people to pay more attention to the evidence part of the information they encounter. That’s one project. Another project—we borrowed the technique from Socrates, the ancient Greek philosopher. We showed that if the AI doesn’t give the answer to people right away but rather asks a question back—it’s kind of counterintuitive, like, well, people need to arrive at that information for themselves—we showed that when the AI asked questions, it improved people’s ability to discern true information from false information better than the AI giving the correct answer. Which some people might ask: why is that the case? And I think it’s because people already have the ability. Many of us already have the ability to discern information. 
We are just being distracted by other things. So when the AI asks a question, it can help us focus on things that matter—especially if the AI frames the information in a way that makes us think, right? For example, if there is a statement like “Video games lead to people becoming more violent,” and the evidence is “a gamer slapped another last week”—if the AI starts to frame that into: “If one person slaps another person, does that mean that every gamer will become violent after playing video games?” And then you start to realize that, oh, now there’s an overgeneralization. You’re using the example of one to overgeneralize to everyone, right? If the AI frames the statement into a question like this, some people will be able to come up with the answer and discern for themselves. And this not only allows them to reach the right and correct answer but also strengthens their process as well, right? It’s kind of like AI creating or scaffolding our critical thinking so that our critical thinking muscle can be strengthened, right? So I think this is a really important area of research. And there is much more research coming out that shows how we can design AI systems that enhance critical thinking rather than doing the critical thinking for us. Ross: So in a number of other domains, there’s been research which has shown that whilst in some contexts AI can produce superior cognition or better thinking abilities, when the AI is withdrawn, people revert. So one of the things is not only using AI in the enhancement process, but post-AI—to actually enhance the norms, so that when you don’t have the AI, you’re still able to enhance your critical thinking. So has that been demonstrated, or is that something you would look at? Pat: Yeah, that’s a really important question. We haven’t looked at a study in that sort of domain—what happens when people stop using the AI, or what happens when the AIs are being removed from people—but that’s something that is part of the research roadmap that we are doing. At MIT right now, there’s a new research effort called AHA. We want to create aha moments, but AHA also stands for Advancing Humans with AI. And the emphasis is on advancing humans, right? AI is the part that’s supposed to help humans advance. So the focus is on the humans. We have looked at different research areas. We’ve already been doing a lot of work in this, but we are creating this roadmap for what future AI researchers need to focus on—and this is part of it. This is the point that you just mentioned: the idea of looking at what happens when the AI is removed from the equation, or when people no longer have access to the technology. What happens to their cognitive process and their skills? That is a really important part that is part of our roadmap. And so, for the audience out there—this April 10 is when we are launching this AHA research program at MIT. We have a symposium that everyone can watch. It’s going to be streamed online on the MIT Media Lab website. You can go to aha.media.mit.edu and see this symposium. The theme of this symposium is: Can we design AI for human flourishing? And we have great speakers from OpenAI and Microsoft. We have great thinkers like Geraldine, Tristan Harris, Sherry Turkle, Arianna Huffington, and many amazing people who are joining us to really ask this question. 
And hopefully, we hope that this kind of conversation will inspire the larger AI researchers and people in the industry to ask the important question of AI for human flourishing—not just AI for AI’s sake, or AI for technological advancement’s sake. Ross: Yeah, I’ve just looked at the agenda and the speakers—this is mind-boggling. Looks like an extraordinary conference, and I’m very much looking forward to seeing the impact that that has. So one of the other things I’m very interested in is this intersection of agents—AI agents, multi-agents—and collective intelligence. And as I often say, and you very much manifested in your work, this is not about multi-agent as a stack of different AI agents around. It’s saying, well, there are human agents, there are AI agents—so how can you pull these together to get a collective intelligence that manifests the best of both? A group of people and AI working together. So I’d love to hear about your directions and research in that space. Pat: Yeah, there is a lot of work that we are doing. And in fact, my PhD advisor, Professor Pattie Maes, is credited as one of the pioneers of software agents. And she is actually receiving the Lifetime Achievement Award in ACM SIGCHI, which is the special interest group in human-computer interaction—this is in a couple of months, actually. So it’s awesome and amazing that she’s being recognized as the pioneer of this field. But the question of agents, I think, is really interesting, because right now, the terminology is very broad. AI is a broad term. AGI is an even broader term. And “agent”—I don’t know what the definition is, right? I mean, some people argue that it’s a type of system that can take action on behalf of the user, so the user doesn’t need to supervise. This means doing things autonomously. But there are different degrees of autonomy—like things that may require human approval, or things that can just do things on their own. And it can be in the physical world, or the digital world, or in between, right? So the definition of agent is pretty broad. But I think, again, going back to the question of what is the human experience of interacting with this agent—are we losing our agency or the sense of ownership? We have many projects that look into and investigate that. For example, in one project, we design new form factors or new interaction paradigms for interacting with agents. This is a project we worked on with KBTG, which is one of the largest banks in Asia, where we’re trying to help people with financial decisions. If you ask a chatbot, you need to pass back and forth a lot of information—like you need a bank statement, or your savings, or all these accounts. A chatbot is not the right modality. You could have an AI agent that interacts with people in the task—like if you’re planning your financial spending, or investment, or whatever. The AI could be another hand or another pointer on screen. You have your pointer, right? But the AI can be another pointer, and then you can talk to that pointer, and you can feel like there are two agents interacting with one another. And we showed that—even just changing, using the same exact model—but changing the way that information is flowing and visualized to the user, and the way the user can interact with the agent, rather than going from one screen, then going to the chatbot, typing something, and then going back… Now, the agent has access to what the user is doing in real time. 
And because it’s another pointer, it can point and highlight things that are important at the moment to help steer the user toward things that are critical, or things they should pay attention to, right? We showed that this type of interaction reduces cognitive load and makes people actually enjoy the process even more. So I think the idea of an agent is not a system by itself. It’s also the interaction between human and agent—and how can we design it so that it feels like a positive collaboration, rather than a delegation that feels like people are losing some agency and autonomy, right? So I think this is a really, really important question that we need to investigate. Yeah? Ross: Well, the thing is, it is a trust—a relationship of trust, essentially. So you and it. So there’s the nature of the interface between the human, who is essentially trusting an agent—an agent to act on their behalf—and they’re able to do things well, that they’re able to represent them well, that they check nothing’s missed. And so this requires a rich—essentially, in a way—emotional interface between the two. I think that’s a key part of that when we move into multi-agent systems, where you have multiple agents, each with their defined roles or capabilities, interacting. This comes, of course—MIT also has a Center for Collective Intelligence. I mean, I’d love to sort of wonder what the intersections between your work and the Center for Collective Intelligence might be. Pat: Well, one thing that I think both of our research groups focus on is the idea of intelligence not as things that already happen in technologies, but things that happen collectively—at the societal level, or at the collective level. I think that should be the ultimate goal of whatever we do, right? You should not just make the machine more intelligent, but how do we enhance the overall intelligence? And I think the question also is: how do we diversify human intelligence as well, right? Because you can be intelligent in a narrow area, but in the real world, problems are very complex. You don’t want everyone to think in the same way. I mean, there are studies showing that on the individual level, AI can make people’s essays better. But if you look across different essays written by people assisted by AI, they start to look the same—which means that there is an individual gain, but a collective loss, right? And I think that’s a big problem, right? Because now everyone is thinking in the same way. Well, maybe everyone is a little bit better, but if they’re all the same, then we have no diverse solutions to the bigger problems. So one project we looked into is how we can use AI that has the opposite values to a person—to help make people think more diversely. If you like something, the AI could like the other thing, and then make the idea something in between. Or, if you are so deep into one thing, the AI could represent the broader type of intelligence that gets you out of your depth, basically. Or, if you are very broad, maybe the AI will go deep in one direction—so complementing your intelligence in a way. And we have shown that this type of AI system can really drive collaboration in a direction that is very diverse—very different from the user. But at the same time, if you have an AI that is similar to the person—like it has the same values, same type of intelligence—it can make them go even deeper. 
In the sense that if you have a bias toward a certain topic, and the AI also has a bias on the same topic as you, it can make that go even further. So again, it’s really about the interaction—and what type of intelligence do we want people to interact with? And what are the outcomes that we care about, whether it’s individual or collective? I think these are design choices that need to be studied and evaluated empirically. Yeah. Ross: That’s fantastic. I mean, I have a very deep belief in human uniqueness. I think we’re all far more unique than almost anybody realizes. And society basically makes us look and act more the same. So AI is perhaps a far stronger force in sort of pulling us together—society already is that, yeah. But I mean, to that point of saying, well, I may have a unique way of thinking, or just unique perspectives—and so, I mean, you’re talking about things where we can actually draw out and amplify and augment what it is that is most unique and individual about each of us. Pat: Right, totally. And I mean, I think the former CEO of Google, right, he has said at one point that, why would an individual—why would a person—want to talk to another person when you can talk to an AI that is 100,000 million people at the same time, right? But I feel like that’s a boring thing. Because the AI could take on any direction. It doesn’t have an opinion of its own, right? But because a human is limited to their own life experience up until that point, it gives us a unique perspective, right? When things are everything, everywhere, all at once, it’s like generic and has no perspective of its own. I think each individual person—whether it’s the things they’re living through, things that influence their life, things they grew up with—has that sort of story that made them unique. I think that’s more—to me, that is more interesting, and I think it’s what we should preserve, not try to make everything average out. So for me, this is the thing we should amplify. And again, I talk a lot about human-AI interaction, because I feel like the interaction is the key—not just the model capability, but how it interacts with people. What features, what modality it actually uses to communicate with people. And I think this question of interaction is so interdisciplinary. You need to learn a lot about human behavior, psychology, AI engineering, system design, and all of that, right? So I think that’s the most exciting field to be in. Ross: Yeah, it’s fantastic. So in the years to come, what do you find most exciting about what the Advancing Humans with AI group could do? Pat: Well, I mean, many big ideas or aha moments that we want to create—definitely. We actually have an exciting project being announced tomorrow with one of the largest AI organizations or companies in the world. So please watch out for that. There’s new, exciting research in that direction, happening at scale. So there’s a big project that’s launching tomorrow, which is March 21. So if this is after that, yeah. I think one thing that we are working on is—we’re collaborating with many organizations, trying to focus and make them not just think about AGI, but think about HGI: Human General Intelligence. You know, what would happen to human general intelligence? We want everyone to flourish—not machines to flourish. We want people to flourish, right? To kind of steer many of the organizations, many of the AI companies, into thinking this way. And in order to do that, we first need a new type of benchmark, right? 
We have a lot of benchmarks on AI capabilities, but we don’t have any benchmarks on what happens to people after using the AI, right? So we need new benchmarks that can really show whether the AI makes people depressed, or whether it empowers and enhances these human qualities—these human experiences. We need to design new ways to measure that, especially when they’re using the AI. Second, we need to create an observatory that allows us to observe how people are evolving—or co-evolving—with AI around the world. Because AI affects different groups of people differently, right? We had a study showing that—this is kind of funny—but people talk about AI bias, that it’s biased toward certain genders, ethnicities, and so on. We did a study showing that, if you remove all the factors, just by the name of people, the AI will have a bias based on the name—or just the last name, right? If you have a famous last name, like Trump or Musk, the AI tends to favor those people more than people who have a generic or regular last name. And this is kind of crazy to me, because you can get rid of all the demographic information that we say causes bias, and just the name of a person already can lead to that bias. So we know that AI affects people differently. We need to design this type of observatory that we will deploy around the world to measure the impact of AI on people over time—and whether that leads to human flourishing or makes things worse. We don’t have empirical evidence for that right now. People are in two camps: the optimistic camp, saying AI is going to bring prosperity, we don’t need to care, we don’t need to regulate. And another group saying AI is going to be the worst thing—existential crisis, human extinction. We need to regulate it and stop it. But we don’t have real scientific empirical evidence on humans at scale. So that’s another thing that MIT’s Advancing Humans with AI program is going to do. We’re going to try to establish this observatory so that we can inform people with scientific evidence. And finally, what I think is the most exciting thing: right now, we have so many papers published on AI—more than any human can read, maybe more than any AI can be trained on. Because every minute there’s a new paper being published, right? And people don’t know what is going on. Maybe they know a little bit about their area, or maybe some papers become very famous, but we want to design an Atlas of Human-AI Interaction—a new type of AI for science that allows us to piece together different research papers that come out so that we have a comprehensive view of what is being researched. What are we over-researching right now? We had a preliminary version of this Atlas, and we showed that people right now do a lot of research on trust and explanation—but less so on other aspects, like loneliness. For example, that AI chatbots might make people lonely—very little research has gone into that. So we have this engine that’s always running. When new papers are being published, the knowledge is put into this knowledge tree. So we see what areas are growing, what areas are not growing, every day. And we see this evolve as the research field evolves. Then I think we will be able to have a better comprehension of when AI leads to human flourishing—or when it doesn’t—and see what is being researched, what is being developed, in real time. So these are the three moonshot ideas that we care about right now at MIT Media Lab. Yeah. Ross Dawson: Fantastic. I love your work—both you and all of your colleagues. 
This is so important. I’m very grateful for what you’re doing, and thanks so much for sharing your work on The Amplifying Cognition Show. Pat Pataranutaporn: Thank you so much. And I’m glad that you are doing this show to help people think more about this idea of amplifying human cognition. I think that’s an important question and an important challenge for this century and the future century as well. So thank you for having me. Bye. The post Pat Pataranutaporn on human flourishing with AI, augmenting reasoning, enhancing motivation, and benchmarking human-AI interaction (AC Ep82) appeared first on Amplifying Cognition.
Mar 19, 2025 • 31min

Amplifying Foresight Compilation (AC Ep81)

Sylvia Gallusser, CEO of Silicon Humanism, and futurist Jack Uldrich discuss the vital role of AI in enhancing forecasting accuracy, revealing surprising improvements in human forecasters even with biased AI advice. They explore future thinking as an everyday practice for sensing changes and collectively imagining what’s next. Uldrich emphasizes the importance of understanding how we create and envisioning a better future, while both guests highlight unlearning as essential for success in the digital age.
Mar 12, 2025 • 32min

AI for Strategy Compilation (AC Ep80)

Valentina Contini, an innovation strategist and technofuturist, joins a discussion on the transformative power of AI in strategic decision-making. She emphasizes how AI can accelerate foresight processes and enhance cognitive complexity by managing vast data sets. The conversation reveals how AI provokes fresh thinking in boardroom dynamics and highlights the need for human critical thinking alongside AI insights. Innovative workshop techniques are explored to foster diverse perspectives, bridging human intuition with AI capabilities.
Mar 5, 2025 • 31min

Collective Intelligence Compilation (AC Ep79)

“Collective intelligence is the ability of a group to solve a wide range of problems, and it’s something that also seems to be a stable collective ability.” – Anita Williams Woolley “When you get a response from a language model, it’s a bit like a response from a crowd of people. It’s shaped by the collective judgments of countless individuals.” – Jason Burton “Rather than just artificial general intelligence (AGI), I prefer the term augmented collective intelligence (ACI), where we design processes that maximize the synergy between humans and AI.” – Gianni Giacomelli “We developed Conversational Swarm Intelligence to scale deliberative processes while maintaining the benefits of small group discussions.” – Louis Rosenberg About Anita Williams Woolley, Jason Burton, Gianni Giacomelli, & Louis Rosenberg Anita Williams Woolley is the Associate Dean of Research and Professor of Organizational Behavior at Carnegie Mellon University’s Tepper School of Business. She received her doctorate from Harvard University, with subsequent research including seminal work on collective intelligence in teams, first published in Science. Her current work focuses on collective intelligence in human-computer collaboration, with projects funded by DARPA and the NSF, focusing on how AI enhances synchronous and asynchronous collaboration in distributed teams. Jason Burton is an assistant professor at Copenhagen Business School and an Alexander von Humboldt Research fellow at the Max Planck Institute for Human Development. His research applies computational methods to studying human behavior in a digital society, including reasoning in online information environments and collective intelligence. Gianni Giacomelli is the Founder of Supermind.Design and Head of Design Innovation at MIT’s Center for Collective Intelligence. He previously held a range of leadership roles in major organizations, most recently as Chief Innovation Officer at global professional services firm Genpact. He has written extensively for media and in scientific journals and is a frequent conference speaker. Louis Rosenberg is CEO and Chief Scientist of Unanimous A.I., which amplifies the intelligence of networked human groups. He earned his PhD from Stanford and has been awarded over 300 patents for virtual reality, augmented reality, and artificial intelligence technologies. He has founded a number of successful companies including Unanimous AI, Immersion Corporation, Microscribe, and Outland Research. His new book Our Next Reality on the AI-powered Metaverse is out in March 2024. 
Websites: Gianni Giacomelli Louis Rosenberg University Profile: Anita Williams Woolley Jason Burton LinkedIn Profile: Anita Williams Woolley Jason Burton Gianni Giacomelli Louis Rosenberg What you will learn Understanding the power of collective intelligence How teams think smarter than individuals The role of ai in amplifying human collaboration Memory, attention, and reasoning in group decision-making Why large language models reflect collective intelligence Designing synergy between humans and ai Scaling conversations with conversational swarm intelligence Episode Resources People Thomas Malone Steve Jobs Concepts & Frameworks Transactive Memory Systems Reinforcement Learning from Human Feedback (RLHF) Conversational Swarm Intelligence Augmented Collective Intelligence (ACI) Artificial General Intelligence (AGI) Technology & AI Terms Large Language Models (LLMs) Machine Learning Collective Intelligence Artificial Intelligence (AI) Cognitive Systems Transcript Anita Williams Woolley: Individual intelligence is a concept most people are familiar with. When we’re talking about general human intelligence, it refers to a general underlying ability for people to perform across many domains. Empirically, it has been shown that measures of individual intelligence predict a person’s performance over time. It is a relatively stable attribute. For a long time, when we thought about intelligence in teams, we considered it in terms of the total intelligence of the individual members combined—the aggregate intelligence. However, in our work, we challenged that notion by conducting studies that showed some attributes of the collective—the way individuals coordinated their inputs, worked together, and amplified each other’s contributions—were not directly predictable from simply knowing the intelligence of the individual members. Collective intelligence is the ability of a group to solve a wide range of problems. It also appears to be a stable collective ability. Of course, in teams and groups, you can change individual members, and other factors may alter collective intelligence more readily than individual intelligence. However, we have observed that it remains fairly stable over time, enabling greater capability. In some cases, collective intelligence can be high or low. When a group has high collective intelligence, it is more capable of solving complex problems. I believe you also asked about artificial intelligence, right? When computer scientists work on ways to endow a machine with intelligence, they essentially provide it with the ability to reason, take in information, perceive things, identify goals and priorities, adapt, and change based on the information it receives. Humans do this quite naturally, so we don’t really think about it. Without artificial intelligence, a machine only does what it is programmed to do and nothing more. It can still perform many tasks that humans cannot, particularly computational ones. However, with artificial intelligence, a computer can make decisions and draw conclusions that even its own programmers may not fully understand the basis of. That is where things get really interesting. Ross Dawson: We’ll probably come back to that. Here at Amplifying Cognition, we focus on understanding the nature of cognition. One fascinating area of your work examines memory, attention, and reasoning as fundamental elements of cognition—not just on an individual level, but as collective memory, collective attention, and collective reasoning. 
I’d love to understand: What does this look like? How do collective memory, collective attention, and collective reasoning play into aggregate cognition? Anita: That’s an important question. Just as we can intervene to improve collective intelligence, we can also intervene to improve collective cognition. Memory, attention, and reasoning are three essential functions that any intelligent system—whether human, computer, or a human-computer collaboration—needs to perform. When we talk about these in collectives, we are often considering a superset of humans and human-computer collaborations. Research on collective cognition has been running parallel to studies on collective intelligence for a couple of decades. The longest-standing area of research in this field is on collective memory. A specific construct within this area is transactive memory systems. Some of my colleagues at Carnegie Mellon, including Linda Argote, have conducted significant research in this space. The idea is that a strong collective memory—through a well-constructed transactive memory system—allows a group to manage and use far more information than they could individually. Over time, individuals within a group may specialize in remembering different information. The group then develops cues to determine who is responsible for retaining which information, reducing redundancy while maximizing collective recall. As the system forms, the total capacity of information the group can manage grows considerably. Similarly, with transactive attention, we consider the total attentional capacity of a group working on a problem. Coordination is crucial—knowing where each person’s focus is, when focus should be synchronized, when attention should be divided across tasks, and how to avoid redundancies or gaps. Effective transactive attention allows groups to adapt as situations change. Collective reasoning is another fascinating area with a significant body of research. However, much of this research has been conducted in separate academic pockets. Our work aims to integrate these various threads to deepen our understanding of how collective reasoning functions. At its foundation, collective reasoning involves goal setting. A reasoning system must identify the gap between a desired state and the current state, then conceptualize what needs to be done to close that gap. A major challenge in collective reasoning is establishing a shared understanding of the group’s objectives and priorities. If members are not aligned on goals, they may decide that their time is better spent elsewhere. Thus, goal-setting and alignment are foundational to collective reasoning, ensuring that members remain engaged and motivated over time. Ross: One of the interesting insights from your paper is that large language models (LLMs) themselves are an expression of collective intelligence. I don’t think that’s something everyone fully realizes. How does that work? In what way are LLMs a form of collective intelligence? Jason Burton: Sure. The most obvious way to think about it is that LLMs are machine learning systems trained on massive amounts of text. Companies developing these language models source their text from the internet—scraping the open web, which contains natural language encapsulating the collective knowledge of countless individuals. Training a machine learning system to predict text based on this vast pool of collective knowledge is essentially a distilled form of crowdsourcing. 
When you query a language model, you aren’t getting a direct answer from a traditional relational database. Instead, you receive a response that reflects the most common patterns of answers given by people in the past. Beyond this, language models undergo further refinement through reinforcement learning from human feedback (RLHF). The model presents multiple response options, and humans select the best one. Over time, the system learns human preferences, meaning that every response is shaped by the collective judgments of numerous individuals. In this way, querying a language model is like consulting a crowd of people who have collectively shaped the model’s responses. Gianni Giacomelli: I view this through the lens of augmentation—augmenting collective intelligence by designing organizational structures that combine human and machine capabilities in synergy. Instead of thinking of AI as just a tool or humans as just sources of data, we need to look at how to structure processes that allow large groups of people and machines to collaborate effectively. In 2023, many became engrossed with AI itself, particularly generative AI, which in itself is an exercise in collective intelligence. These systems were trained on human-generated knowledge. But looking at AI in isolation limits our understanding. Rather than just artificial general intelligence (AGI), I prefer the term augmented collective intelligence (ACI), where we design processes that maximize the synergy between humans and AI. Louis Rosenberg: There are two well-known principles of human behavior: one is collective intelligence—the idea that groups can be smarter than individuals if their input is harnessed effectively. The other is conversational deliberation—where groups generate ideas, debate, surface insights, and solve problems through discussion. However, scaling these processes is difficult. If you put 500 people in a chat room, it becomes chaotic. Research shows that the ideal conversation size is five to seven people. To address this, we developed Conversational Swarm Intelligence, using AI agents in small human groups to facilitate discussions and relay key insights across overlapping subgroups. This allows us to scale deliberative processes while maintaining the benefits of small group discussions.   The post Collective Intelligence Compilation (AC Ep79) appeared first on Amplifying Cognition.
Feb 19, 2025 • 35min

Helen Lee Kupp on redesigning work, enabling expression, creative constraints, and women defining AI (AC Ep78)

“I’m cautiously optimistic because never before has technology been as accessible as it is now—being able to interact with machines in a way that feels so natural to us, rather than in ones and zeros or more technical ways. AI shouldn’t replace what exists but augment and enhance our creativity, helping us tap into what makes us uniquely human.” – Helen Lee Kupp About Helen Lee Kupp Helen Lee Kupp is co-founder and CEO of Women Defining AI, a community of female leaders applying and driving AI. She was previously leader of strategy and analytics at Slack and co-founder of its Future Forum. She is co-author of the best-selling book “How the Future Works: Leading Flexible Teams to do the Best Work of Their Lives”. Website: Women Defining AI LinkedIn Profile: Helen Lee Kupp What you will learn Redefining collaboration in the AI era Unlocking human potential through technology Why flexible work matters more than ever The power of diverse perspectives in AI Balancing optimism and caution in AI adoption How leaders can foster innovation from the ground up Women defining AI and shaping the future Episode Resources People Gregory Bateson Nichole Sterling  (co-founder of Women Defining AI) Companies & Organizations Women Defining AI Technical Terms & Concepts AI (Artificial Intelligence) Generative AI Large Language Model (LLM) Non-deterministic  AI policy AI adoption Machine learning (ML) Human-in-the-loop Transcript Ross Dawson: Helen, it is a delight to have you on the show. Helen Lee Kupp: It’s good to be here. I love how we first started talking over an AI research paper. It was very random but awesome. Ross: Well, that’s pushing the edges, trying to find what’s out there and see what comes on the other side. AI is emerging, and we’re sitting alongside each other. How are you feeling about today and how humans and AI are coming together? Helen: I feel cautiously optimistic, and part of that is because I’ve been in tech for so long. Prior to getting much deeper into AI, I was working on flexible work and research around how to rethink and redesign how we, as humans, collaborate in a way that is more personalized, more customized, and helps more people bring their best selves to work and do their best work. It was serendipitous that around the same time, there was an increase in AI innovation. Now, we had technology to pair with the equation of redesigning work. COVID forced us to rethink work, not just from a people and process perspective but alongside rapid technological change. I’m cautiously optimistic because never before has technology been as accessible as it is now. We can interact with machines in a way that feels so natural rather than in ones and zeros or technical ways. Ross: I’m very aligned with that. One of the things you said was “bring your best self to work.” I think of it as human potential. If we’re creating a future of work, we have potential futures that are not so great and others that are very positive, where people express more of who they are and their capabilities. How can we create organizations like that? Helen: It starts with recognizing that everyone has different preferences and work styles. Organizations, teams, and leaders need to meet people where they are rather than force them into rigid structures that worked in the past. I often share this story—I’m deeply introverted. Despite jumping onto this podcast with you, I have always been an introvert. Navigating an extroverted world takes extra energy. 
In traditional office and meeting environments, I had to work harder to show up. However, when I had more diverse formats to interact with my team and leadership, it unlocked something for me. Instead of pretending to be the loudest in the room, I could find my own ways of expressing ideas—through text, written formats, or chat. It made work easier for me. When you think about how that manifests across a team, leaders and organizations must avoid putting rigid boxes around collaboration—whether it’s the hours we work or the place where we work. Increasing flexibility enables people to express themselves and bring forward ideas that might otherwise remain hidden. Ross: That’s a compelling vision. How do you bring that to reality? What do you do inside an organization to foster and enable that? Helen: One of the tools that helped in our research on the future of work and redesigning organizations is something simple—creating a team operating manual. The act of explicitly writing down the different ways we interact as a team opens up discussions. It allows for feedback: “Does this work for you? Should we try something different?” When these conversations don’t happen, implied assumptions remain—such as the norm of working in an office from nine to five. Explicitly stating and questioning these assumptions is step one. Then, organizations should give teams and managers the flexibility to define how they work within their sub-teams. Having operating manuals, sharing what works for your team, and bubbling up insights allow for a more bottom-up approach rather than a top-down one. It treats people like adults who understand their preferences and styles. Ross: That’s really nice. PepsiCo had an initiative where teams coordinated among themselves to determine their availability and collaboration methods. I wonder if we can push that further. People are often conditioned to fit into roles and adjust to their environments. Can we help people recognize their self-imposed constraints and flourish beyond them? Helen: This is where I’m cautiously optimistic about AI and how we integrate technology into work. When people start using AI, the initial question is often, “How can I do this more efficiently?” AI is a powerful tool that shortens tasks—like a calculator removing the need for mental math. However, once people move beyond efficiency, they begin asking, “What can I do differently?” AI allows us to do things we couldn’t before. It helps break conventional thinking. For example, if you use a large language model to generate 10 variations of an idea, it removes emotional bias. It shifts the conversation from defending one perspective to evaluating multiple ideas. This fosters creative discourse and integrates seamlessly into workflows without feeling like extra work. AI should not replace what exists but augment and enhance our creativity—helping us tap into what makes us uniquely human. Ross: So, AI helps individuals bring different perspectives and expand their thinking? Helen: Exactly. One of my favorite things to do with large language models is to open up the funnel. Whether it’s brainstorming writing styles, problem-solving, or scoping solutions, AI presents multiple potential paths. This reminds us that there is no single correct answer—only possibilities to explore. Ross: Gregory Bateson said wisdom comes from multiple perspectives. We now have multiple perspectives on demand. You work with leaders to redesign organizations. What guidance do you suggest? 
How can organizations evolve from existing structures? Helen: I don’t have the perfect answer for what the shape of organizations should be. However, we’ve been transitioning from hierarchical structures to teams-of-teams for a while, with varying success. The biggest challenge is breaking out of our mental paradigms of control. Flexible work means allowing managers and teams to design their workdays and collaboration methods rather than enforcing a company-wide approach. AI introduces another paradigm shift—it behaves unpredictably compared to traditional technology. Leaders must accept that they don’t have all the answers. Some of the best AI-driven innovations come from employees who work closely with the technology daily. For example, a data scientist evaluating AI’s role in data processing can quickly identify where it adds value and where it falls short. These innovations emerge at the edges, from individuals experimenting in real time. Leaders must create environments where experimentation, sharing, and collaboration thrive. Instead of dictating policies top-down, they should spotlight grassroots innovations and scale them across the organization. Ross: So, you’re describing emergence—where leaders set conditions for innovation rather than dictate precise rules? Helen: Exactly. Constraints breed creativity. If there are no guardrails or structures, people stick to the status quo and don’t innovate. Leaders must provide the right nudges—whether through hackathons, dedicated experimentation time, or open Slack channels to share discoveries. Some organizations set up “experiment hours”—weekly meetings where teams explore AI applications in a low-pressure, fun environment. This fosters creativity and keeps innovation moving. Ross: That’s a great example. Speaking of multiple perspectives, one of your recent ventures is Women Defining AI. What is it about? Helen: Women Defining AI started as an experiment about a year and a half ago. I had been working with generative AI models and noticed a significant gender gap in AI adoption. Data showed men adopting AI at higher rates than women, and anecdotally, I saw the same trend. Initially, it was just a study group where I shared what I was learning with other women. Within days, 50 people joined, and by month two, we had 150 members. It became clear that women wanted a space to ask questions, learn together, and experiment without judgment. Now, Women Defining AI is a virtual community that helps women at different stages of their AI journey. Whether it’s understanding AI’s role in their work, automating tasks, or building solutions, we guide them in gaining technical confidence and shaping the field. Some members have landed AI-related jobs or joined AI policy teams at their organizations. Having diverse perspectives in AI is crucial. Women in our community, particularly those from HR and other industries, quickly identify biases and blind spots that might otherwise go unnoticed. We need more voices questioning and shaping AI while we’re still in its early stages. Ross: That’s fantastic. Looking ahead to 2026, what excites you most? Helen: Personally, I’m excited about having our third baby! It’s a reminder of the new perspectives each generation brings. For Women Defining AI, 2025 will be the year we build in public. We’ve been experimenting and learning internally, but now we’re sharing real stories and projects to inspire more builders and technologists. Ross: That’s fantastic. Thank you for your time, insights, energy, and passion. 
Helen: Thanks for having me.
Feb 12, 2025 • 26min

Human AI Symbiosis Compilation (AC Ep77)

In this discussion, Alexandra Diening, Co-founder of the Human-AI Symbiosis Alliance, and Mohammad Hossein Jarrahi, Associate Professor at UNC Chapel Hill, unpack the nuances of human-AI interactions. They explore the delicate balance between automating tasks and augmenting human capabilities, emphasizing the irreplaceable role of human intuition in critical decisions. The conversation highlights the need for responsible AI practices to foster positive symbiosis and advocates for a future where humans and AI co-evolve, reshaping our definitions of knowledge and existence.
Feb 5, 2025 • 33min

Rita McGrath on inflection points, AI-enhanced strategy, memories of the future, and the future of professional services (AC Ep76)

Rita McGrath, a top expert on strategy and innovation and Professor at Columbia Business School, shares her insights on the intersection of human creativity and AI. She discusses how AI can enhance strategic decision-making and help navigate transient competitive advantages. McGrath highlights the significance of inflection points in business evolution and their effects on consumer behavior. She also reimagines the future of work, advocating for a human-centric approach in an AI-driven landscape and emphasizing the importance of continuous learning and collaboration.
Jan 29, 2025 • 34min

Christian Stadler on AI in strategy, open strategy, AI in the boardroom, and capabilities for strategy (AC Ep75)

Christian Stadler, a strategic management professor at Warwick Business School and author of 'Open Strategy,' dives into the transformative role of AI in decision-making. He emphasizes AI as a co-strategist that enhances boardroom discussions rather than replaces human judgment. The conversation covers the shift toward open strategy, highlighting how diverse perspectives drive innovation and improve execution. Stadler also discusses the need for political awareness in leadership and for engaging employees to foster a culture of innovation.
