“Let technology do the bits that technology is really good at. Offload to it. Then over-index and over-amplify the human skills we should have developed over the last 10, 15, or 20 years.”
– Kieran Gilmurray
About Kieran Gilmurray
Kieran Gilmurray is CEO of Kieran Gilmurray and Company and Chief AI Innovator of Technology Transformation Group. He works as a keynote speaker and fractional CTO, delivering transformation programs for global businesses. He is the author of three books, most recently Agentic AI. He has been named a top thought leader on generative AI, agentic AI, and many other domains.
Website: kierangilmurray.com
X Profile: Kieran Gilmurray
LinkedIn Profile: Kieran Gilmurray
BOOK: Free chapters from Agentic AI by Kieran Gilmurray
Chapter 1: The Rise of Self-Driving AI
Chapter 2: The Third Wave of AI
Chapter 3: Agentic AI – Mapping the Road to Autonomy
Chapter 4: Effective AI Agents
What you will learn
Understanding the leap from generative to agentic AI
Redefining work with autonomous digital labor
The disappearing need for traditional junior roles
Augmenting human cognition, not replacing it
Building emotionally intelligent, tech-savvy teams
Rethinking leadership in AI-powered organizations
Designing adaptive, intelligent businesses for the future
Episode Resources
People
John Hagel
Peter Senge
Ethan Mollick
Technical & Industry Terms
Agentic AI
Generative AI
Artificial intelligence
Digital labor
Robotic process automation (RPA)
Large language models (LLMs)
Autonomous systems
Cognitive offload
Human-in-the-loop
Cognitive augmentation
Digital transformation
Emotional intelligence
Recommendation engine
AI-native
Exponential technology
Intelligent workflows
Transcript
Ross Dawson: Hey, it’s fantastic to have you on the show.
Kieran Gilmurray: Absolutely delighted, Ross. Brilliant to be here. And thank you so much for the invitation, by the way.
Ross: So agentic AI is hot, hot, hot, and it's now reaching these new levels — these autonomous or semi-autonomous aspects of AI. I want to really dig into that. You've got a new book out on agentic AI, looking particularly at the future of work, and particularly at amplifying cognition in our work.
So I want to start off just by thinking about, first of all, what is different about agentic AI from generative AI, which we’ve had for the last two or three years, in terms of our ability to think better, to perform our work better, to make better decisions? So what is distinctive about this layer of agentic AI?
Kieran: I was going to say, Ross, comically, nothing if we don't actually use it. Because it's like all the technologies that have come over the last 10–15 years. We've had every technology we have ever needed to make work more efficient, more creative, more innovative, and to get teams working together a lot more effectively.
But let's be honest, technology's dirty little secret is that we as humans very often resist it. So I'm hoping that we don't resist this technology like the others we have slowly resisted in the past, though they've all come around in the end and made us work with them.
But this one is subtly different. You could say, look, agentic AI is just another artificial intelligence system. The difference with this one — if you take some of the recent, what I describe as digital workforce or digital labor, and go back eight years to robotic process automation — is that RPA was very much about helping people perform what were meant to be end-to-end tasks.
So in other words, the robots took the bulky work, the horrible work, the repetitive work, the mundane work and so on — all vital stuff to do, but not where you really want to put your teams, not where you really want to spend your time. And usually, all of that mundaneness sucked creativity out of the room.
You ended up doing it most of the day, got bored, and then never did the innovative, interesting stuff.
Agentic is still digital labor sitting on top of large language models. And the difference here, as described, is that this is meant to be able to act autonomously. In other words, you give it a goal and off it goes with minimal or no human intervention — you can design it either way, or both.
And the systems are meant to be more proactive than reactive. They plan, they adapt, they operate in more dynamic environments. They don’t really need human input. You give them a goal, they try and make some of the decisions.
And the interesting bit is, there is — or should be — human in the loop in this. A little bit of intervention.
But the piece here, unlike RPA — that was RPA 1, I should say, not the later versions because it’s changed — is its ability to adapt and to reshape itself and to relearn with every interaction.
Or take it at the most basic level — look at a robot under the sea trying to navigate, to build pipelines. In the past, it would get stuck, a human would need to intervene, and only then would it be fixed.
Now it's starting to work things out itself and determine what to do. If you take that into business, for example, you can now get a group of agentic agents to go out and do an analysis of your competitors.
You can get another agentic agent to go out and do deep research — into McKinsey, BCG, or something else. You can get another agent to bring that information back, distill it, and assemble it; get an agent to turn that into an article; get another agent to proofread it; get another agent to pop it up onto your social media channels and distribute it.
And get another agent to basically SEO-optimize it, check and reply to any comments that anyone’s making. You’re sort of going, “Here, but that feels quite human.” Well, that’s the idea of this.
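To make that chain concrete, here is a minimal sketch in Python of the kind of agent pipeline being described: research, distill, draft, proofread, then a human sign-off before anything is published. It is purely illustrative; call_llm and post_to_social are hypothetical stand-ins for whatever model and publishing APIs you actually use, and no specific agent framework is assumed.

```python
# Illustrative sketch of a chained agent pipeline: research -> distill -> draft
# -> proofread, with a human sign-off before publishing. `call_llm` and
# `post_to_social` are hypothetical placeholders, not a real vendor API.
from dataclasses import dataclass
from typing import Callable

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; here it just echoes the request."""
    return f"[model output for: {prompt[:60]}...]"

def post_to_social(text: str) -> None:
    """Stand-in for a publishing step (social channels, CMS, etc.)."""
    print("Published:", text[:80])

@dataclass
class Agent:
    name: str
    run: Callable[[str], str]  # takes the previous step's output, returns its own

agents = [
    Agent("research",  lambda topic: call_llm(f"Research competitors on: {topic}")),
    Agent("distill",   lambda notes: call_llm(f"Distil these notes into key findings:\n{notes}")),
    Agent("draft",     lambda findings: call_llm(f"Write a short article from:\n{findings}")),
    Agent("proofread", lambda article: call_llm(f"Proofread and correct:\n{article}")),
]

def run_pipeline(topic: str) -> str:
    """Each agent's output becomes the next agent's input."""
    text = topic
    for agent in agents:
        text = agent.run(text)
        print(f"{agent.name}: done")
    # Human-in-the-loop: a person approves before anything goes out.
    if input("Publish this? (y/n) ").strip().lower() == "y":
        post_to_social(text)
    return text

if __name__ == "__main__":
    run_pipeline("our top three competitors")
```

The point is the shape of it: each step is small and replaceable, and the human check sits at the end of the chain rather than inside every step.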
Now we’ve got generative AI, which creates. The problem with generative AI is that it didn’t do. In other words, after you created something, the next step was, well, what am I going to do with my creation?
Agentic AI is that layer on top where you’re now starting to go, “Okay, not only can I create — I can decide, I can do and act.” And I can now make up for some of the fragility that exists in existing processes where RPA would have broken.
Now I can sort of go from A to B to D to F to C, and if suddenly G appears, I’ll work out what G is. If I can’t work it out, I’ll come and ask a person. Now I understand G, and I’ll keep going forever and a day.
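That "work out G, or come and ask a person" behaviour is essentially an escalation pattern. A minimal sketch, again in Python, assuming a hypothetical try_to_resolve step rather than any particular product's API:

```python
# Illustrative escalation pattern: the agent handles steps it knows, and pauses
# to ask a person only when it hits something unfamiliar, remembering the answer.
from typing import Optional

KNOWN_STEPS = {"A": "run step A", "B": "run step B", "C": "run step C"}

def try_to_resolve(step: str) -> Optional[str]:
    """Hypothetical placeholder for the agent's own reasoning about a step."""
    return KNOWN_STEPS.get(step)

def handle_step(step: str) -> str:
    plan = try_to_resolve(step)
    if plan is None:
        # Human-in-the-loop fallback: ask once, then remember the answer.
        plan = input(f"Stuck on step '{step}'. What should I do? ")
        KNOWN_STEPS[step] = plan
    return plan

if __name__ == "__main__":
    for step in ["A", "B", "G"]:  # 'G' is the unexpected step from the example
        print(step, "->", handle_step(step))
```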
Why is this exciting — or interesting, I should say? Well-used, this can now make up for all the fragility of past automation systems where they always got stuck, and we needed lots of people and lots of teams to build them.
Whereas now we can let them get on with things.
Where it’s scary is that now we’re talking about potential human-level cognition. So therefore, what are teams going to look like in the future? Will I need as many people? Will I be managing — as a leader — managing agentic agents plus people?
Agentic agents can work 24/7. So am I, as a manager, now going to be expected to do that?
Its impact on what type of skills — in terms of not just leadership, but digital and data and technical and everything else — there's a whole host of questions. There are as many questions as there is new technology here, Ross.
Ross Dawson: Yeah, yeah, absolutely. And those are some of the questions I want to ask you, to get the best possible answers we have today.
And in your book, you do emphasize this is about augmenting humans. It is around how it is we can work with the machines and how they can support us, and human creativity and oversight being at the center.
But in the way you've just laid it out, there's a lot of overlap between what you've described and what is currently human work.
So just as a first step, thinking about individuals, right? Professionals, knowledge workers — there are a few layers here. You've had your tools, your Excels. You've had your assistants, which can go and do tasks when you ask them. And now you have agents, which can go through sequences and flows of work in knowledge processes.
So what does that mean today for a knowledge worker, as the enterprise starts to bring these in, or says, "Well, this is going to support you"? What are the sorts of things that are manifest now for an individual professional as this agentic workforce comes into play? What are the examples? What are ways to see how this is changing work?
Kieran Gilmurray: Yeah, well, let’s dig into that a little bit, because there’s a couple of layers to this.
If you look at what AI potentially can do through generative AI, all of a sudden, the question becomes: why would I actually hire new trainees, new labor?
On the basis that, if you look at any of the studies that have been produced recently, there are two roles, two setups. So let me take one, which is: actually, we don't need junior labor, because junior labor takes a long time to learn something.
Whereas now we’ve got generative AI and other technologies, and I can ask it any question that I want, and it’s going to give me a pretty darned good answer.
And therefore, rather than taking three and four and five years to train someone to get them to a level of competency, why don't I just put in agentic labor instead? It can do all that low-ish level work, and I don't need to spend five years on learning. I immediately have an answer.
Now, that's still in question, because the technology isn't good enough yet. It's like the first versions of the scientific calculator — they didn't quite work. Now we don't even think about it.
So there is a risk that all of a sudden, agentic AI can get me an answer, or generative AI can get me an answer, that previously would have taken six or eight weeks.
Let me give you an example.
So I was talking to a professor from Chicago Business School the other day, and he went to one of his global clients. Normally the global client will ask about a strategy item. He would go away — he and a team of his juniors and peers would research the topic over six or twelve weeks. And then they would come back with a detailed answer, where the juniors would have gone round, done all the grunt work, done all the searching and everything else, and the seniors would have distilled it down.
This time he went a different way — he's actually written a version of a GPT — and he fed it past strategy documents, and he fed in the client details.
Now he did this in a private GPT, so it was clean and clear, and in two and a half hours, he had an answer.
He literally — his words, not mine — went back to the client and said, "There you go. What do you think? By the way, I did that with generative AI and agentics."
And they went, “No, you didn’t. That work’s too good. You must have had a team on this.”
And he said, “Literally not.” And he’s being genuine, because I know the guy — he’d put his reputation on it.
So all of a sudden, now all of those roles that might have existed could be impacted.
But where do we get then the next generation of labor to come through in five and six and ten years’ time?
So there are going to be a lot of decisions that need to be made. As in: look, we've got Gen AI, we've potentially got agentic AI. We normally bring in juniors over a period of time; they gain knowledge, and as a result of gaining knowledge, they gain expertise. And as a result of gaining expertise, we get better answers, and they get more and more money.
But now Gen AI is resulting in all of that knowledge costing nothing.
So where you and I would have gone to university — let's say we did a finance degree — that would have lasted us 30 years. Career done. Tick.
Now, actually, Gen AI can pretty much understand, or will understand, everything that we can learn on a finance degree, plus a politics degree, plus an economics degree, plus, plus, plus — all out of the box for $20 a month.
And that’s kind of scary.
So when it comes to who we hire, that opens up the question now: do we have Gen AI and agentic labor, and do we actually need as many juniors?
Now, someone’s going to have to press the buttons for the next couple of years, and any foresighted firm is going to go, “This is great, but people plus technology actually makes a better answer.” I just might not need as many.
So now, when it comes to the actual hiring and decision-making — as to how am I going to construct my labor force inside of an organization — that’s quite a tricky question, if and when this technology, Gen AI and agentics, really ramps through the roof.
Ross Dawson: I mean, I think these are fundamentally strategic choices to be made. Crudely, it's automate or augment.
And you could say, well, all right, first of all, just say, “Okay, well, how do we automate as many of the current roles which we have?” Or you can say, “Oh, I want to augment all of the current roles we have, junior through to senior.”
And there are a lot more subtleties around those strategic decisions. In reality, most organizations will sit somewhere between those two extremes.
Kieran Gilmurray: 100%. And that’s the question. Or potentially, at the moment, it’s actually, “Why don’t we augment currently?”
Because the technology isn’t good enough to replace. And it isn’t — it still isn’t.
And no, I’m a fan of people, by the way — don’t get me wrong. So anyone listening to this should hear that. I believe great people plus great technology equals an even greater result.
The technology, the way it exists at the moment — and you can look at research coming out from Harvard, Ethan Mollick, HBR, Microsoft, you name it, it's all coming out at the moment — says that if you give people Gen AI technology, of which agentic AI is one component:
“I’m more creative. More productive. And, oddly enough, I’m actually happier.”
It's breaking down silos. It's allowing me to produce more output — between 10 and 40% more — and higher-quality output, and, and, and.
So at the moment, it’s an augmentation tool. But we’re training, to a degree, our own replacements.
Every time we click a thumbs up, a thumbs down. Every time we redirect the agentics or the Gen AI to teach it to do better things — or the machine learning, or whatever else it is — then technically, we’re making it smarter.
And every time we make it smarter, we have to decide, “Oh my goodness, what are we now going to do?” Because previously, we did all of that work.
Now, that for me has never been a problem. Because for all of the technologies over the decades, everybody’s panicked that technology is going to replace us.
We’ve grown the number of jobs. We’ve changed jobs.
Now, this one — will it be any different?
Actually — and this is why I say potentially — you and I never worried, and our audience never worried too much, when an EA was potentially automated. When the taxi driver was augmented and automated out of a job. When the factory worker was augmented out of a job.
Now we’ve got a decision, particularly when it comes to so-called knowledge work. Because remember, that’s the expensive bit inside of a business — the $200,000 salaries, the $1 million salaries.
Now, as an organization, I’m looking at my cost base, going, “Well, I might actually bring in juniors and make them really efficient, because I can get a junior to be as productive as a two-year qualified person within six months, and I don’t need to pay them that amount of money.”
And/or, actually, “Why don’t I get rid of my seniors over a period of time? Because I just don’t need any.”
Ross Dawson: Those are things that some leaders will do. But, I mean, it comes back to the theme of amplifying cognition. The real nub of the question is: yes, you can say, "All right, well, now we are training the machine, and the machine gets better because it's interacting. We're giving it more work."
But it’s really finding the ways in which the nature of the way we interact also increases the skills of the humans.
And so John Hagel talks about scalable learning. In fact, Peter Senge used to talk about organizational learning — and that’s no different today. We have to be learning.
And so, saying, “Well, as we engage with the AI — and as you rightly point out — we are teaching and helping the AI to learn,” we need to be able to build the process and systems and structures and workflows where the humans in it are not static and stagnant as they use AI more, but they’re more competent and more capable.
Kieran Gilmurray: Well, that’s the thing we need to do, Ross.
Otherwise, what we end up with is something called cognitive offload — where now, all of a sudden, I’ll get lazy, I’ll let AI make all of the decisions, and over time, I will forget and not be valuable.
For me, this is a question of great potential with technology. But the real question comes down to: okay, how do we employ that technology?
And to your point a second ago — what do we do as human beings to learn the skills that we need to learn to be highly employable? To create, be more innovative, more creative using technology?
Ross Dawson: That's the question I just asked you.
Kieran Gilmurray: 100%, and this is — this is literally the piece here, so—
Ross: That’s the question. So do you have any answers to that?
Kieran: No, of course. Of course. Well, mine is — it’s that.
So, for me, AI is massive — absolutely. And let me explain that, because everybody thinks it has only just arrived, if we look at generative AI over the last couple of years — but AI has been around for 80-plus years. It's what I call an 80-year-old overnight success story.
Everybody’s getting excited about it. Remember, the excitement is down to the fact that I can now interact with — or you interact with — technology in a very natural sense and get answers that I previously couldn’t.
So now, all of a sudden, we’re experts in everything across the world. And if you use it on a daily basis, all of a sudden, our writing is better, our output’s better, our social media is better.
So the first bit is: just learn how to use and how to interact with the technology.
Now, we mentioned a moment ago — but hold on a second here — what happens if everybody uses it all the time, the AI has been trained, and there's a whole host of new skills?
Well, what will I do?
Well, this for me has always been the case. Technology has always come. There are a lot fewer saddlers than there are software engineers. There might be a lot fewer software engineers in the future.
So therefore, what do we do?
Well, my one is this. All of this has been the same, regardless of the technology: let technology do the bits that technology is really good at. Offload to it.
You still need to understand or develop your digital, your AI, your automation, your data literacy skills — without a doubt. You might do a little bit of offloading, because now we don’t actually think about scientific calculators. We get on with it.
We don't go into Amazon and work out all of our product sets ourselves, because it's got a recommendation engine. So let it keep doing all its stuff.
Whereas, as humans, I want to develop greater curiosity. I want to develop what I would describe as greater cognitive flexibility. I want to use the technology — now that I’ve got this — how can I produce even better, greater outputs, outcomes, better quality work, more innovative work?
And part of that is now going, “Okay, let the technology do all of its stuff. Free up tons of hours,” because what used to take me weeks takes me days.
Now I can do other stuff, like wider reading. I can partner with more organizations. I can attempt to do more things in the day — whereas in the past, I was just too busy trying to get the day job done.
The other bits I would be saying: companies need to develop emotional intelligence in people.
Because now, if I can get the technology to do the stuff, I still need to engage with the tech. But more importantly, I'm now freed up to work across silos, to work across businesses, to bring in different partner organizations.
And statistically, only 36% of us are actually emotionally intelligent.
Now, AI is an answer for that as well — but emotional intelligence should be something I would be developing inside of an organization. A continuous innovation mindset. And I’d be teaching people how to communicate even better.
Notice I’m letting the tech do all the stuff that tech should do regardless. Now I’m just over-indexing and over-amplifying the human skills that we should have developed over the last 10, 15, or 20 years.
Ross Dawson: Yeah. And so your point — this comes back to people working together. And I think that was certainly one of the interesting parts of your book: the part around team dynamics.
So there’s a sense of, yes, we have agentic systems. This starts to change the nature of workflows. Workflows involve multiple people. They involve AI agents as well.
So as we are thinking about teams — as in multiple humans assisted by technology — what are the things which we need to put in place for effective team dynamics and teamwork?
Kieran Gilmurray: Yeah, so — so look, what you will see potentially moving forward is that mixture of agentic labor working with human labor.
And therefore, from a leadership perspective, we need people — we need to teach people — to lead in new ways. Like, how do I apply agentic labor and human labor? And what proportion? What bits do I get agentic labor to do? What bits do I get human labor to do?
Again, we can’t hand everything over to technology. When is it that I step in? Where do I apply humans in the loop?
When you look at agentic labor, it’s going to be able to do things 24/7, but as people, we physically and humanly can’t. So, how — when am I going to work? What is the task that I’m going to perform?
As a leadership or as a business — well, what are the KPIs that I’m going to measure myself on, and my team on? Because now, all of a sudden, my outputs potentially could be greater, or I’m asking people to do different roles than they’ve done in the past, because we can get agentic labor to do it.
So there's a whole host of what I would describe as current management considerations. Because, let's be honest — like when we introduced ERP, CRM, factory automation, or something else — it just changed the nature of the tasks that we perform.
So this is thinking through: where is the technology going to be used? Where should we not use it? Where should we put people? How am I going to manage it? How am I going to lead it? How am I going to measure it?
These are just the latest questions that we need to answer inside of work.
And again, from a skillset perspective — from both a leadership and getting my human labor team to do particular work, or how I onboard them — how do I develop them? What are the skills that I’m now looking for when I’m doing recruitment?
What are the career paths that I’m going to put in place, now that we’ve got human plus agentic labor working together?
Those are all conversations that managers, leaders, and team leaders need to have — and strategists need to have — inside of businesses.
But it shouldn’t worry businesses, because again, we’ve had this same conversation for the last five decades. It’s just been different technology at different times, where we had to suddenly reinvent what we do, how we do it, how we measure it, and how we manage it.
Ross Dawson: So what are specifics of how teams, team dynamics might work in using agentic AI in a particular industry or in a particular situation? Or any examples? So let’s ground this.
Kieran Gilmurray: Yeah, so let’s — let me ground it in physical robots before I come into software robots, because this is what this is: software labor, not anything else.
When you look at how factories have evolved over the years — so take Cadbury’s factory in the UK. At one stage, Cadbury’s had thousands and thousands of workers, and everybody ended up engaging on a very human level — managing people, conversations every day, orchestration, organization. All of the division of labor stuff happened.
Now, when you go into Cadbury’s factory, it’s hugely automated — like other factories around the world. So now we’re having to teach people almost to mind the robots.
Now we have far fewer people inside of our organizations. And hopefully — to God — this won't happen in what I'd describe as a knowledge worker park, but we're going to teach people how to build logical, organized, sequential things. Because breaking something down into a process to build a machine — it's the same thing when it comes to software labor.
How am I going to break it down and deconstruct a process into something else? So the mindset needed to actually put software labor into place is different from anything else that we've done.
Humans were messy. Robots can’t be. They have to be very logical pieces.
In the past, we were used to dealing with each other. Now I’m going to have to communicate with a robot. That’s a very different conversation. It’s non-human. It’s silicon — not carbon.
So how do I engage with a robot? Am I going to be very polite? And I see a lot of people saying, “Please, would you mind doing the following?” No — it’s a damn robot. Just tell it what to do. My mindset needs to change.
So if I take, in the past, when I’m asking someone to do something, I might say, “Give me three things” or “Can you give me three ideas?” Now, I’ve got an exponential technology where my expectations and requests of agentic labor are going to vary.
But I need to remember — I’m asking a human one thing and a bot another.
Let me give you an example. I might say to you, “Ross, give me three examples of…” Well, that’s not the mindset we need to adopt when it comes to generative AI. I should be going, “Give me 15, 50, 5,000,” because it’s a limitless vat of knowledge that we’re asking for.
And then I need to practice and build human judgment — to say, “Actually, I’m not going to cognitively offload and let it think for me and just accept all the answers.” But I’m now going to have to work with this technology and other people to develop that curiosity, develop that challenging mindset, to suddenly teach people how to do deeper research, to fact-check everything that I’m being told.
To understand when I should use a particular piece of information that’s been given to me — and hope to God it’s not biased, not hallucinated, or anything else — but it’s actually a valuable knowledge item that I should be putting into workflow or a project or a particular document or something else.
So again, it’s just working through: what is technology? What’s the technology in front of me? What’s it really good at? Where can I apply it?
And understanding that — where should I put my people, and how should I manage both?
What are the skills that I need to teach my people — and myself — to allow me to deal with all of this potentially fantastic, infinite amount of knowledge and activity that will hopefully autonomously deliver all the outcomes that I’ve ever wanted?
But not unfettered. And not left to its own devices — ever.
Otherwise, we have handed over human agency and team agency — and that’s not something or somewhere we should ever go. The day we hand everything to the robots, we might as well just go to the care home and give up.
Ross Dawson: We'll be doing that soon. So now, let's think about leadership.
So, I mean, you’ve alluded to that in quite a few — I mean, a lot of it has been really talking about some of the questions or the issues or the challenges that leaders at all levels need to engage with. But this changes, in a way, the nature of leadership.
As you say, you’ve got digital labor as well as human labor. The organization has a different structure. It impacts the boundaries of organizations and the flows of information and processes — cross-organizational boundaries.
So what is the shift for leaders? And in particular, what are the things that leaders can do to develop their capabilities for a somewhat different world?
Kieran Gilmurray: Yeah, it’s interesting.
So I think there’ll be a couple of different worlds here. Number one is, we will do what we’ve always done, which is: we’ll put in a bit of agentic labor, and we’ll put in a bit of generative AI, and we’ll basically tweak how we actually operate. We’ll just make ourselves marginally more efficient.
Because anything else could involve the redesign and the restructure of the organization, which could involve the restructure and the redesign of our roles. And as humans, we are very often very change-resistant.
Therefore, I don’t mind technology that I understand, and I don’t mind technology that makes me more productive, more creative. But I do mind technology that could actually disrupt how I lead, where I actually fit inside of the organization, and something else.
So for those leaders, there’s going to be a minimal amount of change — and there’s nothing wrong with that. That’s what I call the “taker philosophy,” because you go: taker, maker, shaper — and I’ll walk through those in a second — which is, I’ll just take another great technology and I’ll be more productive, more creative, more innovative.
And I recommend every business does that at this moment in time. Who wouldn’t want to be happier with technology doing greater things for you?
So go — box number one.
And therefore, the skills I’m going to have to learn — not a lot of difference. Just new skills around AI. In other words, understanding bias, hallucinations, understanding cognitive offloading, understanding where to apply the technology and not.
And by "not," I mean: very often people point technology at something that has no economic value. They waste time, waste money, waste energy, get staff frustrated, or something else. So those are just skills people have to learn — and, as I've said, it could be any technology.
The other method of doing this is almost what I describe as the COVID method. I need to explain that statement.
When COVID came about, we all worked seamlessly. It didn’t matter. There were no boundaries inside of organizations. Our mission was to keep our customers happy. And therefore, it didn’t matter about the usual politics, the usual silos, or something else. We made things work, and we made things work fast.
What I would love to see organizations doing — and very few do it — is redesign and re-disrupt how they actually work.
And I’m sitting there going, it’s not that I’m doing what I’m doing and I’ve now got a technology — “Where do I add it on?” — as in two plus one is equal to three.
What I’m sitting going and saying is: How can I fundamentally reshape how I deliver value as an organization?
And working back from the customer — who will pay a premium for this — and therefore, if I work back from the customer, how do I reconstruct my entire business in terms of leadership, in terms of people, in terms of agentic and human labor, in terms of open ecosystems and partnerships and everything else — to deliver in a way that excites and delights?
If we take the difference between a bookstore and Amazon — I never, or rarely, go into a bookstore anymore. I now buy on Amazon almost every time, without even thinking about it.
If I look at AI-native labor — they’re what I describe as Uber’s children. Their experiences of the world and how they consume are very different than what you and I have constructed.
Therefore, how do I create what you might call AI-native intelligent businesses that deliver in a way that is frictionless and intelligent?
And that means: intelligent processes, intelligent people, using intelligent technology, intelligent leadership — forgetting about silos and breakdowns and everything else that exists politically inside of organizations — but applying the best technology. Be it agentics, be it automation, be it digital, be it CRM, ERP — it doesn’t really matter what it is.
Having worked back from the customer, design an organization to deliver on its promise to customers — to gain a competitive advantage.
And those competitive advantages will last for less and less time. I can copy all the technology quicker. Therefore, my business strategy won't be 10 years. It possibly won't be five. It might be three — or even less.
But my winning as a business will be my ability to construct great teams. And those great teams will be great people plus great technology — to allow me to deliver something digitally and intelligently to consumers who want to pay a premium for as long as that advantage lasts.
And it might be six months. It might be twelve months. It might be eighteen months.
So now we’re getting to a phase of almost fast technology — just like we have fast fashion.
But the one thing we don't want to do is play fast and loose with our teams. Because ultimately, I still come back to the core of the argument — that great people who are emotionally intelligent, who've been trained to question everything that they've got, who are curious, who enjoy working as part of a team in a culture — that piece needs to be taken care of as well.
Because if you just throw robots at everything and leave very few people, then what culture are you actually trying to deliver for your staff and for your customers?
How do I get all of this work to deliver in a way that is effective, is affordable, is operationally efficient, profitable — but with great people at the core, who want to continue being curious, creating new and better ways of delivering in a better organization?
Not just in the short term — because we’re very short-termist — but how do I create a great organization that endures over the next five or ten years?
By creating flexible labor and flexible mindsets, with flexible leaders organizing and orchestrating all this — to allow me to be a successful business.
Change is happening too quickly these days. Change is going to get quicker.
Therefore, how do I develop an adaptive mindset, adaptive labor force, and adaptive organization that’s going to survive six months, twelve months — and maybe, hopefully to God, sixteen months plus?
Ross Dawson: Fantastic. That’s a great way to round out. So where can people find out more about your work?
Kieran Gilmurray: Yeah, look, I’m on LinkedIn all the time — probably too much. I should get an agentic labor force to sort that out for me, but I’d much prefer authentic relationships than anything else.
Find me on LinkedIn — Kieran Gilmurray. I think there are only two of me: one’s in Scotland, who is related some way back, and the Irish one.
Or www.kierangilmurray.com is where I publish far too much stuff and give far too much away for free. But I have a philosophy that says all boats rise on a rising tide. So the more we share, the more we give away, the more we benefit each other.
So that’s going to continue for quite some time.
I have a book out on agentic AI. Again, it’s being given away for free. Ross, if you want to share it, please go for it, sir, as well.
As I said, let’s continue this conversation — but let’s continue this conversation in a way that isn’t about replacing people. It’s about great leadership, great people, and great businesses that have people at their core, with technology serving us — not us serving the technology.
Ross: Fabulous. Thanks so much, Kieran.
Kieran: My pleasure. Thanks for the invite.