

Humans + AI
Ross Dawson
Exploring and unlocking the potential of AI for individuals, organizations, and humanity

Dec 10, 2025 • 37min
Nicole Radziwill on organizational consciousness, reimagining work, reducing collaboration barriers, and GenAI for teams (AC Ep26)
“Let’s get ourselves around the generative AI campfire. Let’s sit ourselves in a conference room or a Zoom meeting, and let’s engage with that generative AI together, so that we learn about each other’s inputs and so that we generate one solution together.”
–Nicole Radziwill
About Nicole Radziwill
Nicole Radziwill is Co-Founder and Chief Technology and AI Officer at Team-X AI, which uses AI to help team members work more effectively with each other and with AI. She is also a fractional CTO/CDO/CAIO and holds a PhD in Technology Management. Nicole is a frequent keynote speaker and the author of four books, most recently “Data, Strategy, Culture & Power”.
Website:
team-x.ai
qualityandinnovation.com
LinkedIn Profile:
Nicole Radziwill
X Profile:
Nicole Radziwill
What you will learn
How the concept of ‘Humans Plus AI’ has evolved from niche technical augmentation to tools that enable collective sense making
Why the generative AI layer represents a significant shift in how teams can share mental models and improve collaboration
The importance of building AI into organizational processes from the ground up, rather than retrofitting it onto existing workflows
Methods for reimagining business processes by questioning foundational ‘whys’ and envisioning new approaches with AI
The distinction between individual productivity gains from AI and the deeper organizational impact of collaborative, team-level AI adoption
How cognitive diversity and hidden team tensions affect collaboration, and how AI can diagnose and help address these barriers
The role of AI-driven and human facilitation in fostering psychological safety, trust, and high performance within teams
Why shifting from individual to collective use of generative AI tools is key to building resilient, future-ready organizations
Episode Resources
Transcript
Ross Dawson: Nicole, it is fantastic to have you on the show.
Nicole Radziwill: Hello Ross, nice to meet you. Looking forward to chatting.
Ross Dawson: Indeed, so we were just having a very interesting conversation and said, we’ve got to turn this on so everyone can hear the wonderful things you’re saying. This is Humans Plus AI. So what does Humans Plus AI mean to you? What does that evoke?
Nicole Radziwill: The first time that I did AI for work was in 1997, and back then, it was hard—nobody really knew much about it. You had to be deep in the engineering to even want to try, because you had to write a lot of code to make it happen. So the concept of humans plus AI really didn’t go beyond, “Hey, there’s this great tool, this great capability, where I can do something to augment my own intelligence that I couldn’t do before,” right?
What we were doing back then was, I was working at one of the National Labs up here in the US, and we were building a new observing network for water vapor. One of the scientists discovered that when you have a GPS receiver and GPS satellites, as the signal travels between the satellite and the receiver, it gets delayed. You could calculate, to very fine precision, exactly how long it would take that signal to go up and come back. Some very bright scientist realized that the signal delay was something you could capture—it was junk data, but it was directly related to water vapor.
So what we were doing was building an observing system, building a network to basically take all this junk data from GPS satellites and say, “Let’s turn this into something useful for weather forecasting,” and in particular, for things like hurricane forecasting, which was really cool, because that’s what I went to school for. Originally, back in the 90s, I went to school to become a meteorologist.
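For readers curious about the technique Nicole describes: GPS meteorology splits the signal delay into a hydrostatic part and a wet part, and the wet part maps almost linearly onto precipitable water vapor. Below is a minimal sketch of the standard conversion, using textbook constants from Bevis et al. (1992); the constants and example values are our illustration, not details given in the episode.

```python
# Minimal sketch of the standard GPS-meteorology conversion from
# zenith wet delay (ZWD) to precipitable water vapor (PWV), after
# Bevis et al. (1992). Constants and example values are textbook
# figures supplied for illustration, not details from the episode.

RHO_W = 1000.0   # density of liquid water, kg/m^3
R_V = 461.5      # specific gas constant of water vapor, J/(kg*K)
K2_PRIME = 22.1  # refractivity constant, K/hPa
K3 = 3.739e5     # refractivity constant, K^2/hPa

def mean_vapor_temperature(surface_temp_k: float) -> float:
    """Bevis approximation: weighted mean temperature of the water
    vapor column, estimated from surface temperature in kelvin."""
    return 70.2 + 0.72 * surface_temp_k

def precipitable_water_vapor(zwd_m: float, surface_temp_k: float) -> float:
    """Convert zenith wet delay (meters) to precipitable water vapor
    (meters). The conversion factor pi is typically around 0.15."""
    t_m = mean_vapor_temperature(surface_temp_k)
    pi = 1.0e8 / (RHO_W * R_V * (K3 / t_m + K2_PRIME))
    return pi * zwd_m

# Example: a 0.20 m wet delay at a 288 K surface temperature.
print(f"{precipitable_water_vapor(0.20, 288.0) * 1000:.1f} mm")
```

With these inputs the conversion factor comes out near 0.16, giving roughly 32 mm of precipitable water vapor, well within the normal atmospheric range.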
Ross Dawson: My brother studied meteorology at university.
Nicole Radziwill: Oh, that’s cool, yeah. They’re very, very cool people—you get science and math nerds who have to like computing because there’s no other way to do the job. That was a really cool experience. But, like I said, back then, AI was a way for us to get things done that we couldn’t get done any other way. It wasn’t really something we thought about using to relate differently to other people.
It wasn’t something that naturally lent itself to, “How can I use this tool to get to know you better, so that we can do better work together?” One of the reasons I’m so excited about the democratization of generative AI tools in particular—which to me are just a conversational layer on top of anything you want to put under them—is that now we have the opportunity to think about how we’re going to use these technologies to get to know each other’s work better.
That whole concept of sense making—taking what’s in my head and what’s in your head, what I’m working on and what you’re working on—and actually creating a common space where we can get amazing things done together. Humans plus AI, to me, is the fact that we now have tools that can help us make that happen, and we never did before, even though the tech was under the surface.
So I’m really excited about the prospect of using these new tools and technologies to access the older tools and technologies, to bring us all together around capabilities that can help us get things done faster, get things done better, and understand each other in our work to an extent that we haven’t done before.
Ross Dawson: That’s fantastic, and that’s really aligned in a lot of ways with my work. My most recent book was “Thriving on Overload,” which is about the idea of infinite information, finite cognition, and ultimately, sense making. So, the process of sense making from all that information to a mental model. We have our implicit mental models of how it is we behave, and one of the most powerful things is being able to make our own implicit mental models explicit, partly in order to be able to share them with other people.
In the current human-AI teams literature, shared mental models are a really fundamental piece, and now we’ve got AI that can assist us in getting to shared mental models.
Nicole Radziwill: Well, I mean, think about it—when you think about teams that you’ve worked in over the past however many years or decades, that whole initial part of onboarding, learning about your company, learning about the work processes, that entire fuzzy front end, is there to help you engage with the sense making of the organization, to figure out, “What is this thing I’ve just stepped into, and how am I supposed to contribute to it?”
We’ve always allocated a really substantive chunk of time up front for people to come in and make that happen. I’m really enticed by the different ways that we’re going to—for lack of a better word—mind meld, right? The organization has its consciousness, and you have your consciousness, and you want to bring your consciousness into the organization so that you can help it achieve greater things. But what’s that process going to look like? What’s step one of how you achieve that shared consciousness with your organization?
To me, this is a whole generation of tools and techniques and ways of relating to each other that we haven’t uncovered yet. That, to me, is super exciting, and I’m really happy that this is one of the things that I think about when I’m not thinking about anything else, because there’s going to be a lot of stuff going on.
Ross Dawson: All right. Well, let me throw your question back. So what is the first step? How do we get going on that journey to melding our consciousness across groups, people, and organizations?
Nicole Radziwill: Totally, totally. One of the people I have learned a lot from since the very beginning of my career is Tom Redman. You may know Tom Redman online, the data guru—he’s been writing the best data architecture and data engineering books, and ultimately data science books, in my opinion, since the beginning of time, which to me is like 1994.
He just posted another article this week, and one of the main messages was, in our organizations, we have to build AI in, not bolt it on. As I was reading, I thought, “Well, yeah, of course,” but when you sit back and think about it, what does that actually mean? If I go to, for example, a group—maybe it’s an HR team that works with company culture—and I say to them, “You’ve got to build AI in. You can’t bolt it on,” what they’re going to do is look back at me and say, “Yeah, that’s totally what we need to do,” and then they’re going to be completely confused and not know what to do next.
The reason I know that’s the case is because that’s one of the teams I’ve been working with the last couple of weeks, and we had this conversation. So together, one of the things I think we can do is make that whole concept of reimagining our work more tangible. The way I think we can do that is by consciously, in our teams, taking a step back and saying, rather than looking at what we do and the step one, step two, step three of our business processes, let’s take a step back and say, “Why are we actually doing this?”
Are there groups of related processes, and the reason we do these things every day is because of some reason—can we articulate that reason? Do we believe in that reason? Is that something we still want to do? I think we’ve got to encourage our teams and the teams we work with to take that deep step back and go to the source of why we’re doing what we’re doing, and then start there.
Make no assumptions about why we have to do what we’re doing. Make no assumptions about the extent to which we have to keep doing what we’re doing. Just go back to the ultimate goal and say, with no limitations, “How might I do that now, if I didn’t have the corporate politics, if I didn’t have these old, archaic, crusty systems that I had to fight with, what would I do?” Because we’re now in a position where the technical debt of scrapping some of those and starting some things new from scratch maybe is not quite as oppressive as it might have been in the past.
So that’s what I think the first step would be—go back to the why. Why are we doing these business processes? It’s great food for thought.
Ross Dawson: Yeah, well, I am a big proponent of redesigning work in organizations. So basically: set aside whatever you had in the past—now it’s humans plus AI. You have wonderful humans, you’ve got wonderful AI; how do you reconfigure them? Obviously, there are many pathways—most of them, unfortunately, will be de facto incremental, as in, “Well, this is what we’ve got, so how do we move forward?” But you have to start with that vision of where it is you are going.
To your point, saying, “Well, why? What is it you’re trying to achieve?” That’s when you can start to envisage that future state and the pathway from here to there. But we’re still only getting hints and glimpses of what these many, many different architectures of humans plus AI organizations can be.
Nicole Radziwill: Totally great. Have you seen any examples recently that really stand out in your mind of organizations that are doing it really well?
Ross Dawson: What I’ve been looking at—and it’s on my agenda to find more examples—is professional service firms that have re-architected, some of them from scratch. So we have Case Team and Super Good, relatively small organizations. Then there’s—I’ve forgotten the name—a new one founded by the former managing partners of EY and PwC in the UK, which is basically built from scratch. I haven’t seen inside it, but I have an inkling that they’re taking a decent approach.
But these are relatively fresh, so it’s harder to see examples of firms that have shifted from older workflows to new ones. And again, there’s not a lot of transparency. But you get a sense, as it were, of the best of the top professional firms, or the best of the right pockets within the largest ones—
Nicole Radziwill: I totally resonate with what you say about professional services. Those are the organizations that are picking it up more quickly, because they have to. I mean, who’s going to engage a professional services firm that says, “Oh yeah, we haven’t started working with the AI tools yet, we’re just doing it the old way”? No one is going to pick you up, because usually, what do you engage professional services firms for? It’s because they have skills that you don’t have, or because they have the time and the freedom or flexibility to go figure out those new things. You want their learning, you want to bring that into your organization.
So, yeah, that’s a really good thing that you picked up on there, because I’ve seen the same thing.
Ross Dawson: Well, I guess everything is—there’s a lot of rhetoric, as in they’re trying to sell AI services, and they say, “Yeah, well, look, we’re really good at it. Look at all these wonderful things,” and that may or may not reflect the reality. But again, I think the point of saying, look at the best of EY, look at the best of McKinsey, look at the best of Bain—Bain is actually doing some interesting stuff. But unfortunately, there’s not enough visibility, other than the PR talk, to really know how this is architected.
Nicole Radziwill: And you know, also, the other thing that I think about is, when you have a great idea and you’re bringing it into your organization, it doesn’t matter how extensively you’ve researched it, how many prototypes you’ve built—let’s say you have the most amazing idea to revamp the productivity of your organization right now—what’s stopping you is not the sanctity of your idea. It’s overcoming the brain barrier between you and other people.
How many times have you gone into an organization with a really great idea for improvement, but it just takes a long time to talk to people about it, to maybe educate them about the background or why you thought this was a good idea? Maybe you have to convince them that your new idea actually is something that would work in their pre-existing environment that they’re super comfortable with. The challenge is not the depth of the solution—it’s our ability to get into each other’s heads and agree upon a course of action and then do it.
That human part has always been the most difficult, but it’s been easy to think, “Oh no, it’s the technology part, because it takes longer.” The thing that I’m really intrigued by right now is that, since the time to develop technology keeps shrinking, it’s going to force us to solve some of the human issues that are really holding us back. And I think that’s pretty exciting.
Ross Dawson: So you are a co-founder of Team-X AI, which I’ve got to say looks like a very interesting organization. Perhaps before talking about what it does, I’d like to ask, what’s the premise, what is the idea that you are putting into practice in the company?
Nicole Radziwill: Cool, cool. So my goal—the first team that I managed, like I said, was back in the late 90s—has always been to help people work better together and with the new emerging technologies. The nature of the emerging technology is going to change over time; it doesn’t matter what it is right now. It’s to help people work better together with each other and with AI, particularly generative AI tools.
The thing that’s holding back organizational performance, at least from the teams that I’ve seen implement this, is that people have tended to adopt AI tools for personal productivity improvements. Everybody’s got access to the licenses, and they go in, they try and figure out, “How can I speed up this part of my process? How can I reduce human error here? How can I come into work in the morning and have my day be better than it would be without these tools?” So it’s been very individually focused.
But even a year, year and a half ago, some of my collaborators and I were noticing that the organizations that were really on the leading edge had taken a slightly different starting point. Instead of—well, I won’t say instead of; it’s in addition to—in addition to using the AI tools for personal productivity, they also said, “Let’s see how we can use these collaboratively. Let’s see how we can study our processes that are cross-cutting, processes that bring us all together in pursuit of results. Let’s study those. Let’s get ourselves around the generative AI campfire.
Let’s sit ourselves in a conference room or a Zoom meeting, and let’s engage with that generative AI together, so that we learn about each other’s inputs and so that we generate one solution together.” Those are the organizations that were really getting the biggest results. And surprisingly, now, a year-plus later, that’s still the chasm organizations have to cross. Think about the people you’ve worked with—lots of people are saying, “We know how to prompt now, we feel comfortable prompting. When are we going to start seeing the results?” It’s the transition from individual improvements to improvements at the team level, improvements that really work at the process level, that’s going to cause people to surge forward.
That’s why we decided to start with that premise: figure out how to help teams work with the people they have to work with, figure out what the barriers to collaboration between those people are, and make collaboration with AI at the team level more streamlined and easier for the team to pick up. We wanted to crack that code, and so that’s what we did.
So the Team-X stuff is an algorithm that actually looks at the space between people to help bust up those barriers to collaboration between the humans, so that the humans can collaborate better together with AI.
Ross Dawson: It definitely sounds cool. I want to dig in there. So is it essentially a facilitator, in the sense of being able to understand the humans involved and what they’re trying to achieve, in order to ensure that you have a collective intelligence emerging from that team? And if so, how specifically does it do that?
Nicole Radziwill: Yeah, okay, so for about 10 years, we were studying cognitively diverse teams. One of the problems we were trying to solve was, how do you get groups of people who are completely different from one another—who may be over-indexed in things like anxiety or depression or sensory-seeking or sensory-avoiding characteristics—when you get a group of extremely cognitively diverse people together, how do you help them be the most productive, the fastest? That was the premise 10 years ago. Actually, it’s even more than 10 years ago—if it’s 2025, 13 years ago.
By studying how to engage with those teams, how to be part of one of those teams—how do you do the forming, storming, and norming to get to performing? That was really the question to answer. Over the course of those years, by working through a lot of really unexpected situations, we started to see patterns—not within individual people, but in what happened when you got different people together.
Here’s an example: the most common unspoken norm, or hidden tension, that we see emerging in groups is where you have people whose preference for receiving information is in writing—if you’re going to tell me something I need to know, I prefer that you give it to me in writing so that I have a reference; I can see it, review it, keep it, and refer to it later. But guess what? The most likely possibility is that my preference for giving information to you is talking.
So think about the conflict that’s set up—if I expect everyone to give me information in writing so that I can be most productive, but I expect that I can speak it to you, there’s an imbalance there, because someone is not going to be getting what they need in order to be able to understand that information best.
Little conflicts like that—aspects of work styles, work habits, anything in your style that contributes to how you get results—can put you into conflict with other people if your baseline assumptions are different. Here’s another great example.
Ross Dawson: I can see how—what I think you’re describing is saying, okay, you’re picking up some patterns of team dysfunctions, as it were, and I can see how generative AI could be able to do that. It’s a little harder to see how you can get the analysis which would enable machine learning algorithms to identify those patterns.
Nicole Radziwill: Yeah, it’s vintage AI underneath the surface, so the conversational aspect comes later. That’s a really interesting thing to bring up, too—you know that you can’t solve all problems with generative AI, right? Some parts of your problem are best solved deterministically, some parts are best solved statistically, and some parts are best solved using Gen AI completely stochastically, where the window for the types of responses is larger, and that’s fine.
One of the things we had to do was be very cognizant about where we put the machine learning models, what they were producing, and then how we used those outputs to help people engage with their teams and reduce those barriers to collaboration. What we built is a mix of vintage AI—mostly unsupervised methods: PCA plus clustering algorithms. From those, we figured out the patterns we see a lot, and then we applied generative AI to build the narratives that teams can use to understand what those patterns mean.
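To make “vintage AI underneath the surface” concrete, here is a purely illustrative sketch of that kind of pipeline: standardize self-reported work-style scores, reduce them with PCA, cluster in the reduced space, and pass the cluster profiles to a generative model to narrate. The data, the features, and the model choices are all our assumptions; Team-X’s actual algorithm is not disclosed in the episode.

```python
# Illustrative sketch of a "vintage AI" team-pattern pipeline:
# PCA for structure, clustering for patterns, cluster profiles
# handed onward for narrative generation. Not Team-X's algorithm.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical self-reported work-style scores (rows = team members,
# columns = traits such as "prefers written input", "prefers to speak").
rng = np.random.default_rng(0)
responses = rng.uniform(1, 5, size=(12, 8))

# Standardize, then project onto a few principal components.
scaled = StandardScaler().fit_transform(responses)
components = PCA(n_components=3).fit_transform(scaled)

# Cluster in the reduced space to surface groups with similar habits.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)

# Summarize each cluster; these profiles (not raw data) would be the
# input to a generative model that drafts narratives for facilitators.
for k in range(3):
    profile = responses[labels == k].mean(axis=0).round(2)
    print(f"Cluster {k}: {np.sum(labels == k)} members, trait means {profile}")
```

In a real pipeline the cluster count, feature set, and validation would matter far more than this toy suggests; the point is only the division of labor Nicole describes, with deterministic and statistical methods finding the patterns and generative AI explaining them.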
Ross Dawson: So crudely, it’s diagnosis, and then solution—
Nicole Radziwill: Diagnosis, solution, and then human facilitation. So, yeah. Basically, when a team comes in and says, “We want to do Team-X,” we crunch a lot of data, use our models to figure out what those hidden tensions are, what those unspoken norms are, and what options are available to reduce barriers to collaboration for you. But then we work with the team for them to come up with, “What does that mean for us? How can we create the environment for each other so that we can move beyond our natural challenges, so that we can use generative AI more effectively together?”
Ross Dawson: So there’s a human facilitator that—
Nicole Radziwill: Yes, there’s algorithms, plus a human facilitator, plus ongoing support.
Ross Dawson: So describe that then in terms of saying, all right, you have the analysis which feeds into the diagnosis, the patterns, which feeds into the way in which you’re working with the team. So could you then frame this as a humans plus AI facilitation, as in both the human facilitation—
Nicole Radziwill: Yep, exactly. We collect data, run the algorithms, facilitate a session to get understanding, and then we—
Ross Dawson: So how does the human facilitator work with AI in order to be an effective facilitator of better outcomes?
Nicole Radziwill: Oh, I mean, mainly it’s just learning how to interpret the output and then learning how to guide the team towards the answer that’s right for them. The algorithms get you in the neighborhood, but they aren’t going to know exactly what challenges you’re dealing with right now. It’s through the immediate challenges any group is having at the moment that you can really highlight and say: what are the actions we need to take?
So we get to both of those points, and then we facilitate to bring the results from the algorithm together with what’s meaningful and important to the team right now, so that they can solve a pressing issue for them that they might not have solved any other way.
Ross Dawson: So in that case, the human facilitator has input from the AI to guide their facilitation, because there is, as you know, a body of interesting work around using AI for behavioral nudges in teams.
Nicole Radziwill: Oh, yeah, yeah, yeah. Didn’t that start with Laszlo Bock, the Google guy? He had some great work back then. He started a company, and then he sold the company, but the work that they were doing even back then—we relied upon that heavily as we were building on some of our ideas.
Ross Dawson: Yeah, well, Anita Williams Woolley at Carnegie Mellon is doing quite a bit in that space at the moment, and there’s also work at Australia’s CSIRO and a number of other places.
Nicole Radziwill: Oh, yeah, yeah.
Ross Dawson: So, tell me, what’s the experience, then, of taking this into organizations? What is the response? Do people feel that they are—yeah, I mean, obviously having a human facilitator is vastly helpful—what’s the response?
Nicole Radziwill: The managers and the leaders feel like, finally, they have someone who they can talk to, who can help them get answers about how to engage with their team in ways that they haven’t gotten answers before. That’s pretty cool. I like the feeling of helping people who otherwise might have just felt like they have to deal with these people situations and the technology situations on their own.
That’s great. We have people say things like, “It’s like personalized medicine for the teams.” The other comment that I thought was really cool is that the person said, “I’ve done a lot of assessments, and the assessments are all at the individual level. This is the only one that helps me figure out what I should do when I have to manage all of these people and somehow get them to work together to get this thing done right now. I don’t have a choice to move people in or out. I have to deal with the positives and the negatives here.
How can I relate to the members of my team as humans and get them what they need so that they can be more productive together?” I like how it’s helping shift the perspective. When I was first leading teams back in the 90s and early 2000s, I really thought it was my job to create an environment where the people are going to be able to work together harmoniously, where you’ll feel satisfied, where you’ll feel engaged, where you’ll feel invigorated. It was crushing to realize, no matter how well I set that up, someone was always going to think it was absolutely terrible, it wasn’t meeting their needs.
So I probably spent 20 years being crushed about, “Why can’t I set up the perfect team?” But then I realized part of creating a perfect team is acknowledging its imperfection and doing it out loud so that people don’t have expectations that are too high of each other. I mean, everyone comes to work for different reasons, right? I always went to work wanting to get self-actualization—how can I better achieve my purpose through this job—and not everybody feels that way.
So instead of me making a value judgment, saying, “That darn person, they’re just not taking their job seriously,” it helps to be able to have an algorithm say, “You should talk about what professionalism and engagement means. You should talk about the extent to which your soul is engaged in your work, and whether that’s a good thing here or not,” because none of those other methods bring stuff up like that—it’s just a little too touchy. So we’re not afraid to bring it up and see what happens.
Ross Dawson: So I understand some of the underlying data is self-reported style or engagement style and issues, but does it also include things like meeting conversations or online interactions?
Nicole Radziwill: No, not at all. In fact, that was one of the things that was most important to me. I don’t like surveillance. I don’t think surveillance is the right thing to do. I would not want to be a part of building any product that did that. Fortunately, one of the things we concluded was the person that you bring to work is largely constructed by your past experiences—last year, the year before, 20 years ago—the experiences that influence how you engage with your team. It’s much more long-term, and not just, “Are there great policies for time off now?”
So that really helps the data collection, because all we need to do is sample your work habits and styles over time, and then we can compare people to each other on that basis. There tends to be less conflict when you work with people who have unspoken habits and patterns similar to yours. Where the conflict arises is when somebody behaves very differently, and people read meaning into that action or reaction that the other person may never have intended.
Ross Dawson: So from here, what excites you about humans plus AI—or humans plus AI in teams—and your work? Where do you see the frontiers we need to be pushing?
Nicole Radziwill: Yeah, okay, so I think I was mentioning this to you at the very beginning, but I’ll bring it back up. One of the concepts that’s germane to what we’ve been doing is psychological safety, right? We all know that when you’re engaged in a team that has psychological safety, it’s easier to get results, people are more satisfied, and performance in general goes up.
But it turns out, when you look at all of the studies, going all the way back to Edmondson’s studies and before, the one factor that’s been—I won’t say left out, but kind of not acknowledged as much—is that it takes a long time for psychological safety to build. You need those relationships, you need the constant reiteration of scenarios, of experiences with each other that encourage you to trust each other.
What we know from practice is that the vibe of a team can shift from moment to moment, while psychological safety takes a long time to form. It can also be fragile—a new person coming into a team, or a person leaving, can completely shift the vibe, and when trust is broken, the cost to the team’s psychological safety can be extreme. It’s slow to form, it’s fragile, and it can disappear quickly.
So when I think about that concept, it reminds me that trust in an organization is constructed. You need a lot of experiences with each other for that to build up. This goes back to one of the things I was mentioning earlier about individual use of Gen AI versus collective use of Gen AI. I think just shifting our perception of what we should be doing from those individual productivity improvements to, “How can we use Gen AI to learn together, to reduce friction, to do that sense making, and to manage our cognitive load?”—I think that is how we construct trust actively.
That’s how we get over the challenge of it taking a long time to build psychological safety, and it being fragile. We just get in the habit of using those generative AI tools collectively as teams to get us literally on the same page. I honestly think that’s the solution that we’re all going to start marching towards over these next couple of years.
Ross Dawson: Yeah, I’m 100% with you. I mean, that’s what I’m focusing on at the moment as well.
Nicole Radziwill: Encourage people to do it, Ross. You’ve got to encourage people to do it, because it’s so easy to get some of those individual improvements and then just stop, or to say, “We know how to prompt and we’re just not getting the ROI we thought we would.” It’s going to be up to people like you to get the message out in the world that there is another level. There’s another place you can go, and it can really unlock some fantastic productivity, excellence, improvements—not just productivity, but true excellence.
Ross Dawson: Yeah, which goes back to what we’re saying about, essentially, the organizations of the future.
Nicole Radziwill: Yeah, I want to live in one of those organizations of the future. I think I felt it long ago, and it’s just been so disappointing that we haven’t gotten there yet. But people are going to be people. We’re always going to have our social dynamics, our power dynamics, but I really think that collective use of the new generation of AI tools is going to help us get somewhere that maybe we didn’t imagine getting to before.
Ross Dawson: So where can people find out more about your work and your company?
Nicole Radziwill: The best place to find me is on LinkedIn, because I’m one of the only Nicole Radziwills on LinkedIn. I invite new connections and always like to get into conversations with people. The other place is through our company’s webpage—it’s team-x.ai—and you can get in touch with me at either of those places. But usually, LinkedIn is where I post what I’m thinking, articles, or books that I’m writing, and I’ve got two books coming out this coming year, so I’ll be posting those there too.
Ross Dawson: Fantastic. Thank you so much for your time and your insights and your work, Nicole.
Nicole Radziwill: Thank you, Ross. It’s been delightful to chat with you.

Dec 3, 2025 • 37min
Joel Pearson on putting human first, 5 rules for intuition, AI for mental imagery, and cognitive upsizing (AC Ep25)
In this discussion, Joel Pearson, a Professor of Cognitive Neuroscience and founder of Future Minds Lab, dives into the nuances of intuition and AI. He introduces the SMILE framework, guiding listeners on when to trust their intuition or AI advice. Joel highlights the importance of designing AI to enhance human capabilities and outlines strategies for ethical AI usage. He also explores mental imagery and its intersection with AI, plus the concept of cognitive upsizing to stimulate our brains. Get ready for compelling insights on navigating an AI-driven future!

Nov 26, 2025 • 40min
Diyi Yang on augmenting capabilities and wellbeing, levels of human agency, AI in the scientific process, and the ideation-execution gap (AC Ep24)
“Our vision is that for well-being, we really want to prioritize human connection and human touch. We need to think about how to augment human capabilities.”
–Diyi Yang
About Diyi Yang
Diyi Yang is Assistant Professor of Computer Science at Stanford University, with a focus on how LLMs can augment human capabilities across research, work, and well-being. Her awards and honors include the NSF CAREER Award, a Carnegie Mellon Presidential Fellowship, IEEE AI’s 10 to Watch, Samsung AI Researcher of the Year, and many more.
Website:
Future of Work with AI Agents:
The Ideation-Execution Gap:
How Do AI Agents Do Human Work?
Human-AI Collaboration:
LinkedIn Profile:
Diyi Yang
University Profile:
Diyi Yang
What you will learn
How large language models can augment both work and well-being, moving beyond mere automation
Practical examples of AI-augmented skill development for communication and counseling
Insights from large-scale studies on AI’s impact across diverse job roles and sectors
Understanding the human agency spectrum in AI collaboration, from machine-driven to human-led workflows
The importance of workflow-level analysis to find optimal points for human-AI augmentation
How AI can reveal latent or hidden human skills and support the emergence of new job roles
Key findings from experiments using AI agents for research ideation and execution, including the ideation-execution gap
Strategies for designing long-term, human-centered collaboration with AI that enhances productivity and well-being
Episode Resources
Transcript
Ross Dawson: It is wonderful to have you on the show.
Diyi Yang: Thank you for having me.
Ross Dawson: So you focus substantially on how large language models can augment human capabilities in our work and also in our well-being. I’d love to start with this big frame around how you see that AI can augment human capabilities.
Diyi Yang: Yeah, that’s a great question. It’s something I’ve been thinking about a lot—work and well-being. I’ll give you a high-level description of that. With recent large language models, especially in natural language processing, we’ve already seen a lot of advancement in tasks we used to work on, such as machine translation and question answering. I think we’ve made a ton of progress there. This has led me, and many others in our field, to really think about this inflection point moving forward: How can we leverage this kind of AI or large language models to augment human capabilities?
My own work takes the well-being perspective. Recently, we’ve been building systems to empower counselors or even everyday users to practice listening skills and supportive skills. A concrete example is a framework we proposed called AI Partner and AI Mentor. The key idea is that if someone wants to learn communication skills, such as being a really good listener or counselor, they can practice with an AI partner or a digitalized AI patient in different scenarios. The process is coached by an AI mentor. We’ve built technologies to construct very realistic AI patients, and we also do a lot of technical enhancement, such as fine-tuning and self-improvement, to build this AI coach.
With this kind of sandbox environment, counselors or people who want to learn how to be a good supporter can talk to different characters, practice their skills, and get tailored feedback. This is one way I’m envisioning how we can use AI to help with well-being. This paradigm is a bit in contrast to today, where many people are building AI therapists. Our vision is that for well-being, we really want to prioritize human connection and human touch. We need to think about how to augment human capabilities. We’re really using AI to help the helper—to help people who are helping others. That’s the angle we’re thinking about.
Going back to work, I get a lot of questions. Since I teach at universities, students and parents ask, “What kind of skills? What courses? What majors? What jobs should my kids and students think about?” This is a good reflection point, as AI gets adopted into every aspect of our lives. What will the future of work look like? Since last year, we’ve been thinking about this question. With my colleagues and students, we recently released a study called The Future of Work with AI Agents. The idea is straightforward: In current research fields like natural language processing and large language models, a lot of people are building agentic benchmarks or agents for coding, research, or web navigation—where agents interact with computers. Those are great efforts, but it’s only a small fraction of society.
If AI is going to be very useful, we should expect it to help with many job applications, not just a few. With this mindset, we did a large-scale national workforce audit, talking to over 1,500 workers from different occupations. We first leveraged the O*NET database from the US Department of Labor to access occupations that use computers in some part of their work. Then we talked to 10 to 15 workers from each occupation about the tasks they do, how technology can help, in what ways they want technology to automate or augment their work, and so on. Because workers may not know concretely how AI can help, we gave summaries to AI experts, who helped us assess whether, by 2025, AI technology would be ready for automation or augmentation.
We got a very interesting audit. To some extent, you can divide the space into four regions: one where AI is ready and workers want automation; another where AI is not ready but workers want automation; a third where AI is ready but workers do not want automation; and a low-priority zone. Our work shows that today’s investment is pretty uniformly distributed across these four regions, whereas research is focused on just one. We also see potential skill transitions. If you look at today’s highly paid skills, the top one is analyzing data and information. But if you ask people what kind of agency they want for different tasks, moving forward, tasks like prioritizing and organizing information are ranked at the top, followed by training and teaching others.
To summarize, thinking about how AI can concretely augment our capabilities, especially from a work and well-being perspective, is something that gets me really excited.
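As an aside for readers, the four regions Diyi describes form a simple two-by-two over worker desire and AI readiness. Here is a toy sketch of that classification; the zone wording and the example tasks are our own illustration, not labels or data from the paper.

```python
# Toy sketch of the four-region workforce audit: tasks are placed by
# (worker desire for automation) x (expert-assessed AI readiness).
# Zone names and example tasks are invented for illustration.
def audit_zone(workers_want_automation: bool, ai_is_ready: bool) -> str:
    if workers_want_automation and ai_is_ready:
        return "green light: automate now"
    if workers_want_automation and not ai_is_ready:
        return "R&D opportunity: capability gap"
    if ai_is_ready:
        return "red light: ready but unwanted"
    return "low priority"

# Hypothetical task ratings, as if aggregated from worker interviews
# and expert capability assessments.
tasks = {
    "scheduling appointments": (True, True),
    "novel research ideation": (True, False),
    "writing performance reviews": (False, True),
    "office event planning": (False, False),
}
for task, (want, ready) in tasks.items():
    print(f"{task}: {audit_zone(want, ready)}")
```

In the actual study, each zone aggregates ratings from many workers and expert assessments rather than a single boolean pair; the sketch only shows the shape of the space.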
Ross Dawson: Yeah, that’s fantastic. There are a few things I want to come back to—particularly this idea of where people want automation or augmentation. The reality is that people only do things they want to do, and we’re trying to build organizations where people want to be there and want to flourish. And, to your point, some occupations don’t understand AI’s capabilities. With some change management, or by bringing the tools to them, they might come to see value in things they were initially reluctant to do.
The Future of Work with AI Agents paper was a real landmark and got a lot of attention this year. One of its central contributions was the human agency scale. We talk about agents, but the key point is agency—who is in control? There’s a spectrum from one to five of different levels of how much agency humans have in combination with AI. We’re particularly interested in the higher levels, where we have high human agency and high potential for augmentation. Are there any particular examples, or how do we architect or structure things so that we can get those high-agency, high-augmentation roles?
Diyi Yang: Yeah, that’s a very thoughtful question. Going back to the human agency you mentioned, I want to just provide a brief context here. When we were trying to approach this question, we found there was no shared language for how to even think about this. A parallel example is autonomous driving, where there are standards like L0 to L5, which is an automation-first perspective—L0 is no automation, L5 is full automation. Similarly, now we need a shared language to think about agency, especially with more human-plus-AI applications.
So, H1 to H5 is the human agency scale we proposed. H1 refers to the machine taking all the agency and control. H5 refers to the human taking all the agency or control. H3 is equal partnership between human and AI. H2 is AI taking the majority lead, and H4 is human taking the majority lead. This framework makes it possible to approach the question you’re asking.
One misunderstanding many people have about AI for work is that they think, “Oh, that’s software engineering. If they can code, we’ve solved everything.” The reality is that even in software engineering, there are so many tasks and workflows involved in people’s daily jobs. We can’t just view agency at the job level; we need to go into very specific workflow and task levels. For example, in software engineering, there’s fixing bugs, producing code, writing design documentation, syncing with the team, and so on.
When we think about agency and augmentation, the first key step is finding the right granularity to approach it. Sometimes AI adoption fails because the granularity isn’t there. An interesting question is, how do we find where everyone wants to use AI in their work for augmentation? Recently, we’ve been thinking about this, and we’re building a tool called workflow induction. Imagine if I could sit next to you and watch how you do your tasks—look at your screen, see how you produce a podcast, edit and upload it, add captions, etc. I observe where you struggle, where it’s very demanding, and where current AI could help. If we can understand the process, we can find those moments where augmentation can happen.
This is an ongoing effort, thinking about how we can bring in more modalities—not just code, but looking at your surrounding computer use—to see where we can find those right moments for the right intervention.
Ross Dawson: So what stage is that research or project at the moment?
Diyi Yang: We just released a preprint called “How Do AI Agents Do Human Work?”, which is closely related to the Future of Work article. We sampled some job occupations from O*NET, hired professionals, assembled a set of AI agents, and recorded the process of how each did the tasks. Then we compared how AI agents make slides and write code with how professionals do the same. We observed, step by step, where agents do things really well, where humans can learn from them, where humans struggle, and where there might be a better solution offered by human or AI.
With this workflow induction tool, you can really see what’s exactly happening and where you should augment.
Ross Dawson: I looked at that paper, and in the opportunities for collaboration section, it had different workflows. It turned out that where the machine struggled and the human could do something was in finding and downloading a file. So it suggested that the human should download the file and the AI should do the rest, because it could do a lot more, faster—pretty accurately, but not necessarily accurately enough.
So there’s this point: where can humans help machines, and where can AI help humans? But I think there can also be an intent to maximize the human roles, so that where we can augment capabilities, the AI assists, making the workflow more human rather than more AI. That’s one of the problems—call it Silicon Valley or just a lot of current development—it’s about bringing in agents as much as possible. How can we take an approach where we’re always seeking to incorporate and augment the humans, as opposed to just finding where the agent is equivalent or faster, but where the human could benefit by being more involved?
Diyi Yang: That’s a very interesting question. I want to say that I never view this as a competition between humans vs AI or humans vs agents. I view it more as an opportunity: can human plus AI help us do things we couldn’t do before? Our current set of tasks may be much bigger than what we have today. It’s not just about bringing more augmentation or automation to current tasks; it’s about finding more tasks relevant to society that human plus AI can work on together.
Going back to the terms you mentioned—automation versus augmentation—this is a key construct today. But I want to point out something amazing: emergence. It’s not only about automation versus augmentation, because that concept assumes we only have a fixed set of tasks. But what if there are more tasks? What if we solve many existing routine workflows and realize humans can work on higher-value things? That’s the opportunity and emergence we’re thinking about.
From a research perspective, we’re looking at how the technology feels today and how we should think about augmentation, though some of this is constrained by current AI agent capabilities. I’m sure they’ll get much better in the next six months. If we’re just thinking about one task, then maybe models aren’t doing very well for that task, so let’s bring in people to collaborate and get better performance. But from a counter-argument perspective, by observing how humans work with AI, we get more training data, which can be used to train better AI. That means, for that specific task, automation could take a bigger part of the pie, which might not be what we want.
There are both short-term and long-term considerations in human-AI collaboration. Personally, I’m very excited about using current insights and empirical evidence to find more emergence—new areas and discoveries we can do together as a team, rather than framing it as a competition between humans and AI.
Ross Dawson: Yeah, absolutely. I completely agree. As we’re both saying, a lot of the mindset is about getting humans and AI to work together so AI learns to do it better and better, eventually taking the human out. But I think there’s another frame: my belief is that every time humans and AI interact, the human should be smarter as a result, rather than just cognitive offloading.
To your point about emergence, this goes to the fallacy around the future of work being fixed demand. As we can do more things, there’s more demand to do more things—software development is an obvious example. I love this idea of emergence: the emergence of new roles to perform and new ways to create value for society. Is there anything specific you can point to about how you’re trying to draw out that emergence of roles, capabilities, or functions?
Diyi Yang: I think this is a really hard question—can you forecast what new jobs will occur in society? The reality is, I cannot. But I can share some insights. For example, there’s a meme or joke on LinkedIn about coding agents: because coding agents can produce a lot of code, now the burden is more on review or verification. So there’s this new job called “code cleanup specialist.” The skill is shifting from producing things to verification.
I’m not predicting that as a job, but we do have some empirical methods or methodologies that can help. Of course, there are many societal and non-technical factors involved. One thing we’ve been thinking about is identifying hidden skills demonstrated in work that even people themselves aren’t aware of. The workflow induction tool is one lens for that.
All of us find certain parts of our jobs very challenging or cognitively demanding, or sometimes we think, “I could find a different way to approach this,” or “This method could be used for something else,” or “Maybe it inspires a new idea.” There are many non-static dimensions in current workflows. If we could have a tool to audit how we’re doing things—how I’m doing my work, how you’re doing yours, what’s different—we might be able to abstract shared dimensions, pain points, or missing gaps. That could be a very interesting way to think about new opportunities.
For example, if you’re thinking about coding-related skills or jobs, maybe this is one way to reflect on where engineers spend most of their time struggling, and whether we should provide more training or augmentation. I prefer an evidence-based approach. That’s our current thinking on how we can help with that.
The last point I want to add—and this is also why I really love this podcast, Humans Plus AI—over time, I’ve realized that talking to people is becoming more valuable, because you get to hear how people approach problems and the unique perspectives they bring, especially domain experts. It’s hard to capture domain knowledge, and much of it is undocumented. That’s the part AI doesn’t have. But if you talk to people and hear how they view their work and new possibilities, that’s how many new AI applications emerge—because people keep reflecting on their work. So I think a more qualitative approach to understanding the workforce today is going to be very valuable.
Ross Dawson: Yeah, absolutely. I believe conversations are becoming more valuable, and conversations are, by their nature, emergent—you don’t know where you’ll end up. In fact, I find the value of conversations is often as much in the things I say, which I find interesting, as in what the other person says. That’s the emergent piece.
Going back to what you said, of course you can’t say what will come out of emergence—that’s the nature of it. But what you can do is create the conditions for emergence. If we’re looking at latent capabilities in humans—and I believe everyone is capable of far more than they imagine, though we don’t know what those things are—how do we create the conditions by which latent capabilities can emerge? Now, AI can assist us in various ways to surface that, maybe through the way it interacts, suggesting things to try. Can you envisage something where AI allows our latent capabilities to become more visible or expressed?
Diyi Yang: That’s also a hard question. Maybe I’ll just use some personal experience. I definitely think that now, when I think about how AI is influencing my own work—as a professor, teaching and doing research—there are many dimensions. For example, I teach a course on human-centered large language models, and I really want to make the human-plus-AI concept clear to my students. Sometimes I’m frustrated because I want to find a really good example or metaphor to make the idea clear, and it’s hard. But AI can help me generate contextualized memes, jokes, or scenarios to explain a complicated algorithm to a broader audience.
On the other side, it helps me reveal capabilities I wasn’t aware of—maybe not capabilities, but desires. The desire to be creative in my teaching, to engage with people and make things clear. I wouldn’t say those are latent skills, but AI helps make my desires more concrete, and certain skills shift in the process.
Earlier, I mentioned that in the future of work, we observe skill shifting in the general population—from information processing to more prioritizing work and similar tasks. I hope we can have more empirical evidence of that. In terms of research, right now it’s more about bi-directional use, rather than helping me discover hidden skills. But we’ve been doing a lot of work to think about how AI can be a co-pilot in our research process.
Ross Dawson: Oh, right. I’d love to hear more about AI in the scientific process. I think it’s fascinating—there are many levels, layers, or phases of the scientific process. Is there any specific way you’re most excited about how AI can augment scientific progress?
Diyi Yang: Yes, happy to talk about this. When we were working on the Future of Work study, I was thinking about scientists or researchers as one job category—how we could benefit or think about this process. One dimension we’ve approached is whether large language models can help generate research ideas for people working on research. This is a process that can sometimes take months.
We built an AI agent to generate research ideas in natural language processing, such as improving model factuality, reducing hallucination, dealing with biases, or building multilingual models—very diverse topics. We gave AI agents access to Google Scholar, Semantic Scholar, and built a pipeline to extract ideas. The interesting part is our large-scale evaluation: we recruited around 50 participants, each writing ideas on the same topic. Then we had a parallel comparison of AI-generated and human-produced ideas. We merged them together, normalized the style, and gave the set to a third group of human reviewers, without telling them which was which. In fact, they couldn’t differentiate based on writing.
We found that, after review, the LLM-generated research ideas were perceived as more novel, with overall higher quality and similar feasibility. This was very surprising. We did a lot of control and robustness checks to make sure there were no artifacts, and the conclusion remained. It was surprising—think about it, natural language processing is a big field. If AI can generate research ideas, should I still do my own research?
So we did a second study: what if we just implemented those ideas? We took a subset of ideas from the first study, recruited research assistants to work on them for about three months, and they produced a final paper and codebase. We gave these to third-party reviewers to assess quality and novelty. Surprisingly, we found an ideation-execution gap: when the ideas were implemented, the human condition scores didn’t change much, but the AI condition scores for novelty and overall quality dropped significantly. So, when you turn AI-generated ideas into actual implementations, there’s a significant drop.
Now we’re thinking about approaches to supervise the process of generating novel research ideas, leveraging reinforcement learning and other techniques.
Ross Dawson: I was just going to say, that paper—the ideation-execution gap—is extremely interesting. Why do you think that’s the case, where humans assess the LLM ideas to be better, but when you put them into practice, they weren’t as good as the human ideas? Why do you think that is?
Diyi Yang: I think there are multiple dimensions. First, with the ideas themselves, you can’t see how well the idea works until you try it. An idea could be great, but in practice, it might not work. On the written form, LLMs can access thousands or millions of papers, so they bring in a lot of concepts together. Many times, if you read the ideas, they sound fancy, with different techniques and combinations, and look very attractive. So, the ideas produced by LLMs look very plausible and sound novel, probably because of cross-domain inspiration.
But when you put them into practice, it’s more about implementation. Sometimes the ideas are just not feasible. Sometimes they violate common sense. The idea isn’t just a two-sentence description—it also has an execution plan, the dataset to use, etc. Sometimes the datasets suggested by AI are out of date, or they’ll say, “Do a human study with 1,000 participants,” which is really hard to implement. That’s our current explanation or hypothesis. Of course, there are other dimensions, but so far, I’d say AI for research idea generation is still at an early stage. It’s easy and fast to generate many ideas, but very challenging to validate that.
Ross Dawson: Yeah, which goes to the human role, of course. I love the way you think about things—your attitude and your work. What are you most excited about now? Where do you think the potential is? Where do we need to be working to move toward as positive a humans-plus-AI world as possible?
Diyi Yang: This is a question that keeps me awake and excited most of the time. Personally, I am very optimistic about the future. We need to think about how AI can help us in our work, research, and well-being. We see a lot of potential negative influences of this wave of AI on people’s relationships, critical thinking, and many skills. But on the other side, it provides opportunities to do things we couldn’t do before. That’s the broader direction I’m excited about.
On the technical side, we need to advance human-AI interaction and collaboration with long-term benefits. Today, we train AI with objectives that are pretty local—satisfaction, user engagement, etc. I’m curious what would happen if we brought in more long-term rewards: if interacting with AI improved my well-being, productivity, or social relationships. How can we bring those into the ecosystem? That’s the space I’m excited about, and I’m eager to see what we can achieve in this direction.
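As a toy illustration of what bringing in longer-term rewards could mean, consider an objective that weights an immediate engagement signal against a slower-moving measure of wellbeing. The signals and weights below are illustrative assumptions, not anything from Diyi's published work.

```python
def composite_reward(engagement: float,
                     wellbeing_before: float,
                     wellbeing_after: float,
                     alpha: float = 0.3,
                     beta: float = 0.7) -> float:
    """Toy reward mixing a local signal with a long-term one.

    engagement: immediate proxy, e.g. session satisfaction in [0, 1].
    wellbeing_*: slower-moving measure sampled before and after a
        period of use, also in [0, 1].
    alpha, beta: illustrative weights favouring long-term change.
    """
    long_term_delta = wellbeing_after - wellbeing_before
    return alpha * engagement + beta * long_term_delta

# High engagement but declining wellbeing yields a low reward, so an
# agent trained on this signal is not rewarded for mere stickiness.
print(round(composite_reward(engagement=0.9,
                             wellbeing_before=0.7,
                             wellbeing_after=0.5), 2))  # 0.13
```

The hard part, of course, is measuring wellbeing or productivity reliably enough to optimize against, which is exactly the open research question Diyi points to.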
Ross Dawson: Well, no doubt the positive directions will be very much facilitated and supported by your work. Is there anywhere people should go to look at your work? I think you mentioned you have an online course. Is there anything else people should be aware of?
Diyi Yang: If anyone's interested, feel free to visit the Human-Centered Large Language Model course on the Stanford website, or just search for any of the papers we have chatted about.
Ross Dawson: Yeah, we’ll put links to all of those in the show notes. Thank you so much for your time, your insights, and your work. I really enjoyed the conversation.
Diyi Yang: Thank you. I also really enjoyed the conversation.
The post Diyi Yang on augmenting capabilities and wellbeing, levels of human agency, AI in the scientific process, and the ideation-execution gap (AC Ep24) appeared first on Humans + AI.

Nov 19, 2025 • 40min
Ganna Pogrebna on behavioural data science, machine bias, digital twins vs digital shadows, and stakeholder simulations (AC Ep23)
Ganna Pogrebna, a Research Professor and expert in behavioural data science, dives into the intricacies of human bias in AI. She highlights how algorithms can inherit human biases, using Amazon's hiring tool as a cautionary tale. Ganna discusses the need for context-rich prompting when working with AI, alongside the importance of combining human judgment with machine efficiency. She emphasizes the value of simulations and digital twins in refining strategic decisions, illustrating how they can unlock insights into stakeholder dynamics.

Nov 12, 2025 • 39min
Sue Keay on prioritizing experimentation, new governance styles, sovereign AI, and the treasure of national data sets (AC Ep22)
Dr. Sue Keay, Director of the UNSW AI Institute and robotics advocate, discusses the pivotal role of AI and robotics in tackling environmental challenges, including innovative methods for preserving the Great Barrier Reef. She emphasizes the necessity of open-minded leadership for effective AI transformation and the importance of sovereign AI for national data security. Sue also explores how AI will reshape job roles, balancing cognitive augmentation with practical applications in education and industry, while urging investment in public AI infrastructure to attract talent.

Nov 6, 2025 • 39min
Dominique Turcq on strategy stakeholders, AI for board critical thinking, ecology of mind, and amplifying cognition (AC Ep21)
Dominique Turcq, founder of Boostzone Institute and former McKinsey partner, dives into how organizational strategy must evolve to encompass societal and environmental stakeholders, alongside shareholders. He discusses the importance of long-term foresight and scenario planning in decision-making. With AI shifting governance dynamics, Turcq highlights the necessity for boards to assess both opportunities and risks. He warns that over-relying on AI can lead to cognitive atrophy and emphasizes the need for organizations to foster a healthy 'ecology of mind' to nurture creativity and critical thinking.

Oct 29, 2025 • 35min
Beth Kanter on AI to augment nonprofits, Socratic dialogue, AI team charters, and using Taylor Swift’s pens (AC Ep20)
“I call it the AI sandwich. When we want to use augmentation, we’re always the bread and the LLM is the cheese in the middle.”
–Beth Kanter
About Beth Kanter
Beth Kanter is a leading speaker, consultant, and author on digital transformation in nonprofits, with over three decades of experience and global demand for her keynotes and workshops. She has been named one of the most influential women in technology by Fast Company and received the Lifetime Achievement Award in nonprofit technology from NTEN. She is the author of The Happy Healthy Nonprofit and The Smart Nonprofit.
Website:
bethkanter.org
LinkedIn Profile:
Beth Kanter
Instagram Profile:
Beth Kanter
What you will learn
How technology, especially AI, can be leveraged to free up time and increase nonprofit impact
Strategies for reinvesting saved time into high-value human activities and relationship-building
A practical framework for collaborating with AI by identifying automation, augmentation, and human-only tasks
Techniques for using AI as a thinking partner—such as Socratic dialog and intentional reflection—to enhance learning
Best practices for intentional, mindful use of large language models to maximize human strengths and avoid cognitive offloading
Approaches for nonprofit fundraising using AI, including ethical personalization and improved donor communication
Risks like ‘work slop’ and actionable norms for productive AI collaboration within teams
Emerging human skills essential for the future of work in a humans-plus-AI organizational landscape
Episode Resources
Transcript
Ross Dawson: Beth, it is a delight to have you on the show.
Beth Kanter: Oh, it’s a delight to be here. I’ve admired your work for a really long time, so it’s really great to be able to have a conversation.
Ross Dawson: Well, very similarly, for the very, very long time that I’ve known of your work, you’ve always focused on how technologies can augment nonprofits. I’d just like to hear—well, I mean, the reason is obvious, but I’d like to know the why, and also, what is it that’s different about the application of technologies, including AI, to nonprofits?
Beth Kanter: So I think the why is, I mean, I’ve always—I’ve been working in the nonprofit sector for decades, and I didn’t start off as a techie. I kind of got into it accidentally a few decades ago, when I started on a project for the New York Foundation for the Arts to help artists get on the internet. I learned a lot about the internet and websites and all of that, and I really enjoyed translating that in a way that made it accessible to nonprofit leaders. So that’s sort of how I’ve run my career in the last number of decades: learn from the techies, translate it, make it more accessible, so people have fun and enjoy the exploration of adopting it.
And that’s what actually keeps me going. Whenever a new technology or something new comes out, it’s the ability to learn something and then turn around and teach it to others and share that learning. In terms of the most recent wave of new technology—AI—my sense is that with nonprofits, some have barreled ahead, the early adopters doing a lot of cutting-edge work, but a lot of organizations are stuck: either they’re really concerned about all the potential bad things that can happen with the technology, which traps them from moving forward, or there’s no cohesive strategy around it, so there’s a lot of shadow use going on.
Then we have a smaller segment that is doing the training and trying to leverage it at an enterprise level. So I see organizations at these different stages, with a majority of them at the exploring or experimenting stage.
Ross Dawson: So, you know, going back to what you were saying about being a bit of a translator, I think that’s an extraordinarily valuable role—how do you take the ideas and make them accessible and palatable to your audience? But I think there’s an inspiration piece as well in the work that you do, inspiring people that this can be useful.
Beth Kanter: Yeah, to show—to get people past their concerns. There are a lot of folks, and this has been a constant theme for decades: the technology changes, but the people stay the same, and the concerns are similar—“It’s going to take a long time to learn it,” “I feel overwhelmed.” I think AI adds an extra layer, because people are very aware, from reading the headlines, of some of the potential societal impacts, and people also have in their heads some of the science fiction we might have grown up with, like the evil robots.
So that’s always there—things like, “Oh, it’s going to take our jobs,” you name it. Usually, those concerns come from people who haven’t actually worked with the technology yet. So sometimes just even showing them what it can do and what it can’t do, and opening them up to the possibilities, really helps.
Ross Dawson: I want to come back to some of the specific applications in nonprofits, but you’ve been sharing a lot recently about how to use AI to think better, I suppose, is one way of framing it. We have, of course, the danger of cognitive offloading, where we just stick all of our thinking into the machine and stop thinking for ourselves, but also the potential to use AI to think better.
I want to dig pretty deep into that, because you have a lot of very specific advice on that. But perhaps start with the big framing around how it is we should be thinking about that.
Beth Kanter: Sure. The way I always start with keynotes is I ask a simple question: If you use AI and it can give your nonprofit back five hours of time—free up five hours of time—how would you strategically reinvest that time to get more impact, or maybe to learn something new? I use Slido and get these amazing word clouds about what people would learn, or they would develop relationships, or improve strategies, and so forth. I name that the “dividend of time,” and that’s how we need to think about adopting this technology.
Yes, it can help us automate some tasks and save time, but the most important thing is how we reinvest that saved time to get more impact. For every hour that a nonprofit saves with the use of AI, they should invest it in being a better human, or invest it in relationships with stakeholders.
Or, because our field is so overworked, maybe it’s stepping back and taking a break or carving out time for thinking of more innovative ideas. So the first thing I want people to think about is that dividend of time concept, and not just rush headfirst into, “Oh, it’s a productivity tool, and we can save time.”
The next thing I always like to get people to think about is that there are different ways we can collaborate with AI. I use a metaphor, and I actually have a fun image that I had ChatGPT cook up for me: there are three different cooks in the kitchen. We have the prep chef, who chops stuff or throws it into a Cuisinart—that’s like automation, because that saves time. Then we have the sous chef, whose job is tasting and making decisions to improve whatever you’re cooking. That’s a use case or way to collaborate with AI—augmentation, helping us think better. And the third is the family recipe, which is the tasks and workflows that are uniquely human, the different skills that only a human can do.
So I encourage nonprofits to think about whatever workflow they’re engaged with—whether it’s the fundraising team, the marketing team, or operations—to really think through their workflow and figure out what chef hat they’re wearing and what is the appropriate way to collaborate with AI.
Ross Dawson: So in that collaboration or augmentation piece, what are some specific techniques or approaches that people can use, or mindsets they can adopt, for ideation, decision making, framing issues, or developing ideas? What approaches do you think are useful?
Beth Kanter: One of the things I do when I’m training is point out that large language models—generative AI—are very flexible. It’s like a Swiss Army knife; you can use it for anything. Sometimes that’s the problem. So I like to have organizations think through: what’s a use case that can help you save time? What’s something you’re doing now that’s a rote task—maybe reformatting a spreadsheet or helping you edit something?
Pick something that can save you some time, then block out time and preserve that saved time for something that can get your organization more impact. The next thing is to think about where in your workflow is something where you feel like you can learn something new or improve a skill—where your skills could flourish.
And then, where’s the spot where you need to think? I give them examples of different types of workflows, and we think about sorting them in those different ways. Then, get them to specifically take one of these ways of working—that is, to save time—and we’ll practice that.
Then another way of working, which is to learn something new, and teach them, maybe a prompt like, “I need to learn about this particular process. Give me five different podcasts that I should listen to in the right order,” or “What is the 80/20 approach to learning this particular skill?”
So it’s really helping people take a look at how they work and figuring out ways where they can insert a collaboration to save time, or a collaboration to learn something new.
Ross Dawson: What are ways that you use LLMs in your work?
Beth Kanter: I use them a lot, and I tend to stay on the augmentation side—I never have them do tasks for me. I use them mostly as a thought partner, and for deep research—not only to scan and find things I want to read related to what I’m learning, but also to help me think about it and reflect on it.
One of my favorite techniques is to share a link of something I’ve read and maybe summarize it a bit for the large language model, saying, “I found these things pretty interesting, and it kind of relates to my work in this way. Lead me through a Socratic dialog to help me take this reflection deeper.” Maybe I’ll spend 10 minutes in dialog with Claude or ChatGPT in the learn mode, and it always brings me to a new insight or something I haven’t thought of. It’s not that the generative AI came up with it; it just prompted me and asked me questions, and I was able to pull things from myself. I find that really magical.
Ross Dawson: So you just say, “Use a Socratic dialog on this material”?
Beth Kanter: Yeah, sometimes a Socratic dialog, or I might say what I think about it and ask it to argue with me. I’ll tell it, “You vehemently disagree. Now debate me on this.”
Ross Dawson: Yeah, yeah. I love the idea of using LLMs to challenge you. So I tend to not start with the LLM giving me stuff, but I start with giving the LLM stuff, and then say, “All right, tell me what’s missing. How can I improve this? What’s wrong with it?”
Beth Kanter: I call it the AI sandwich. When we want to use augmentation, we’re always the bread and the LLM is the cheese in the middle. You always want to do your own thinking. I take it one step further—I think with a pen and paper first.
Ross Dawson: Right. So, as you were alluding to before, one of the very big concerns that has really risen over the last three to six months—everyone sharing headlines like “GPT makes you dumber”—is, I think, in many ways about how you use it. You raise this idea of, “What can I learn? How can I learn it?” But more generally, how can we use LLMs to become smarter, more intelligent, better—not just while we use the tools, but also after we take them away?
Beth Kanter: That’s such a great question, and it’s one I’ve been thinking about a lot. I think the first thing we just discussed is a key practice: think for yourself first. Don’t automatically go to a large language model to ask for answers—start with something yourself.
I also think about how I can maximize my human, durable skills—the things that make me human: my thinking, my reflection, my adaptability. For instance, if I need to think about something, I go out for a walk first and think it through. I’ve also tried to approach it with a lot of intention, and I encourage people to think about what are human brain–only tasks, and actually write them up for themselves. Then, what are the tasks where you might start with your human brain and then turn to AI as a partner, so you have some examples you can follow?
I encourage people to ask a couple of reflection questions to help them work this out. Will doing this task myself strengthen the abilities I need for leadership, or is it something I should collaborate with AI on? Does this task require my unique judgment or creativity, so that I need to think about it first? Am I reaching for AI because I don’t want to think this through myself? Am I just tired? I don’t want to use the word lazy, but maybe it’s, “Oh, I don’t feel like thinking through this.” If you find yourself in that category, I think that’s a danger, because it’s very easy to slide into it—the tools give you such easy answers if you ask them to provide just the answers.
So being really intentional with your own use cases—what’s human brain–only, what’s human brain–first, and then when do you go to AI? The other thing that’s also really important—I read this article. I’m not a Taylor Swift fan, but I am a pen addict, and I collect fountain pens and all kinds of pens. It was a story about how Taylor Swift has three different pens that she uses to write her songs: a fountain pen for reflective ballads, a glitter pen for bouncy pop tunes, and a quill for serious kinds of songs. She decides, if she wants to write a particular song, she’ll cue her brain by using a particular pen.
So that’s the thing I’ve started to train myself to do when I approach using this tool: what mode am I in, and remember that when I’m collaborating with AI. The other thing, too—all of the models, Claude, ChatGPT, Gemini, have all launched a guided learning or a study and learn mode, which prevents you from just getting answers. I use that as my default. I never use the tools in the other modes.
Ross Dawson: All right, so you’re always in study mode.
Beth Kanter: I’m always in study mode, except if I’m researching something, I might go into deep research. The other thing I’ve done for myself, because ChatGPT allows it, is to put custom instructions in about how I’d like to learn and what my learning style is. One of the points I’ve given it is: never give me an answer unless I’ve given you some raw material from myself first, unless I tell you to override it.
Because, honestly, occasionally there’s something routine—some email I don’t need to go into study-and-learn mode for, I just want to do it quickly. That’s my “I’m switching pens,” but I can override it when I want to. My default, though, is making myself think first.
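To make this concrete, a set of custom instructions in the spirit of what Beth describes might read something like the sketch below. The exact wording and the override phrase are hypothetical examples; the text would be pasted into the model's custom-instructions settings as plain text, shown here as a Python constant for readability.

```python
# Hypothetical custom instructions, illustrating the "think first" default.
CUSTOM_INSTRUCTIONS = """
Default to guided learning. Never give me a finished answer unless I have
first shared my own draft, notes, or reasoning as raw material.
If I start a message with "override: quick draft", skip that rule and
answer directly (for routine items like simple emails).
When I am reflecting on something I have read, prefer Socratic questions
over summaries.
"""
```

The point is less the exact wording than the default: the model has to wait for your thinking before it supplies its own.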
Ross Dawson: Very interesting. Not enough people use custom instructions, but I think they also need to have the ability to switch them, so we don’t have one standard custom instruction, but just a whole set of different ways in which we can use different modes. As you say, I think the Taylor Swift pens metaphor is really lovely.
Beth Kanter: Yeah, it is. It’s like, okay, is this some routine email thing? It’s okay to let it give you a first draft, and it’ll save you some time. It’s not like this routine email is something I need to deeply think about. But if I’m trying to master something or learn something, or I want to be able to talk about something intelligently, and I want to use ChatGPT as a learning partner, then I’m going to switch into study mode and be led through a Socratic dialog.
Ross Dawson: So, going back to some of the specific uses for it—you regularly run sessions for nonprofits on fundraising, and that’s quite a specific function and task. AI can be useful in a number of different aspects of that. So let’s just look at nonprofit fundraising. How can these tools—humans plus AI—be useful in that specific function?
Beth Kanter: If we step away from large language models and look at some of the predictive analytics tools that fundraisers use in conjunction with generative AI, those can really help. Instead of just segmenting their audience into two or three target groups and sending the same email pitch to a group that might have 5,000 or 10,000 people, if they have the right data and the right tools, they can really customize the ask or the communication to different donors.
This is the kind of treatment that would normally be reserved for really large donors—the million-dollar donors—that extreme customization and care. But the tools allow fundraisers to treat everyone like a million-dollar donor, with more personalized communication. So that’s a really great way fundraisers can get a lot of value from these tools.
Ross Dawson: So what would you be picking up in the profile? Assuming the LLM generates the email, they would use some kind of information about the individual or the foundation to customize it. What data might you have about the target?
Beth Kanter: You could have information on what appeals they’ve opened in the past, what kinds of specific campaigns they donated to. Depending on the donor level, there might even be specific notes in the database that the AI could draw from. There could be demographic information, giving history, interests—whatever data the organization is collecting.
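For the technically minded, turning those CRM fields into a draft might look like the sketch below. The donor record, the field names, and the `draft_with_llm` call are all hypothetical, and a real workflow would keep a human review step before anything is sent.

```python
# Illustrative donor record mirroring the CRM fields Beth mentions:
# appeals opened, campaigns donated to, giving history, and notes.
donor = {
    "name": "Alex Rivera",  # hypothetical donor
    "campaigns_donated": ["clean water 2024"],
    "appeals_opened": ["spring newsletter"],
    "giving_history": "$250/year for 3 years",
    "notes": "interested in field updates; prefers email",
}

def build_personalization_prompt(donor: dict, campaign: str) -> str:
    """Turn CRM fields into a grounded prompt for a draft appeal."""
    return (
        f"Draft a warm, concise fundraising email to {donor['name']} "
        f"about our {campaign} campaign. They previously gave to "
        f"{', '.join(donor['campaigns_donated'])} "
        f"({donor['giving_history']}). Notes: {donor['notes']}. "
        "Do not invent facts beyond these details."
    )

prompt = build_personalization_prompt(donor, campaign="river restoration")
# draft = draft_with_llm(prompt)  # hypothetical LLM call; a human
# reviews and edits the draft before it goes anywhere near a donor
```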
Ross Dawson: So everything in the CRM. I guess one of the other interesting things, though, is that most people have enough public information about them—particularly foundations—that the LLM can just find that in the public web for decent customization.
Beth Kanter: Yeah, but there’s also, I think, a need to think a little bit about the ethics around that too. If it is publicly accessible, you don’t want to cross the line into using that information to manipulate them into donating. But having a more customized communications approach to the donor makes them feel special.
Ross Dawson: Well, it’s just being relevant. When we’re communicating with anybody on anything, we need to tailor our communication in the best way, based on what we know. But this does—one of the interesting things coming out of this is, how does AI change relationships? Obviously, we know somebody to whatever degree when we’re interacting with them, and we use that human knowledge. Now, as you say, there’s an ethical component there. If LLMs intermediate those relationships, then that’s a very different kind of relationship.
Beth Kanter: Yes, it shouldn’t replace the human connection. It should free up the time so the fundraiser can actually spend more time and have more connection with the donor. Another benefit is that AI can help organizations generate impact reports in almost real time and provide those to donors, instead of waiting and having a lag before they get their report on what their donation has done. I think that could be really powerful.
Ross Dawson: Yeah, absolutely. That’s proactive communication—showing how it is you’ve helped. That used to take a lot of legwork, and that time can now be reinvested in other useful ways.
Beth Kanter: Another example, not so much with smaller donors but maybe mid-size to higher donors: typically, organizations have portfolios of donors they have to manage, and it could be a couple of hundred people. They have to figure out, “Who do I need to touch this week, and what kind of communication do I need to have with them? Is it time to take this person out to lunch? I’m planning a trip to another city and want to meet with as many donors as possible.” I think AI can really help fundraisers organize their time and do some of the scanning and figuring out, so they can spend more face time with the donor.
Ross Dawson: Yes, that’s the key thing—if we can move to a point where we’re able to put, as you say, that very first question you ask: What do you apply that time to? One of the best possible applications is more human-to-human interaction, be it with staff, colleagues, partners, donors, or people you are touching through your work.
Beth Kanter: Yeah, I think the other thing that’s really interesting—and I’m sure you’ve seen this; I know we’ve seen a lot of it in the Humans + AI community—is this whole idea around work slop.
And I think about that in terms of fundraising teams, especially with organizations that don’t have an overall strategy, where maybe somebody on the team is using it for a shortcut to generate a strategy, but it generates slop, and then it creates more of a burden for other people on the team to figure out what this is and rewrite it. That’s another reason to move away from thinking about AI as just a gumball machine where we put a quarter in and out comes a perfect gumball or perfect content.
Ross Dawson: That’s a great point. Work slop—the subject of a recent Harvard Business Review article—is where some people just use AI to generate an output, and then that slows down everything else because it’s not the quality it needs to be. So it’s net time consumption rather than saving. So in an organization, small or large, what can we do to make AI use constructive and useful, as opposed to creating work slop and a net burden?
Beth Kanter: I think this comes down to something that goes beyond an acceptable use policy. It gets down to our group or team norms around collaborating with each other and with AI, and having some rituals. Maybe there’s a ritual around checking information to make sure it’s accurate, because we know these tools hallucinate—confidently state things that aren’t true. Or maybe it’s a group norm that we don’t just generate a draft and send it along; we always think first, collaborate to generate the draft, and then look at it before we send it off to somebody else.
And maybe having a session where we come up with a formal team charter around how we collaborate with this new collaborator.
Ross Dawson: Yes, I very much believe in giving teams the responsibility of working out for themselves how they work together, including with their new AI colleagues.
Beth Kanter: Yeah, and it’s kind of hard because some organizations just jump into the work. With the smaller, more informal ones especially, even when they hear the words “team charter,” they think it’s too constricting or something. But I think this whole idea—what we’re talking about—is a bit of metacognition: thinking about how we work before we do the work.
Ross Dawson: And while we do the work.
Beth Kanter: And while we do the work. Some people feel like it’s an extra step, especially when you’re resource constrained: “Why do I want to think through the work before we’re doing the work? We’ve got to get the work done. Why would we even pause while we’re doing the work to think about where we are with it?” So I think that skill of reflection in action is one of those skills we really need to hone in an AI age.
Ross Dawson: Yes, and an attitude. So to round out, what’s most exciting for you now? We’re almost at the end of 2025, we’ve come a long way, we’ve got some amazing tools, we’ve learned somewhat how to use them. So what excites you for the next phase?
Beth Kanter: I’m still really excited about how to use AI to stay sharp, because I think that’s going to be an ongoing skill. The thing I’m most excited about—and I’m hopeful organizations are going to start to get there in the nonprofit sector—is this whole idea around what are the new emerging skills, the human skills that we’re going to need to really be successful once we scale adoption of these tools. And then, how does that change the structure of our jobs, our team configurations, and the way that we collaborate? Those are the things that I’m really interested in seeing—where we go with this.
Ross Dawson: I absolutely believe that the best organizations are going to look very different from the traditional organizations of the past. If we move to a humans-plus-AI organization, it’s not about every human just using AI; it changes what the organization is. We have to reimagine that, and it’s going to be very different for every organization.
Beth Kanter: Yeah. So I’m really excited about giving a funeral—a joyful funeral—to some of the practices we’re doing now that aren’t working, and then really opening up and redesigning the way we’re working. That’s really exciting to me, because in the nonprofit sector we’ve been so stuck in our busyness and under pressure to get things done. I think the promise of these tools is really to open up and reinvent the way we’re working. To be successful with the tools, you kind of have to do that.
Ross Dawson: Yes, absolutely. So Beth, where can people go to find out more about your work?
Beth Kanter: Well, I’m on LinkedIn, so you can find me on LinkedIn, and also at www.bethkanter.org.
Ross Dawson: Fabulous. Love your work. So good to finally have a conversation after all these years, and I will continue to learn from you as you share things.
Beth Kanter: Yes, and likewise. I’ve really enjoyed being in a community with you and enjoy reading everything you write.
Ross Dawson: Fantastic. Thank you.
The post Beth Kanter on AI to augment nonprofits, Socratic dialogue, AI team charters, and using Taylor Swift’s pens (AC Ep20) appeared first on Humans + AI.

Oct 22, 2025 • 17min
Ross Dawson on Levels of Humans + AI in Organizations (AC Ep19)
“It is our duty to find out how we can best use it, where humans are first and Humans + AI are together.”
–Ross Dawson
About Ross Dawson
Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and founder of the Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of five books, most recently Thriving on Overload.
Website:
Levels of Humans + AI in Organizations
futuristevent.com
LinkedIn Profile:
Ross Dawson
Books
Thriving on Overload
Living Networks 20th Anniversary Edition
Implementing Enterprise 2.0
Developing Knowledge-Based Client Relationships
What you will learn
How organizations can transition from traditional models to Humans Plus AI structures
An introduction to the six-layer Humans Plus AI in Organizations framework
Ways AI augments individual performance, creativity, and well-being
The dynamics and success factors of human-AI hybrid teams
The role of scalable learning communities integrating human and AI learning
How fluid talent models leverage AI for dynamic task matching and skill development
Strategies for evolving enterprises using AI and human insight for continual adaptation
Methods for value co-creation across organizational ecosystems with AI-facilitated collaboration
Real-world examples from companies like Morgan Stanley, Schneider Electric, Siemens, Unilever, Maersk, and MELLODDY
Practical steps to begin and navigate the journey toward Humans Plus AI organizations
Episode Resources
Transcript
Ross Dawson: If you have been hanging out for new episodes of Humans Plus AI, sorry we’ve missed a number of those. We will be back to weekly from now on, and from next week, we’ll be coming back with some fantastic interviews with our guests.
I’ll just give you a quick update and then run through my Levels of Humans Plus AI in Organizations framework. So, just a quick update: the reason for the big gap was that I was in Dubai and Riyadh giving keynotes at the Futurist X Summit in Dubai. It was an absolutely fantastic event organized by Brett King and colleagues, where I gave a keynote on “Humans Plus AI: Infinite Potential,” which seemed to resonate very well and fit with the broader theme of human potential and how we can create a better future.
Then I went to Riyadh, where I gave a keynote at the PMO Forum of the Public Investment Fund, the sovereign wealth fund of Saudi Arabia. There, we were again looking at macro themes of organizational performance, including specifically Humans Plus AI.
When I got back home from those, I had to move house. So it’s been a matter of digging myself out from the travel and the move and getting back on top of things. We won’t have a gap in the podcast again for quite a while. We’ve got a nice compilation of wonderful conversations with guests coming up soon.
So, just a quick state of the nation: Humans Plus AI is a movement, and by listening to this, you are part of that movement. We are all together in believing that AI has the potential to amplify individuals, organizations, society, and humanity. Thus, it is our duty to find out how we can best use that, where humans are first and humans plus AI are together. The community is the center of that.
Go to humansplus.ai/community and you can join the community if you’re not there already. We have some amazing people in there, great discussions, and we are very much in the process of co-creating that future of Humans Plus AI.
We also have a new application coming out soon, Thought Weaver. In fact, it’s actually a redevelopment of a project which we launched at the beginning of last year, and we’re rebuilding that to create Humans Plus AI thinking workflows and provide a tool to do that to the best effect. In the community, people will be testing, using, and helping us create something as useful as possible.
I want to run through my Levels of Humans Plus AI in Organizations framework. This comes from my extensive work with organizations—essentially, those who understand that they need to become Humans Plus AI organizations, not just what they have been. It’s based on moving from existing configurations of humans, technology, and processes to organizations where AI is a complement—not tacked on, but supporting their transformation into very high-potential organizations.
There are six layers in the framework. It starts with augmented individuals, then human-AI hybrid teams, learning communities, fluid talent, evolutionary enterprise, and ecosystem value co-creation. Each of these six layers is where organizations, leaders, and strategists need to understand how they can transform from what they have been to apply the best of Humans Plus AI, and how those layers come together in the organizations of the future.
I’ll run through those levels quickly. The first is augmented individuals, which is where most people are still playing: individuals using AI to augment themselves. Organizations are giving various LLMs to their workforce to help them improve, but this can be done better and to greater effect by being intentional about how AI can augment reasoning, creativity, thinking, work processes, and well-being.
The framework lays out the features and some of the success factors of each of those layers. I won’t go into those in detail here, but I’ll point to some examples. In augmented individuals, a nice example is Morgan Stanley Wealth Management, where they’ve used LLMs to augment their financial advisors, providing analysis around client portfolios and ways to communicate effectively. They rely on humans for strong relationships and understanding of client context and risk profiles, but they’re supported by AI.
The second layer is human-AI hybrid teams. This is really the focus of my work, and I’ll be sharing a lot more on the frameworks, structures, and processes that support effective Humans Plus AI teams. Now we have teams that include not just humans, but also AI agents—not just multi-agent systems, but mixed teams in which both humans and AI are involved. We can design them as effective swarms that learn together and are highly functional, based on trust and understanding of relative roles, dramatically amplifying the potential of people and organizational performance.
One example is Schneider Electric, which has used its teaming approach both on the shop floor of its manufacturing plants—explicitly providing AI complements to humans to assist in their work—and with knowledge workers in designing and building human-AI teams.
The third layer is that of learning communities. I often refer to John Hagel’s mantra of scalable learning, which is the foundation of successful organizations today. This is based on not just individuals learning, but also organizations effectively learning. As John points out, this is not about learning static content, but learning by doing at the edge of change.
AI can provide an extraordinary complement to humans, of course, in classic things such as AI-personalized learning journeys, but also in providing matching for peer learning, where individuals can be matched around the challenges they are facing or have faced, to communicate, share lessons learned, and learn together. We can start to capture these lessons in structures such as ontologies, where AI and humans are both learning together, individually and as a system.
An example is Siemens, which has created a whole array of different learning pathways that include not just curated, personalized AI learning, but also a variety of ways to provide specific insights to individuals on what’s relevant to them.
The fourth layer is fluid talent. For about 15 years, I’ve been talking about fluid organizations and how talent is reapplied, where the most talented people can be applied to whatever the challenge or opportunity is, wherever it is across the organization. This becomes particularly pertinent as we move from jobs to task level—jobs are being decomposed into tasks. Some can be done very well by AI, others less so. When we move to the task level, we have to reconfigure all the work that needs to be done and where humans come in.
Instead of being fixed in a job role, we’re now deploying the talent of the organization wherever and whenever it has the greatest value, using AI to match individuals to the work they are able to do. One aspect is that we can use AI to augment learning capabilities, so all work done by individuals in this fluid talent model is designed not just to use their existing talent, but to develop new skills relevant to the situations ahead.
One example is Unilever’s FLEX program, which has been more classically based on longer-term, around six-week assignments to different parts of the organization. It’s absolutely designed for learning and growth—not just to connect people into different parts of the organization to apply their talents in specific ways, but also to develop new skills that will make them more valuable in their own careers and to the organization.
Moving above that to the higher level of the evolutionary enterprise: AI is moving fast, the competitive landscape is moving fast, and organizations need to be re-architected not just for what is relevant now, but so that they can continually evolve. We need both human and AI insight and perspectives to sense change, reconfigure the structure of the organization, and amplify value.
We need governance that enables that—constraining where relevant what is done and how it is done—but using data and insights from humans and AI together to create an evolutionary loop. One relevant example is Maersk, the Scandinavian logistics company, which was a shipping company and now has really become a data-enabled logistics service platform. It has evolved substantially in its business model, structure, and ways of working, but continues to evolve as it gathers new data and insights from across its operations, using both human and AI insights to develop and evolve how it creates value.
This takes us to the sixth level, which is ecosystem value co-creation. Back in my book “Living Networks,” I described how value is no longer created within an organization, but across an ecosystem of organizations, where there are both human experts who may reside in one organization but whose talents and capabilities can be applied across organizational boundaries, and where AI is architected not just to be inside an organization, but to evolve across an ecosystem—be that suppliers, customers, or peer organizations.
This is illustrated by MELLODDY, a consortium of major pharmaceutical companies, each holding proprietary data around its pharmaceutical research. Through a federated learning structure, models can effectively be trained across the participating companies’ data without exposing any one company’s intellectual property. It’s an example of how we can use data and AI learning structures across a system, where insights learned from the data of multiple organizations can be applied for learning, feedback, and acceleration of drug development across different pharmaceutical companies.
So, to run through those six layers: augmented individuals, where a lot of work is happening now but much more can be done; humans plus AI teams, which I think is really the next phase; learning communities, where we absolutely need to drive learning but need to design that around humans plus AI structures; fluid talent, the reality of what will happen in a world where AI changes the nature of existing human roles; evolutionary enterprise, where we evolve over time; and finally, ecosystem value co-creation.
I’ve been working with a range of interesting organizations to put these into practice. Of course, it’s not about doing this all at once—it’s about finding a starting point as part of an overall roadmap to build not just the future state of the organization, but to build the organization into a Humans Plus AI organization that continues to evolve, be responsive to, and resilient in the face of the extraordinary pace of change we have.
We’ll be exploring these issues, among others, in conversations with guests. We have some amazing people coming up. Thank you for being part of the Humans Plus AI movement and community. Do join our other activities or tap into our resources at humansplus.ai/resources, which includes the framework I’ve just run through—so that’s accessible there.
Thank you, and I look forward to being on the journey of the Humans Plus AI podcast. Back soon next week.
The post Ross Dawson on Levels of Humans + AI in Organizations (AC Ep19) appeared first on Humans + AI.

Sep 10, 2025 • 37min
Iskander Smit on human-AI-things relationships, designing for interruptions and intentions, and streams of consciousness in AI (AC Ep18)
Iskander Smit, founder of the Cities of Things Foundation, dives into the evolving landscape of human-AI relationships in physical environments. He emphasizes the importance of designing friction into AI interactions to enhance engagement and intentionality. The conversation explores collaborative intelligence, where human and AI co-performance boosts creativity. Smit also discusses the shifting role of designers in creating adaptive systems and how deliberate interruptions can foster deeper connections with technology, paving the way for richer, more meaningful interactions.

Sep 3, 2025 • 40min
Brian Kropp on AI adoption, intrinsic incentives, identifying pain points, and organizational redesign (AC Ep17)
“If you’re not moving quickly to get these ideas implemented, your smaller, more agile competitors are.”
–Brian Kropp
About Brian Kropp
Brian Kropp is President of Growth at World 50 Group. Previous roles include Managing Director at Accenture, Chief of HR Research at Gartner and Practice Leader at CEB. His work has been extensively featured in the media, including in Washington Post, NPR, Harvard Business Review, and Quartz.
Website:
world50.com
LinkedIn Profile:
Brian Kropp
X Profile:
Brian Kropp
What you will learn
Driving organizational performance through AI adoption
Understanding executive expectations versus actual results in AI performance impact
Strategies for creating effective AI adoption incentives within organizations
The importance of designing organizations for AI integration with a focus on risk management
Middle management’s evolving role in AI-rich environments
Redefining organizational structures to support AI and humans in tandem
Building a culture that encourages AI experimentation
Empowering leaders to drive AI adoption through innovative practices
Leveraging employees who are native to AI to assist in the learning process for leaders
Learning from case studies and studies of successful AI integration
Episode Resources
Transcript
Ross Dawson: Brian, it’s wonderful to have you on the show.
Brian Kropp: Thanks for having me, Ross. Really appreciate it.
Ross: So you’ve been doing a lot of work for a long time in driving organizational performance. These are perennials, but there’s this little thing called AI that has come along lately and is changing things.
Brian: You might have heard of it somewhere. I’m not sure if you’ve been alive or awake for the last couple of years, but you might have heard about it.
Ross: Yeah, so we were just chatting before, and you were saying the pretty obvious thing: okay, we’ve got AI, but it’s only useful when it starts to be used. We need to drive adoption. These are humans—humans who are using AI and working together to drive the performance of the organization. So I’d love to hear a big-picture frame of what you’re seeing in how we drive the productive use of AI in organizations.
Brian: I think a good starting point is actually to take a step back and understand what expectation senior executive leaders have about the benefit of these sorts of tools.
Now, to be honest, nobody knows exactly what the final benefit is going to be. There is definitely guesswork involved; different people have different expectations and all sorts of viewpoints, so the estimates of what performance improvements we will actually see are a little fuzzy at best.
But when you think about it, at least at kind of orders of magnitude, there are studies that have come out. There’s one recently from Morgan Stanley that talked about their expectation around a 40 to 50% improvement in organizational performance, defined as revenue and margin improvements from the use of AI tools.
So that’s a really big number. It’s a very big number.
When you analyze CEOs’ earnings calls, when they’re pressed on their expectations, those numbers range between 20 and 30%. That’s still a really big number, and it’s across the next couple of years, so there’s a timeframe attached.
What’s fascinating is that when you survey line executives, senior executives—think vice presidents, people three layers down from the CEO—and look at some of the actual results achieved so far, it’s in the single-digit range.
So that’s the challenge out there: the frontier says 50, CEOs say 30, and the actual results are, call it, five. And those numbers, plus or minus a little bit, are in that range.
And so there’s enormous pressure on executives in businesses to actually drive adoption of these tools. Not necessarily to get to 50—I think that’s probably unrealistic, at least in the next kind of planning horizon—but to get from five to 10, from five to 15.
Because there are billions of dollars of investments that companies are making in these tools. There are all sorts of startups that they’re buying. There are all sorts of investments that they’re making.
And if those executives don’t start to show returns, the CFO is going to come knocking on the door and say, “Hey, you wrote a check for $50 million and the business seems kind of the same. What’s up with that?” There’s enormous pressure on them to make that happen.
So if you’re, as an executive, not thinking hard about how you’re actually going to drive the adoption of these tools, you’re certainly not going to get the cost savings that are real potential opportunities from using these tools. And you will absolutely not get the breakthrough performance that your CEO and the investment community are expecting from use of these tools.
So there’s an absolute imperative that executives figure out the adoption problem, because right now the technology, I think, is more than good enough to achieve some of these savings. But at the end of the day, it’s really an adoption, use, application problem.
It’s not a “Can we afford to buy it or not” problem. It’s “We can afford to buy it. It’s available. We have to use it as executives to actually achieve some sort of cost savings or revenue improvements.” And that, I think, is the size of the problem that executives are struggling with right now.
Ross: Yeah. Well, the thing is, the old adage says you can take a horse to water, but you can’t make it drink. And in an organizational context, again, I think the drive to use AI in organizations needs to be intrinsic, as in people need to want to do it. They can see that it’s part of the job. They want to learn. It gives them more possibilities and so on.
And there’s a massive divergence where I think there are some organizations where it truly is now part of the culture. You try things. You tell people you’re using it. You share prompts and so on. That’s probably the minority, but they absolutely exist.
In many organizations, it’s like, “I hate it. I’m not going to tell anybody I’m using it if I am using it.” And top-down, telling people to use it is not going to get there.
Brian: It’s funny, just as a quick side note about not telling people they’re using it. There’s a study that just came out; I think it was from the ChatGPT folks at OpenAI, but I can’t remember exactly. One of the things they were looking at was whether teachers are using generative AI tools to grade papers.
And so the numbers were small, like seven or eight percent or something like that, less than 10%. But it just struck me as really funny that teachers have spent all this time saying, “Don’t use generative AI tools to write your papers,” but some are now starting to use generative AI tools to grade those papers.
So it’s just a little funny, the whole don’t use it, use it, not use it, don’t tell people you’re using it. I think those norms and the use cases will evolve in all sorts of places.
Ross: So you have a bit of a high-level framework, I believe, for how it is we think through driving adoption.
Brian: Yes. There are three major areas that I think are really important.
One, you have to create the right incentive structure. And that, to your point, includes intrinsic incentives: you have to create reasons for people to want to use it. In a lot of cases, there’s some fear over using it—“I don’t know how,” “Am I going to eliminate my own job?” Those sorts of things. So you have to create an incentive structure around using it.
Two, you have to think about how the organization is designed. From a risk-aversion perspective, a checks-and-balances perspective, a who-gets-to-say-no perspective, a willingness-to-experiment perspective—organizations in many cases are designed to minimize risk.
And in order to really drive AI adoption, there is risk that’s involved. It’s a different way of doing things that will disrupt the old workflows that exist in the organization. So you have to really think hard about what you do from an org design perspective to make that happen.
And then three, you could have the right incentives in place, you could have the right structure in place, but leaders need to actually create the environment where adoption occurs. One of the great ironies here: a Gartner study that came out a little while ago showed that, on average, only about 15% of leaders actually feel comfortable using generative AI tools. And that’s the ones who say they feel comfortable, which might even be a little bit of an overestimate.
So how do you work with leaders to actually create an environment where leaders encourage the adoption and are supportive of the adoption, beyond “You should go use some AI tools”?
Those are the three categories that companies and executives need to be thinking about in order to get from what are now relatively low levels of adoption at a lot of organizations to even medium levels of adoption—to close that gap between the 50% expectation and the 5% reality.
Ross: So in particular, let’s go through those one by one. I’m particularly focused on the organizational design piece myself. For leaders, I think we can get to some solutions there. But let’s start with the incentives. I’d love to hear any specifics around what you have seen that works, that doesn’t work, or any suggestions or ideas. How do you then design and give that drive for people to say, “Yes, I want to use it”?
Brian: One of the things that’s really fascinating to me about giving people the drive to use it is that they often don’t know where, when, and how to use it.
So from an incentive structure, what a lot of companies do—what the average company will do—is say, “Well, we’re going to give you a goal to experiment with using generative AI tools, and you’ll just have a goal to try to do something.” But that comes without specificity around where, what, or when.
There’s one organization I’m working with, a manufacturing company, and what they’re doing right now is, rather than saying broadly, “You should be using these tools,” they actually go through a really specific process. They start by asking: what are the business problems that are there? What are the customer pain points in particular?
That’s where they start. They say, “What are the biggest friction points in our organization between one employee and another employee, or the friction points between the customer and the organization?”
So they first design and understand what those pain points are.
The second thing they do is not give goals for people to experiment broadly. They give a goal for an output change that needs to occur. That output change could be faster delivery to customers, shorter response times between employees, a decrease in paperwork, or a decrease in emails—some sort of tangible output that can be measured.
And what’s interesting is they don’t measure the inputs or how hard it is to change that output. And that’s really important, because early on with incentives, we too often think about what is the ROI that we’re getting from this particular change. Right now, we don’t know how easy or hard it’s going to be to make these changes.
But what we know with certainty is if we don’t make a change, there’s no return on that investment. Small investment, big investment—if there’s no return, it’s zero. So first they’re identifying the places where they can get the return, and then later they’ll figure out what is the right way to optimize it.
So from an incentive structure, what they’re incentivizing—and they’re giving cash and real money associated with it, real hard financial outcomes—is: one, have you identified the most important pain points? two, have you conducted experiments that have improved the outcome, even if it is more expensive to do today?
That problem can be solved later. The more important problem is to focus on the places where there’s actually a return, and give incentives for people that can impact the return, not just people that have gotten an ROI measure.
And that is a fundamentally different approach than a finance perspective, because the finance question is, “Well, what’s the ROI?” Wrong question to ask right now. The right question is, “Where is the return?” and set people to get a return, not a return on an investment.
Ross: That sounds very, very promising. So I want to just get specific here. In terms of surfacing those pain points, is that done in a workshop format? Do they get groups of people across the frontline to workshop and create lists of these pain points, which are then listed, and then disseminated, and say, “Okay, now you can go out and choose a pain point where you can come up with some ideas on how to improve that”?
Brian: Yeah. So the way this particular company does it, it’s part of their high-potential program. Like a lot of companies, they’re always trying to figure out where those high potentials can actually have a really big impact across the organization and start to develop an enterprise mindset.
So they’ve run a series of workshops with their high potentials to identify what those pain points are.
Now, the inputs to those workshops include surveys from employees, surveys from customers, operations people who come through and chart out what takes time from one spot to another spot—a variety of inputs. But you want to have a quantitative measure associated with those inputs, because at the end of the day, you have to show that that pain point is less of a pain point, that speed is a little bit faster. So you need to have some way to get to a quantitative measure of it.
Now, what they did is, once they workshopped that and got to a list, their original list was about 40 different spots. What a lot of companies are doing is saying, “Well, here are the pain points, go work on these 40 different things.” And what invariably happens is you get a little bit of work across all of them, but it peters out because there’s not enough momentum and energy behind them.
Once they got to those 40, they narrowed the list down through a voting process amongst their high potentials to about five. And those are the five that they shared with the broader organization.
And then each of those groups of high potentials, about four or five per team, actually leads a tiger team across the company, focused on those pain points and on driving resolution around them.
So I don't believe that the approach of "plant 1,000 flowers and something good will happen" plays out. Every once in a while, sure, but it rarely does, because these significant changes require significant effort. And as soon as you plant 1,000 flowers, you can't put enough effort behind any of them to really work through the difficult, hard parts.
So pick the five spots that are the real pain points for customers, employees, or in your process. Then incent people to get a return on them—not a return on investment on them, but a return on them. And then you can start to reward people for just driving a return around the things that actually will help the organization get better.
Ross: Yeah, it sounds really solid. And to the point about the broader initiative, Johnson & Johnson literally called their AI program "Let 1000 Flowers Bloom," and then consolidated later to 100. But that's Johnson & Johnson. Not everybody's a J&J. Depending on size and capability, 1,000 experiments might not be the right way to start.
Brian: They did rationalize down, yeah. Once they started to get some ideas, they rationalized down to a smaller list.
Ross: I do think they made the comment themselves that they needed to do the broader thing before they could narrow down. They couldn't get to the 100 high-value ones without having done some experimentation, and that is the learning process itself. And it gets people involved.
So I’d love to move on to the organizational design piece. That’s a special favorite topic of mine. So first of all, big picture, what’s the process? Okay, we have an organizational design. AI is going to change it. We’re moving to a humans-plus-AI workforce and workflows. So what’s the process of redesigning that organization? And what are any examples of that?
Brian: One of the first things to realize is AI can be very threatening to significant parts of the organization that are well established. So here are a couple of things that we know with a lot of certainty.
AI will create more cost-effective processes across organizations that will have impacts on decreasing headcount, in some cases, for sure. There are other companies—your competitors—that are coming up with new ideas that will lower costs of providing the same services that you provide.
However, the way that organizations are designed, in many ways, is to protect the parts of the business that are already successful, driving revenue, driving margin. And those parts of the business tend to be so big that they dominate small new parts of the business.
You find yourself in these situations where it's like, yes, AI is the future, but today it's big business unit A. Five years from now, that's not going to be the case. But the power sits in big business unit A, and the resources get sucked up there. The innovation gets shut down in other places because it's a threat to the big business units.
And I get that, because you still have to hit a quarterly number. You can’t just put the business on pause for a couple of years while you figure out the new, innovative way of doing things.
So the challenge that organizations have, from an org design perspective, I believe, or one of them at least, is: how do you continue to get revenue and margin from the businesses that are the cash cows of the business, but not have them squash the future part of the business, which is the AI components?
If you slowly layer in new AI technologies, you slowly get improvements. One of the interesting things in a study that came out a little bit ago was the speed at which companies can operate. Large companies, on average, take nine months to go from idea to implementation. Smaller companies, it takes three months. My guess is in even smaller companies, it probably takes 30 days to go from idea to implementation of an AI pilot.
Ross: This was the MIT Nanda study.
Brian: Correct, yep. And people had a big reaction to the finding that 95% of companies haven't seen real results from what they're doing. There are lots of questions within that.
But the speed one, the clock speed one, is really interesting to me. Because if you’re not moving quickly to get these ideas implemented, your smaller, more agile competitors are. If you’re a big, large company, and it takes you nine months to go from idea to implementation, and your small, more nimble competitor is doing it in a month or two, that gives them seven, eight months of lead time to capture market share from you, because you’re big and slow.
So from an org design perspective, what I believe is the most effective thing—and we're seeing companies do this; General Motors launching their electric vehicles division is an example of how it plays out at scale—is creating small, separate business units whose job is to attack the core business and create the products and services designed to do exactly that. You almost have to do it that way. You almost have to create an adversarial organizational design, because if you're not doing it to yourself, someone else is doing it to you.
Ross: That's more a business model structure—a classic example of innovation, a separate unit to cannibalize yourself. But that doesn't change the design of the existing organization. It creates a new unit, which is small, which cannot necessarily scale as fast, and which may have a very innovative organizational structure of its own—but the existing organization's design stays the same.
Brian: Yeah. I think the design of existing organizations is going to change most along two dimensions, and a lot of it comes down to the middle management layer of the organization.
There are two major reasons why I think this is going to happen.
One: organizations will still have to do tasks, and some of those tasks will be done by humans, some of those tasks will be done by AI. But at the end of the day, tasks will have to get done. There are activities that will have to get done at the bottom layer of the organization, or the front layer of the organization, depending on how you think about it.
But those employees that are doing those tasks will need less managerial support. Right now, when you’ve got a question about how to do things, more often than not, you go to your manager to say, “How do I do this particular thing?” The reality is, AI tools, in some cases, are already better than your manager at providing that information—on how to do it, advice on what to do, how to engage a customer, whatever it might be. So employees will go to their managers less often.
So one, the manager roles will change. There will be fewer of them, and they’re going to be focusing more on relationship building, more on social-work-type behaviors—how to get people to work together—not helping people do their tasks. So I think one major change to what organizations look like is fewer managers spread across more people.
The second thing that I think will happen: when you look at what a lot of middle management does, it is aggregation of information and then sharing information upwards. AI tools will manage that aggregation and share it up faster than middle managers will.
So what will happen, I believe, is that organizations will also get flatter overall.
There’s been a lot of focus and attention on this question of entry-level jobs and AI decreasing the number of entry-level jobs that organizations need. I think that’s true, and we’re already seeing it in a lot of different cases.
But from an organizational design perspective, I think organizations will get flatter and broader in terms of how they work and operate because of these two factors: one, employees not needing their managers as much, so you don’t need as many managers; and two, that critical role of aggregation of information and then dissemination of information becomes much less important in an AI-based world.
So if you had frontline employees reporting to managers, managers reporting to managers, managers reporting to VPs, VPs reporting to CEOs—at least one of those layers in the middle can go away.
Ross: We've been hearing similar predictions for quite a while, and the logic is there. So can you ground this for us with any examples or instances?
Brian: We’re seeing the entry-level roles eliminated in all sorts of different places right now. We don’t have organizations that have actually gone through a significant reduction in staff in that middle, but that is the next big phase.
So, for example, when you look at a manager, it’s the next logical step. And if you just work through it, you say, well, what are the things that managers do? They provide…
Ross: Are there any examples of this?
Brian: Where they’ve started to eliminate those roles already? Not that I’ve seen. There are organizations that are talking about doing it, and they’re trying to figure out what that looks like, because that is a fundamental change that will be AI-driven.
There are lots of cases where companies use cost-efficiency drives to eliminate layers of middle management, but they're only now starting to realize that this is an opportunity to make that organization design change. This, I think, is what will happen; it's not what organizations are doing right now, but they're actively debating how to do it.
Ross: Yeah. I mean, that's one of the things where the raw logic you've laid out seems plausible. But part of it is the realities on the ground—some people will be very happy to have less contact with their manager.
A lot of the role, as you say, is informational. But there are other coaching, emotional, or engagement roles where, depending on the culture and the situation, those needs may surface or recede.
We don't know. We don't know until we can point to examples, though there are some which I think support your thesis. One is an old one but still relevant: Jensen Huang has, I think, something like 40 direct reports. He's been doing that for a long time, and that's a particular relationship style.
But I do recall seeing something to the effect that Intel is taking out a whole layer of its management. Intel isn't in a similar situation—same industry, but an extremely different position—yet it points to what you're describing.
Brian: I can give you an example of how the managerial role is already starting to change. There are several startups, early-stage companies, whose product offering has been managerial training. You come, you do e-learning modules, you do other sorts of training for managers to improve their ability to provide feedback, and so on.
The first step they’re engaging in is creating a generative AI tool, just a chatbot, that a manager can go to and say, “Hey, I’m struggling with this employee. What do I do around this thing versus that thing?”
So the first frontier we're seeing is managers not talking to their HR business partner to get advice on how to handle employees, but talking to a chatbot built on all the learning modules that already existed. Companies are layering that on top to decrease the number of HR business partners they need.
But that raises the second question: if an employee is struggling with a performance issue, why should they have to go to their manager, and then have their manager go to a tool?
So the next evolution of these tools is the employee talking directly to a chatbot that is built on top of all the guides, all of the training material, all of the information that was created to train that employee the first time. We’re starting to see companies in the VC space build those sorts of tools that employees would then use.
That’s one part of it. Here’s another example of where we’re seeing the managerial role get eliminated. One of the most important parts historically of the managerial role is identifying who the highest performers are.
There are a couple of startup companies creating new tools to layer on top of the existing flow of information across the organization, to start identifying—based on conversations and interactions among employees, whether video, email, Slack, or whatever channels—who is actually making the bigger contributions.
And when they’ve gone back and looked at it, one of the things they found is that about two-thirds of the employees who get the highest performance review scores are actually not making the highest contributions to the organization. So it’s giving a completely different way to assess and manage performance.
Ross: Just to round out, because we want to get to the third point—and I guess just generally reflecting on what you're saying. AI feeds on data, and we have far more data. So there's a whole layer of issues around what data we can gather about employee activities, behaviors, and so on that is useful and flows into that.
But despite those constraints, there is data which can provide multiple useful perspectives on performance, amongst other things, and feedback that can build on it. I want to round out with your third point around leaders—getting leaders to use the tools to the point where they are, A, comfortable, B, competent, and, C, effective leaders in a world that is more and more AI-centric.
Brian: Yeah. Here's part of the reality. If you look at a typical company, most leaders are well into their 40s or later. They have grown up with a particular set of tools and systems to run their business, and for them this is like the move to the internet age all over again. They did not grow up in this environment.
And as I mentioned earlier, most of them do not feel comfortable in this environment, and their advice is just, "Go and experiment with different things." It's the exact same advice you heard if you roll the clock back to the start of the internet in the workplace, or the start of bring-your-own-device at work: experiment with some stuff and get comfortable with it.
And in each of those previous two situations—when should we give people access to the internet at work, should we allow people to bring their own devices—most companies wasted a year, two years, or three years because their leaders had no idea what to do. And the net result of most of that was people using these tools to plan their vacations or to do slightly better Google searches.
This is what’s going to happen now if we don’t change the behavior and approaches of our leaders. So in order to actually get the organization to work, in order to get the right incentives in place, you need to have leaders that are willing to push much harder on the AI front and develop their own skills and capability and knowledge around that. There’s a lot of…
Ross: Any specifics again, just any overall practices or how to actually make this happen?
Brian: Yeah. So there's a series of maturity levels that we're seeing out there in organizations.
There’s a ton of online learning that leaders can take to get them familiar with what AI is capable of. So that’s kind of maturity level one: just build that sort of awareness, create the right content material that they can access to learn how to do things.
Maturity level two is changing who is advising them. Most leaders go through their careers being advised by people more experienced than they are, or by their peers. So what we're seeing organizations do is create shadow cabinets of younger employees who have grown up in the AI age, and leaders are required to spend time with them.
So each leader is given a shadow cabinet of four or five employees who are really familiar with AI, and that leader then has to report back to those junior employees on what they're actually doing from an AI perspective. That's a forcing mechanism to make sure something happens, with input from people who are more knowledgeable about what's going on.
So that’s kind of a second level of maturity that we’re starting to see play out.
For the leaders that are truly making progress here, what we're seeing is that they're creating environments where failure is celebrated. When you think back to the early stages of IT and a lot of the early IT innovation, it was fraught with failure. More things don't work than do work.
So they are creating environments and situations where they celebrate failure, to reduce the risk employees feel. They're creating environments where "I failed, but we've learned" is really valuable.
Then the fourth idea, and this is what IDEO is doing. IDEO is a design consultancy, and they do something really, really interesting when it comes to leaders. What they’ve come to realize is that leaders, by definition, are people that have been incredibly successful throughout their career. Leaders also, by definition, hate to ask for help, because many of them view it as a weakness. Leaders also, by definition, like to celebrate the great stuff that they’ve done.
So what they do—about every six months or so—is have every leader film and record a short video. And that video is: here are the cool things that I did using AI over the last six months, and here are the next things I'm working on, where I'm thinking about using AI for the next six months. And every leader has to do that.
And what that actually achieves—when you have to record that video and then show that to everybody—is that if you haven’t done anything in the last six months, you kind of look like a loser leader. So it puts pressure on that leader to actually have done something that’s interesting, that they have to put in front of the broader organization.
And with the "what I'm going to work on next," they're not actually asking for help—so it really works with the leader psyche—but they're saying, "Here are the next things I'm going to do that are awesome." And that gives other leaders a chance to say, "Hey, I'm working on something similar," or, "Oh, I figured that out last time."
So it takes away a lot of the fear that’s associated with leaders, where they have to fake that they know what they’re doing or lie about what’s working. But it forces them to do something, because they have to tell everyone else what they did, and it creates the opportunity for them to get help without actually asking for help.
That is a really cool way that organizations are getting leaders to embrace AI, because none of them want to stand up in front of the company and be like, “Yeah, I haven’t really been doing anything on this whole AI issue for the last six months.”
Ross: That’s great. That’s a really nice example. It’s nice and tangible, and it doesn’t suit every company’s culture, but I think it can definitely work.
Brian: Yeah, the takeaway from it is put pressure on leaders to show publicly that they’re doing something. They care about their reputation, and whatever way makes the most sense for you as an organization, put the pressure on the leader to show that they’re doing something.
Ross: Yeah, absolutely. So that’s a nice round out. Thanks so much for your time and your insight, Brian. It’s been great to get the perspectives on building AI adoption.
Brian: Great. Thanks for having me, Ross. And there's an analogy from car racing that I like to use for this time period: people don't pass each other on the straightaways; they pass each other in the turns. And this is a turn that's going on, and it creates the moment for organizations to pass each other in that turn.
And then one other racing analogy I think is really important here: you accelerate going into a turn. When you’re racing, you don’t decelerate. Too many companies are decelerating. They have to accelerate into that turn to pass their competitors in the turn. And whoever does that well will be the companies that win across the next 3, 5, 7 years until the next big thing happens.
Ross: And it’s going to be fun to watch it.
Brian: For sure, for sure.