

Humans + AI
Ross Dawson
Exploring and unlocking the potential of AI for individuals, organizations, and humanity

Sep 3, 2025 • 40min
Brian Kropp on AI adoption, intrinsic incentives, identifying pain points, and organizational redesign (AC Ep17)
“If you’re not moving quickly to get these ideas implemented, your smaller, more agile competitors are.”
–Brian Kropp
About Brian Kropp
Brian Kropp is President of Growth at World 50 Group. Previous roles include Managing Director at Accenture, Chief of HR Research at Gartner, and Practice Leader at CEB. His work has been extensively featured in the media, including The Washington Post, NPR, Harvard Business Review, and Quartz.
Website:
world50.com
LinkedIn Profile:
Brian Kropp
X Profile:
Brian Kropp
What you will learn
Driving organizational performance through AI adoption
Understanding executive expectations versus actual results in AI performance impact
Strategies for creating effective AI adoption incentives within organizations
The importance of designing organizations for AI integration with a focus on risk management
Middle management’s evolving role in AI-rich environments
Redefining organizational structures to support AI and humans in tandem
Building a culture that encourages AI experimentation
Empowering leaders to drive AI adoption through innovative practices
Leveraging employees who are native to AI to assist in the learning process for leaders
Learning from case studies of successful AI integration
Episode Resources
Transcript
Ross Dawson: Brian, it’s wonderful to have you on the show.
Brian Kropp: Thanks for having me, Ross. Really appreciate it.
Ross: So you’ve been doing a lot of work for a long time in driving organizational performance. These are perennials, but there’s this little thing called AI, which has come along lately and which is changing things.
Brian: You might have heard of it somewhere. I’m not sure if you’ve been alive or awake for the last couple of years, but you might have heard about it.
Ross: Yeah, so we were just chatting before, and you were saying the pretty obvious thing: okay, we’ve got AI; well, it’s only useful when it starts to be used. We need to drive the adoption. These are humans, humans who are using AI and working together to drive the performance of the organization. So I’d love to hear a big-picture frame of what you’re seeing in how we drive the useful adoption of AI in organizations.
Brian: I think a good starting point is actually to try to take a step back and understand what is the expectation that executive senior leaders have about the benefit of these sorts of tools.
Now, to be honest, nobody knows exactly what the final benefit is going to be. There is definitely guesswork around. There are different people with different expectations and all sorts of different viewpoints on them, so the exact numbers are a little bit fuzzy at best in terms of the estimates of what performance improvements we will actually see.
But when you think about it, at least at kind of orders of magnitude, there are studies that have come out. There’s one recently from Morgan Stanley that talked about their expectation around a 40 to 50% improvement in organizational performance, defined as revenue and margin improvements from the use of AI tools.
So that’s a really big number. It’s a very big number.
When you do analysis of earnings calls from CEOs and when they’re pressed on what their expectation is, those numbers range between 20 and 30%. That’s still a really big number, and this is across the next couple of years, so it’s a timeframe.
What’s fascinating is that when you survey line executives, senior executives—so think like vice president, people three layers down from the CEO—and you look at some of the actual results that have been achieved so far, it’s in that single digits range.
So the challenge that’s out there: the frontier says 50, CEOs say 30, and the actualized number is, call it, five. And those numbers, plus or minus a little bit, are in that range.
And so there’s enormous pressure on executives in businesses to actually drive adoption of these tools. Not necessarily to get to 50—I think that’s probably unrealistic, at least in the next kind of planning horizon—but to get from five to 10, from five to 15.
Because there are billions of dollars of investments that companies are making in these tools. There are all sorts of startups that they’re buying. There are all sorts of investments that they’re making.
And if those executives don’t start to show returns, the CFO is going to come knocking on the door and say, “Hey, you wrote a check for $50 million and the business seems kind of the same. What’s up with that?” There’s enormous pressure on them to make that happen.
So if you’re, as an executive, not thinking hard about how you’re actually going to drive the adoption of these tools, you’re certainly not going to get the cost savings that are real potential opportunities from using these tools. And you will absolutely not get the breakthrough performance that your CEO and the investment community are expecting from use of these tools.
So there’s an absolute imperative that executives figure out the adoption problem, because right now the technology, I think, is more than good enough to achieve some of these savings. But at the end of the day, it’s really an adoption, use, application problem.
It’s not a “Can we afford to buy it or not” problem. It’s “We can afford to buy it. It’s available. We have to use it as executives to actually achieve some sort of cost savings or revenue improvements.” And that, I think, is the size of the problem that executives are struggling with right now.
Ross: Yeah. Well, the thing is, the old adage says you can take a horse to water, but you can’t make it drink. And in an organizational context, again, I think the drive to use AI in organizations needs to be intrinsic, as in people need to want to do it. They can see that it’s part of the job. They want to learn. It gives them more possibilities and so on.
And there’s a massive divergence where I think there are some organizations where it truly is now part of the culture. You try things. You tell people you’re using it. You share prompts and so on. That’s probably the minority, but they absolutely exist.
In many organizations, it’s like, “I hate it. I’m not going to tell anybody I’m using it if I am using it.” And top-down, telling people to use it is not going to get there.
Brian: It’s funny, just as a quick side note about not telling people they’re using it. There’s a study that just came out. I think it was from ChatGPT, I can’t remember those folks. But one of the things that they were looking at was, are teachers using generative AI tools to grade papers?
And so the numbers were small, like seven or eight percent or something like that, less than 10%. But it just struck me as really funny that teachers have spent all this time saying, “Don’t use generative AI tools to write your papers,” but some are now starting to use generative AI tools to grade those papers.
So it’s just a little funny, the whole don’t use it, use it, not use it, don’t tell people you’re using it. I think those norms and the use cases will evolve in all sorts of places.
Ross: So you have a bit of a high-level framework, I believe, for how it is we think through driving adoption.
Brian: Yes. There are three major areas that I think are really important.
One, you have to create the right incentive structure. And that, to your point, is both intrinsic incentives. You have to create reasons for people to use it. In a lot of cases, there’s some fear over using it—“I don’t know how,” “Am I going to eliminate my own job?” Those sorts of things. So you have to create an incentive structure to use it.
Two, you have to think about how the organization is designed. Organizations from a risk aversion perspective, from a checks-and-balances perspective, from who gets to say no to stuff, from a willingness-to-experiment perspective, are designed to minimize risk in many cases.
And in order to really drive AI adoption, there is risk that’s involved. It’s a different way of doing things that will disrupt the old workflows that exist in the organization. So you have to really think hard about what you do from an org design perspective to make that happen.
And then three, you could have the right incentives in place, you could have the right structure in place, but leaders need to actually create the environment where adoption occurs. One of the great ironies here is that only a minority of leaders are comfortable with these tools: a Gartner study that came out just a little bit ago showed that, on average, only about 15% of leaders actually feel comfortable using generative AI tools. And that’s the ones that say they feel comfortable doing it, which might even be a little bit of an overestimate.
So how do you work with leaders to actually create an environment where leaders encourage the adoption and are supportive of the adoption, beyond “You should go use some AI tools”?
Those are the three categories that companies and executives need to be thinking about in order to get from what are now relatively low levels of adoption at a lot of organizations to even medium levels of adoption, to close that gap between the 50% expectation and the 5% reality.
Ross: So in particular, let’s go through those one by one. I’m particularly focused on the organizational design piece myself. For leaders, I think we can get to some solutions there. But let’s start with the incentives. I’d love to hear any specifics around what you have seen that works, that doesn’t work, or any suggestions or ideas. How do you design incentives that give people the drive to say, “Yes, I want to use it”?
Brian: One of the things that’s really fascinating to me about getting people the drive to use it is that people often don’t know where, when, and how to use it.
So from an incentive structure, what a lot of companies do—what the average company will do—is say, “Well, we’re going to give you a goal to experiment with using generative AI tools, and you’ll just have a goal to try to do something.” But that comes without specificity around where, what, or when.
There’s one organization I’m working with, a manufacturing company, and what they’re doing right now is, rather than saying broadly, “You should be using these tools,” they actually go through a really specific process. They start by asking: what are the business problems that are there? What are the customer pain points in particular?
That’s where they start. They say, “What are the biggest friction points in our organization between one employee and another employee, or the friction points between the customer and the organization?”
So they first design and understand what those pain points are.
The second thing they actually do is not give goals for people to experiment more broadly. They give a goal for an output change that needs to occur. That output change could be faster time to customers, response time between employees, decrease in paperwork, or decrease in emails—some sort of tangible output that is measured within that.
And what’s interesting is they don’t measure the inputs or how hard it is to change that output. And that’s really important, because early on with incentives, we too often think about what is the ROI that we’re getting from this particular change. Right now, we don’t know how easy or hard it’s going to be to make these changes.
But what we know with certainty is if we don’t make a change, there’s no return on that investment. Small investment, big investment—if there’s no return, it’s zero. So first they’re identifying the places where they can get the return, and then later they’ll figure out what is the right way to optimize it.
So from an incentive structure, what they’re incentivizing—and they’re giving cash and real money associated with it, real hard financial outcomes—is: one, have you identified the most important pain points? two, have you conducted experiments that have improved the outcome, even if it is more expensive to do today?
That problem can be solved later. The more important problem is to focus on the places where there’s actually a return, and give incentives for people that can impact the return, not just people that have gotten an ROI measure.
And that is a fundamentally different approach than a finance perspective, because the finance question is, “Well, what’s the ROI?” Wrong question to ask right now. The right question is, “Where is the return?” and set people to get a return, not a return on an investment.
Ross: That sounds very, very promising. So I want to just get specific here. In terms of surfacing those pain points, is that done in a workshop format? Do they get groups of people across the frontline to workshop and create lists of these pain points, which are then listed, and then disseminated, and say, “Okay, now you can go out and choose a pain point where you can come up with some ideas on how to improve that”?
Brian: Yeah. So the way that this particular company does it, it’s part of their high-potential program. A lot of companies are working through the same question: where can those high potentials actually have a really big impact across the organization and start to develop an enterprise mindset?
So they’ve run a series of workshops with their high potentials to identify what those pain points are.
Now, the inputs to those workshops include surveys from employees, surveys from customers, operations people who come through and chart out what takes time from one spot to another spot—a variety of inputs. But you want to have a quantitative measure associated with those inputs, because at the end of the day, you have to show that that pain point is less of a pain point, that speed is a little bit faster. So you need to have some way to get to a quantitative measure of it.
Now, what they did is, once they workshopped that and got to a list, their original list was about 40 different spots. What a lot of companies are doing is saying, “Well, here are the pain points, go work on these 40 different things.” And what invariably happens is you get a little bit of work across all of them, but it peters out because there’s not enough momentum and energy behind them.
Once they got to those 40, they actually narrowed it down through a voting process amongst their high potentials to about five that are there. And those are the five that they shared with the broader organization.
And then what they’ve done is each of those groups of high potentials, about four or five per team, actually lead tiger teams across the company to focus on driving those pain points and trying to drive resolution around them.
So I don’t believe that the approach of “plant 1000 flowers and something good will happen” plays out. Every once in a while, sure, but it rarely plays out because these significant changes require significant effort. And as soon as you plant 1000 flowers, you can’t put enough effort against any of them to really work through the difficult, hard parts that are associated with it.
So pick the five spots that are the real pain points for customers, employees, or in your process. Then incent people to get a return on them—not a return on investment on them, but a return on them. And then you can start to reward people for just driving a return around the things that actually will help the organization get better.
Ross: Yeah, it sounds really solid. And I guess to the point about the more broad initiative, Johnson & Johnson literally called their AI program “Let 1000 Flowers Bloom.” And then they consolidated later to 100. But that’s Johnson & Johnson. Not everybody’s a J&J. Depending on size and capability, 1000 issues might not be the right way to start.
Brian: They did rationalize down, yeah. Once they started to get some ideas, they rationalized down to a smaller list.
Ross: I do think they made the comment themselves that they needed to do the broader thing first. They couldn’t get to the 100 high-value ones without having done some experimentation, and that is the learning process itself. And it gets people involved.
So I’d love to move on to the organizational design piece. That’s a special favorite topic of mine. So first of all, big picture, what’s the process? Okay, we have an organizational design. AI is going to change it. We’re moving to a humans-plus-AI workforce and workflows. So what’s the process of redesigning that organization? And what are any examples of that?
Brian: One of the first things to realize is AI can be very threatening to significant parts of the organization that are well established. So here are a couple of things that we know, even amid a lot of uncertainty.
AI will create more cost-effective processes across organizations that will have impacts on decreasing headcount, in some cases, for sure. There are other companies—your competitors—that are coming up with new ideas that will lower costs of providing the same services that you provide.
However, the way that organizations are designed, in many ways, is to protect the parts of the business that are already successful, driving revenue, driving margin. And those parts of the business tend to be so big that they dominate small new parts of the business.
Because you find yourself in these situations where it’s like, yes, AI is the future, but today it’s big business unit A. Now, five years from now, that’s not going to be the case. But the power sits in big business unit A, and the resources get sucked up there. The innovation gets shut down in other places because it’s a threat to the big business units that are there.
And I get that, because you still have to hit a quarterly number. You can’t just put the business on pause for a couple of years while you figure out the new, innovative way of doing things.
So the challenge that organizations have, from an org design perspective, I believe, or one of them at least, is: how do you continue to get revenue and margin from the businesses that are the cash cows of the business, but not have them squash the future part of the business, which is the AI components?
If you slowly layer in new AI technologies, you slowly get improvements. One of the interesting things in a study that came out a little bit ago was the speed at which companies can operate. Large companies, on average, take nine months to go from idea to implementation. Smaller companies, it takes three months. My guess is in even smaller companies, it probably takes 30 days to go from idea to implementation of an AI pilot.
Ross: This was the MIT Nanda study.
Brian: Correct, yep. And people had a big reaction to the finding that 95% of companies haven’t seen real results from what they’re doing. There are lots of questions within that.
But the speed one, the clock speed one, is really interesting to me. Because if you’re not moving quickly to get these ideas implemented, your smaller, more agile competitors are. If you’re a big, large company, and it takes you nine months to go from idea to implementation, and your small, more nimble competitor is doing it in a month or two, that gives them seven, eight months of lead time to capture market share from you, because you’re big and slow.
So from an org design perspective, what I believe is the most effective thing—and we’re seeing companies do this—when General Motors launched their electric vehicles division, as an example of how this played out at scale.
What companies are doing is creating small, separate business units whose job it is to attack their own business unit and create the products and services that are designed to attack their own business unit. You almost have to do it that way. You almost have to create an adversarial organization design. Because if you’re not doing it to yourself, someone else is doing it to you.
Ross: That’s more a business model structure, a classic example of innovation: a separate unit to cannibalize yourself. But that doesn’t change the design of the existing organization. It creates a new unit, which is small and which cannot necessarily scale as fast. It may have a very innovative organizational structure to be able to do that, but the design of the existing organization stays the same.
Brian: Yeah. I think the way that the design of existing organizations is going to change the most is on two dimensions. It comes down a lot to the middle management part of the organization and the organization design.
There are two major reasons why I think this is going to happen.
One: organizations will still have to do tasks, and some of those tasks will be done by humans, some of those tasks will be done by AI. But at the end of the day, tasks will have to get done. There are activities that will have to get done at the bottom layer of the organization, or the front layer of the organization, depending on how you think about it.
But those employees that are doing those tasks will need less managerial support. Right now, when you’ve got a question about how to do things, more often than not, you go to your manager to say, “How do I do this particular thing?” The reality is, AI tools, in some cases, are already better than your manager at providing that information—on how to do it, advice on what to do, how to engage a customer, whatever it might be. So employees will go to their managers less often.
So one, the manager roles will change. There will be fewer of them, and they’re going to be focusing more on relationship building, more on social-work-type behaviors—how to get people to work together—not helping people do their tasks. So I think one major change to what organizations look like is fewer managers spread across more people.
The second thing that I think will happen: when you look at what a lot of middle management does, it is aggregation of information and then sharing information upwards. AI tools will manage that aggregation and share it up faster than middle managers will.
So what will happen, I believe, is that organizations will also get flatter overall.
There’s been a lot of focus and attention on this question of entry-level jobs and AI decreasing the number of entry-level jobs that organizations need. I think that’s true, and we’re already seeing it in a lot of different cases.
But from an organizational design perspective, I think organizations will get flatter and broader in terms of how they work and operate because of these two factors: one, employees not needing their managers as much, so you don’t need as many managers; and two, that critical role of aggregation of information and then dissemination of information becomes much less important in an AI-based world.
So if you had frontline employees reporting to managers, managers reporting to managers, managers reporting to VPs, VPs reporting to CEOs—at least one of those layers in the middle can go away.
Ross: We’ve heard predictions of similar trends for quite a while, and the logic is there. So can you ground us with any examples or instances?
Brian: We’re seeing the entry-level roles eliminated in all sorts of different places right now. We don’t have organizations that have actually gone through a significant reduction in staff in that middle, but that is the next big phase.
So, for example, when you look at a manager, it’s the next logical step. And if you just work through it, you say, well, what are the things that managers do? They provide…
Ross: Are there any examples of this?
Brian: Where they’ve started to eliminate those roles already? Not that I’ve seen. There are organizations that are talking about doing it, and they’re trying to figure out what that looks like, because that is a fundamental change that will be AI-driven.
There are lots of times when companies use cost-efficiency drives to eliminate layers of middle management, but they’re only now starting to realize that this is an opportunity to make that organization design change. This, I think, is what will happen, even though it’s not what organizations are doing right now, and they’re actively debating how to do it.
Ross: Yeah. I mean, that’s one of the things where the raw logic you’ve laid out seems plausible. But part of it is the realities of it, as in some people will be very happy to have less contact with their manager.
A lot of it, as you say, is an informational role. But there are other coaching, emotional, or engagement roles where, depending on the culture and the situation, those needs may surface and be met less well.
We don’t know until we can point to examples, though there are some which I think support your thesis. One is an old one but relevant: Jensen Huang has, I think, something like 40 direct reports. He’s been doing that for a long time, and that’s a particular relationship style.
But I do recall seeing something to the effect that Intel is taking out a whole layer of its management. That’s not in a similar situation—same industry, but extremely different situation—yet it points to what you’re describing.
Brian: I can give you an example of how the managerial role is already starting to change. There are several startups, early-stage companies, whose product offering has been managerial training. You come, you do e-learning modules, you do other sorts of training for managers to improve their ability to provide feedback, and so on.
The first step they’re engaging in is creating a generative AI tool, just a chatbot, that a manager can go to and say, “Hey, I’m struggling with this employee. What do I do around this thing versus that thing?”
So where we’re seeing the first frontier is managers not talking to their HR business partner to get advice on how to handle employees, but managers starting to talk to a chatbot that’s based upon all the learning modules that already existed. They’re putting that on top to decrease the number of HR business partners they need.
But it raises a second question: if an employee is struggling with a performance issue, why should they have to go to their manager, and then have their manager go to a tool?
So the next evolution of these tools is the employee talking directly to a chatbot that is built on top of all the guides, all of the training material, all of the information that was created to train that employee the first time. We’re starting to see companies in the VC space build those sorts of tools that employees would then use.
That’s one part of it. Here’s another example of where we’re seeing the managerial role get eliminated. One of the most important parts historically of the managerial role is identifying who the highest performers are.
There are a couple of startup companies creating new tools to layer on top of the existing flow of information across the organization, to start identifying—based on conversations and interactions among employees, whether video, email, Slack, or whatever channels—who is actually making the bigger contributions.
And when they’ve gone back and looked at it, one of the things they found is that about two-thirds of the employees who get the highest performance review scores are actually not making the highest contributions to the organization. So it’s giving a completely different way to assess and manage performance.
Ross: Just to round out, because we want to get to the third point, and just generally reflecting on what you’re saying: AI feeds on data, and we have far more data. So there’s a whole layer of issues around what data we can gather on employee activities, behaviors, and so on, which is useful and flows into that.
But despite those constraints, there is data that can provide multiple useful perspectives on performance, among other things, and feedback loops to build on that. But I want to round out with your third point around leaders: getting leaders to use the tools to the point where they are, A, comfortable, B, competent, and C, effective leaders in a world which is more and more AI-centric.
Brian: Yeah. Here’s part of the reality. If you look at a typical company, most leaders are well into their 40s or later. They have grown up with a set of tools and systems to run their business, and those are the tools they grew up with. This is like the move to the internet age: they did not grow up in this environment.
And as I mentioned earlier, most of them do not feel comfortable in this environment, and their advice is just “go and experiment with different things.” This is the exact same advice from when you roll the clock back to the start of the internet in the workplace, or the start of bring-your-own-device to work. It was “experiment with some stuff and get comfortable with it.”
And in each of those previous two situations—when should we give people access to the internet at work, should we allow people to bring their own devices—most companies wasted a year, two years, or three years because their leaders had no idea what to do. And the net result was that people used these tools to plan their vacations or to do slightly better Google searches.
This is what’s going to happen now if we don’t change the behavior and approaches of our leaders. So in order to actually get the organization to work, in order to get the right incentives in place, you need to have leaders that are willing to push much harder on the AI front and develop their own skills and capability and knowledge around that. There’s a lot of…
Ross: Any specifics again, just any overall practices or how to actually make this happen?
Brian: Yeah. So there’s a series of maturity levels that we’re seeing out there in organizations.
There’s a ton of online learning that leaders can take to get them familiar with what AI is capable of. So that’s kind of maturity level one: just build that sort of awareness, create the right content material that they can access to learn how to do things.
Maturity level two is changing who is advising them. Most leaders go through a process where the people advising them are people more experienced than them, or their peers. So what we’re seeing organizations do is start to create shadow cabinets of younger employees who have grown up in the AI age, and leaders are forced to spend time with them.
So each leader is given a shadow cabinet of four or five employees that are actually really familiar with AI, and that leader actually then has to report back to those junior employees about what they’re actually doing from an AI perspective. That’s a forcing mechanism to make sure that something happens with people that are more knowledgeable about what’s going on.
So that’s kind of a second level of maturity that we’re starting to see play out.
For the leaders that are truly making progress here, what we’re actually seeing is that they’re creating environments where failure is celebrated. When you think back to a lot of the early IT stages, and a lot of the early IT innovation, it’s fraught with failure. More things don’t work than do work.
So they are creating environments and situations where they’re actually celebrating failure to reduce risk that’s associated with employees. And so they’re creating environments where, “I failed, but we’ve learned,” and that’s really valuable.
Then the fourth idea, and this is what IDEO is doing. IDEO is a design consultancy, and they do something really, really interesting when it comes to leaders. What they’ve come to realize is that leaders, by definition, are people that have been incredibly successful throughout their career. Leaders also, by definition, hate to ask for help, because many of them view it as a weakness. Leaders also, by definition, like to celebrate the great stuff that they’ve done.
So what they actually do—and they do this about every six months or so—every leader has to film and record a short video. And that video is: here are the cool things that I did using AI across the last six months, and here are the next set of things that I’m going to do, that I’m working on, where I’m thinking about using AI for the next six months. And every leader has to do that.
And what that actually achieves—when you have to record that video and then show that to everybody—is that if you haven’t done anything in the last six months, you kind of look like a loser leader. So it puts pressure on that leader to actually have done something that’s interesting, that they have to put in front of the broader organization.
And then the “what I’m going to work on next,” they’re not actually asking for help, so it really works with a leader psyche, but they’re saying, “Here are the next things I’m going to do that are awesome.” And that gives other leaders a chance to say, “Hey, I’m working on something similar,” or, “Oh, I figured that out last time.”
So it takes away a lot of the fear that’s associated with leaders, where they have to fake that they know what they’re doing or lie about what’s working. But it forces them to do something, because they have to tell everyone else what they did, and it creates the opportunity for them to get help without actually asking for help.
That is a really cool way that organizations are getting leaders to embrace AI, because none of them want to stand up in front of the company and be like, “Yeah, I haven’t really been doing anything on this whole AI issue for the last six months.”
Ross: That’s great. That’s a really nice example. It’s nice and tangible, and it doesn’t suit every company’s culture, but I think it can definitely work.
Brian: Yeah, the takeaway from it is put pressure on leaders to show publicly that they’re doing something. They care about their reputation, and whatever way makes the most sense for you as an organization, put the pressure on the leader to show that they’re doing something.
Ross: Yeah, absolutely. So that’s a nice round out. Thanks so much for your time and your insight, Brian. It’s been great to get the perspectives on building AI adoption.
Brian: Great. Thanks for having me, Ross. This is a time period where an analogy I like to use from car racing applies: people don’t pass each other on the straightaways, they pass each other in the turns. And this is a turn that’s going on, and it creates the moment for organizations to pass each other in that turn.
And then one other racing analogy I think is really important here: you accelerate going into a turn. When you’re racing, you don’t decelerate. Too many companies are decelerating. They have to accelerate into that turn to pass their competitors in the turn. And whoever does that well will be the companies that win across the next 3, 5, 7 years until the next big thing happens.
Ross: And it’s going to be fun to watch it.
Brian: For sure, for sure.

Aug 27, 2025 • 31min
Suranga Nanayakkara on augmenting humans, contextual nudging, cognitive flow, and intention implementation (AC Ep16)
“There’s a significant opportunity for us to redesign the technology rather than redesign people.”
–Suranga Nanayakkara
About Suranga Nanayakkara
Suranga Nanayakkara is founder of the Augmented Human Lab and Associate Professor of Computing at National University of Singapore (NUS). Before NUS, Suranga was an Associate Professor at the University of Auckland, appointed by invitation under the Strategic Entrepreneurial Universities scheme. He is founder of a number of startups including AiSee, a wearable AI companion to support blind and low vision people. His awards include MIT Technology Review young innovator under 35 in Asia Pacific and Outstanding Young Persons of Sri Lanka.
Website:
ahlab.org
intimidated.info
LinkedIn Profile:
Suranga Nanayakkara
University Profile:
Suranga Nanayakkara
What you will learn
Redefining human-computer interaction through augmentation
Creating seamless assistive tech for the blind and beyond
Using physiological sensors to detect cognitive load
Adaptive learning tools that adjust to flow states
The concept of an AI-powered inner voice for better choices
Wearable fact-checkers to combat misinformation
Co-designing technologies with autistic and deaf communities
Episode Resources
Transcript
Ross Dawson: Suranga, it’s wonderful to have you on the show.
Suranga Nanayakkara: Thanks, Ross, for inviting me.
Ross: So you run the augmented human lab. So I’d love to hear more about what does augmented human mean to you, and what are you doing in the lab?
Suranga: Right. I started the lab back in 2011, and part of the reasoning is personal. My take on augmentation is really that everyone needs assistance. All of us are disabled, one way or the other.
It may be a permanent disability. It may be that you’re in a country where you don’t speak the language and don’t understand the culture. For me, when I first moved to Singapore, I didn’t speak English and was very naive about computers, to the point that I remember very vividly, back in the day, Yahoo Messenger had this knocking notification sound, and I misinterpreted it as somebody knocking on my door.
That was very, very intimidating. I felt I wasn’t good enough, and that could have been career-defining. With that experience, as I got better with the technology and wanted to set up my own lab, I started thinking about how we redefine these human-computer interfaces so that they provide assistance, because everyone needs help.
And instead of just thinking of assistive tech, how do we think of augmenting our abilities depending on your context and your situation? I started the lab with a focus on augmented senses, on sensory augmentation, but a couple of years later, with the lab growing, we created a broader definition of augmenting the human, and that’s when the name became the Augmented Human Lab.
Ross: Fantastic. There are so many domains and so many projects you have on that are very interesting and exciting, and I’d love to go through some of those in turn. The one you just mentioned was around assisting blind people. I’d love to hear more about what that is and how that works.
Suranga: Right. So the inspiration for that project came when I was a postdoc at MIT Media Lab, and there was a blind student who took the same assistive tech class with me. The way he accessed his lecture notes was he was browsing to a particular app on his mobile phone, then he opened the app and took a picture, and the app reads out notes for him.
For him, this was perfect, but for me, observing his interactions, it didn’t make sense. Why would he have to do so many steps before he can access information? And that sparked a thought: what if we take the camera out and put it in a way that it’s always accessible and you need minimum effort?
I started with the camera on the finger. It was a smart ring. You just point and ask questions. And that was a golf ball-sized, bulky interface, just to show the concept. As you iterate, it became a wearable headphone which has the camera, speaker, and a microphone. So the camera sees what’s in front of you. The speaker can speak back to you, the microphone listens to you.
With that, you can enable very seamless interaction for a blind person. Now you can just hold the notes in front of you and just ask, please read this for me. Or you might be in front of a toilet, you want to know which one is female, which one is male. You can point and ask that question.
So essentially, this device, which we now call AiSee, is a way of providing this very seamless, effortless interaction for blind people to access visual information. And now we realize it’s not just for blind people. I’ve actually used it myself.
Recently I went to Japan, and I don’t read any Japanese, and pretty much everything is in Japanese. I went to a pharmacy wanting to buy some medicine for a headache, and AiSee was there to help. I can just pull out a package and ask, “AiSee, hey, help me translate this. What is in this box?” And it translates for me.
So the use cases, as I said, although it started with a blind person, cut across various abilities. And again, it is supporting people to achieve things that are otherwise hard to achieve.
Ross: Fantastic. So just hopping to another of the many projects and pieces of research you’ve done, this one around AI-augmented reasoning. This is something that can assist anybody, and you particularly focus on this area of flow.
We understand flow from the original work of Csikszentmihalyi and so on, how to get into this flow state. I understand that you have sensors that can understand when people are in flow states, to be able to help them in their reasoning as appropriate.
Suranga: Right. So this is very early stage. We just started this a few months ago. The idea is we have been working with some of the physiological sensors — the skin conductance, heart rate variability — and we understand that based on this, you can infer the cognitive state.
For example, when you are at a high cognitive state, or when you are at a low cognitive state, these physiological sensors have certain patterns, and it’s a nice, non-invasive way of getting a sense of your cognitive load.
As the flow theory says, this is about making the task challenging enough — not too challenging or too easy. We can measure the load based on these non-invasive signals, at least get an estimate, so that you can adjust the difficulty level of the task.
That’s one of the very early stage projects where we want to have these adaptive interfaces. The user doesn’t drop the task because it’s too difficult, or drop the task because it’s too easy. You can adjust the task difficulty based on the perceived cognitive load.
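To make the adaptive-interface idea concrete, here is a minimal Python sketch of the kind of control loop Suranga describes: estimate a cognitive-load score from physiological signals and nudge task difficulty to stay in a flow band. The feature names, weights, and thresholds are hypothetical placeholders for illustration, not the lab's actual model.

```python
# Illustrative sketch only: a toy controller that adjusts task difficulty from an
# estimated cognitive-load score. Feature weights and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class PhysiologySample:
    skin_conductance: float      # e.g. normalized electrodermal activity, 0..1
    hrv_rmssd: float             # heart rate variability (RMSSD), normalized 0..1

def estimate_cognitive_load(sample: PhysiologySample) -> float:
    """Map physiological features to a rough load score in [0, 1].

    Higher skin conductance and lower HRV are treated as signs of higher load.
    The linear weighting is purely illustrative.
    """
    load = 0.6 * sample.skin_conductance + 0.4 * (1.0 - sample.hrv_rmssd)
    return max(0.0, min(1.0, load))

def adjust_difficulty(current_level: int, load: float,
                      low: float = 0.35, high: float = 0.75) -> int:
    """Keep the learner in a 'flow band': harder when under-loaded, easier when over-loaded."""
    if load < low:
        return current_level + 1          # task seems too easy; raise the challenge
    if load > high:
        return max(1, current_level - 1)  # task seems too hard; ease off
    return current_level                  # within the flow band; leave it alone

# Example: a reading from the (hypothetical) sensors suggests moderate load,
# so the difficulty level stays where it is.
level = adjust_difficulty(3, estimate_cognitive_load(PhysiologySample(0.5, 0.6)))
print(level)  # -> 3
```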
Ross: So interesting. Where do you think the next steps are there? What is the potential of being able to sense degree of cognitive load, or your frame of mind, so that you can interact differently?
Suranga: One of the things I’m really excited about is lifelong learning, continuous learning. Because of the emergence of technology, there’s a lot of emphasis on needing to upskill and reskill.
I’m also overseeing some of our university adult learning courses. If you think of adults who are trying to re-upskill themselves, the way to teach and provide materials is very different from teaching, say, regular undergraduate classes.
For those, there is a possibility of providing certain learning materials when the adult learner is ready to learn. They’re busy with lots of other responsibilities — work, families, and all these things. So if we can have a way of providing these learning opportunities based on when they are ready to learn, it may be partly based on cognitive state, partly based on their schedules.
I think one way to use this information is to decide when to initiate, and how to increase or decrease the level of difficulty of the learning material as you go. If you can detect the cognitive load and then maintain the flow, that’s an area with huge potential.
Ross: Yeah, absolutely. So one of the projects was called Prospero, which is, I think, on the lines which you’re discussing. It’s a tool to help memorize useful material, but it understands your cognitive context as to when and how to feed you things for learning.
Suranga: Right. This we started specifically for older adults, and the idea was we wanted to help train their prospective memory. One of the techniques that has been reported as effective in literature is called intention implementation.
So basically, if I want to remember that when I meet Ross, I need to give you something, you mentally visualize that as an if-then technique. First, we tried: okay, can we digitize that, without a human, through a mobile app? I provide what I would like to do, the app breaks it down into an if-then statement and gets me to visualize it. That was the first part.
We saw that digitization does retain the effectiveness. Then the next question was, is there better timing to initiate this training? That’s where we brought in the cognitive load estimation. Instead of a time-based or user pre-assigned schedule for training, we compared that against our technique, which is based on cognitive load.
We found that when you provide this nudging to start training when the user has less load, they are more likely to notice this and more likely to actually start the training.
I think this principle probably goes beyond just training memory. It could be used as a strategy for getting attention to any notification. Rather than notifying randomly, you can notify when you think the person is more likely to attend to that notification.
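As a minimal illustration of the approach described above, the sketch below encodes an intention as an if-then pair and only triggers the training nudge when estimated cognitive load is low. The Intention structure, the load values, and the threshold are hypothetical stand-ins, not the actual implementation.

```python
# Illustrative sketch only: encode an intention as an if-then pair and gate the
# training nudge on estimated cognitive load. Threshold and load values are hypothetical.

from dataclasses import dataclass

@dataclass
class Intention:
    cue: str        # the "if" part, e.g. "I meet Ross"
    action: str     # the "then" part, e.g. "I give him the document"

    def as_prompt(self) -> str:
        # Text the app would ask the user to mentally visualize.
        return f"If {self.cue}, then {self.action}."

def should_nudge_now(estimated_load: float, load_threshold: float = 0.4) -> bool:
    """Only initiate a training session when the user appears to be under low load."""
    return estimated_load < load_threshold

intent = Intention(cue="I meet Ross", action="I give him the document")

for load in (0.8, 0.3):  # e.g. busy at work vs. relaxing later in the day
    if should_nudge_now(load):
        print("Nudge:", intent.as_prompt())
    else:
        print(f"Hold the nudge; user seems busy (load = {load})")
```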
Ross: Yeah, no, I think that’s part of it. If you have a learning tool, you want to use it at the right times. There’s partly a bit of self-guidance, as in saying, well, this is a good time for me to study or not. But I think it’s wonderful if the tools start to recognize when is a good time for you to be learning or saying, hey, now’s the time when this is a good task to do.
If we can proactively understand cognitive state or cognitive load and then guide what are appropriate activities, resting might be the best thing to do. Or something provided with a more entertaining frame in another state. Or sometimes it may say, okay, well, this is more complex, and this is the right time to serve it to you.
So very deeply, as I think all of your work is, context-aware.
Suranga: Yeah, exactly. And that’s a keyword. I think cognitive load alone may not cut it. For example, I may be at a low cognitive load, but contextual information, like time, might matter: if it’s the middle of the night, there’s no point nudging me. Or my schedule might indicate I’m at a party.
So we need to take this contextual information — time, the location, what’s in my schedule — plus your body context through these physiological sensors, so that we can try and make the best decision to support the user.
Ross: Which goes to just one of your other many wonderful projects around AI in a voice for contextual nudging. I believe very much in this idea of behavioral nudges and AI being able to understand when and how are the best nudges for behavioral change. Could you tell us more about this AI inner voice?
Suranga: Right. This is actually a joint project between my former advisor, Pranav Mistry from the Media Lab, and my lab. The students explored this idea where you have your better self.
You promise yourself that you’re going to eat healthy, and then you have that perfect self. With context-aware wearables, say, for example, I’m now seeing a chocolate and I’m very tempted to take it; the wearable might see there are some apples on the side. Then your better version, in your own voice, says, “Hey, that apple looks fresh. Why don’t you try that?”
Or say, for example, I’m facing an interview and I’m searching for words, and my better self, who wanted to be confident, might whisper to me, “Hey, you can do this,” and even suggest a couple of words for me to fill in the gaps.
So that’s the concept we published last year in one of the main Human-Computer Interaction conferences, to show that this inner voice, your voice clone, has a lot of opportunities to nudge you, making you more likely to change your behavior.
Ross: That’s an absolutely fabulous idea. So is this just a concept of this voice, or is this being implemented?
Suranga: In the research paper, we showed this proof of concept — making better choices of what you eat, being able to face an interview more confidently. We showed a couple of proof-of-concept cases where this was actually implemented as a working prototype.
Ross: Another thing which is very relevant today is a wearable fact checker. Because facts are sometimes not facts wherever we go. So it’s good to have a wearable fact checker. How does this function?
Suranga: As you rightly said, these are very emerging and again very early stage projects. But the idea is, how do we allow users to be more aware of the presence of potential misinformation?
The way we have implemented our initial prototype is it listens to the conversation, and then firstly, it tries to differentiate what’s just an opinion versus what’s a fact-checkable statement. If that’s the case, it then looks for factual consistency, looking for agreement among multiple sources from a knowledge-based search.
If there is a potential of this being a factually wrong statement, it nudges the user through a vibration on your smartwatch at that point. The user could then tap that and see why this is nudging, what the contradiction might be.
So we are, as we speak, running a study to figure out how people respond when they watch videos. Some videos look very real, some are not actually deepfakes — they are real — but especially some of the political speeches where lots of statements are factually incorrect. We are nudging the users, and we want to see what that nudging leads to.
Do users stop the video, go and search for themselves, and make informed decisions? Or do they just continue to watch it because they believe in that particular person so much? Or do they take the nudging as completely true — because AI can make mistakes — and mark all those statements where they felt a nudge as incorrect?
So we are trying to look at how actual users behave when there is a system that gives you a vibration nudge when it thinks there is potential misinformation. We will see the results very soon, and hopefully we want to put that as a research paper.
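For concreteness, here is a minimal sketch of the pipeline shape Suranga describes: filter out opinions, check a claim against multiple sources, and deliver a haptic nudge when agreement is low. The classifier, the source checks, and the agreement threshold are stubbed-in assumptions, not the actual prototype.

```python
# Illustrative sketch only: opinion vs. checkable claim, cross-source agreement,
# then a vibration nudge. All components here are hypothetical stubs.

from typing import Callable, List

def process_statement(statement: str,
                      is_checkable: Callable[[str], bool],
                      source_checks: List[Callable[[str], bool]],
                      vibrate: Callable[[str], None],
                      agreement_threshold: float = 0.5) -> None:
    """Nudge the user if independent sources mostly disagree with a checkable claim."""
    # 1. Skip pure opinions; only factual claims go to verification.
    if not is_checkable(statement):
        return

    # 2. Ask each knowledge source whether it supports the claim.
    votes = [check(statement) for check in source_checks]
    support = sum(votes) / len(votes) if votes else 0.0

    # 3. If support is low, deliver a haptic nudge the user can tap to inspect.
    if support < agreement_threshold:
        vibrate(f"Possible misinformation: {statement!r}")

# Toy usage with stubbed-in components.
process_statement(
    "The Eiffel Tower is in Berlin.",
    is_checkable=lambda s: True,                       # stub classifier
    source_checks=[lambda s: False, lambda s: False],  # stub knowledge sources
    vibrate=lambda msg: print("BUZZ ->", msg),
)
```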
Ross: Very interesting indeed. So more generally, you started off by saying that we can assist people where assistance is required, and some of the tools are also for situations such as autism or dyslexia. There are obviously any number of ways in which we can assist in those veins. So where do you see the most promising directions for technology to support — let’s start with autism.
Suranga: So I think the key thing, even before the technologies, is what we realized about co-design. In one of the projects we did with kids with autism, we actually worked with the therapists and the school teachers for about a year to come up with what might be effective.
Rather than doing a technology push, we wanted to co-design so that we are not building things for the sake of building, but there’s a real value. And one specific example is we built these interactive tiles. They can be on the floor. Smaller versions can be on the wall, and they light up. They sense the user’s touch. They can also make sound.
It’s a simple technology, but the use case was, again, after this year-long co-design process, the teachers were like, we want this to have specific interactions to support their social skills, support their physical skills, support their cognitive skills.
So for example, the teachers can put these tiles and make them light up in a certain order. The kids have to follow the same order — that’s training their memory. The same tiles can be spread across the room, and then they light up, and the kids have to run and tap them before the light goes off — that’s getting them to engage physically.
These tiles can also be distributed among a set of kids, and each tile becomes a music instrument, and then they can jam together. That’s getting them to enhance their social interaction.
Yeah, I think that the main lesson I learned is there’s a huge potential of technology, but it’s also equally important to work with the stakeholders so that we know what’s the best way to utilize them, so that the end solution is going to be effective and used in real context.
Ross: Yeah, which I think goes to this point of feedback loops in building these systems, where part of it is, as you say, the co-design. You’re not just giving something to somebody and saying, hey, use it, but helping them to design it and create it. But also the way in which things are used, or the outcomes they have, start to flow back into the design. And I imagine that there’s various ways AI can be very useful in being able to bring that feedback to refine or improve the product or interaction.
Suranga: Yep, that’s very true. And the other beautiful thing with this co-design process is sometimes you discover things as you go. You don’t go with a preset of things that you just want to convince the other stakeholder. True co-design is you discover things as you develop.
I remember my PhD project, which was about providing a musical experience to deaf kids by converting music into vibration so that they can feel it. Initially, we thought about the sensitivity range of vibration sensation: hearing spans 20 to about 20,000 hertz, whereas vibration sensitivity is much lower; it cuts off around 1,000 hertz.
So initially we thought, why don’t we compress all the audio into the haptic range and then provide that through the vibration feedback mechanism? But it didn’t work. Some of the deaf kids and the therapists we worked with were like, no, when you compress awkwardly, these kids can also feel that awkwardness. Some of them said this is not even music.
Accidentally, one of the kids tried our system bypassing that whole compression, just playing the music as per normal, and letting their body pick up different vibration frequencies. The legs and back are good at picking up low frequency. The fingertip is good at picking up high frequency. That completely changed the design.
So instead of doing our own filtering, we let the body become the filter and just convert the music without preprocessing through this chair structure. And that was super useful.
Why that’s impactful is that now for about 15 years, these school kids are using this on a daily basis, feeling music and developing their own preference to different music genres. For me, that was a moment of discovery. Rather than forcing what you thought and trying to convince others, you kind of discover as you go.
Ross: Absolutely, that’s a great example of that. So I’d like to come back to the beginning, where you said you were confused and intimidated by Yahoo Messenger and by technology. I think that’s a universal experience. Almost everybody comes across technology and thinks: this is hard, this is difficult, it’s confusing. But you obviously went past that, to now being able to use technology as an enabler and to understand its capabilities.
So what is it that enabled you, what brought you from being confused by technology to now being able to use it to help so many people?
Suranga: I think a bit of that was the thought process. Initially, as I said, I was very concerned that I wasn’t good enough for engineering. But when I really thought about that specific example, what a sensible person would do when they hear a knocking sound is just check the door, right? Nobody would expect you to check what’s on the screen.
So it convinced me that what I did, although it was a mistake, was the sensible thing to do. And it also established a deep belief that technology has the opportunity to be redesigned. I don’t need to change myself to learn it. There should be a way to redesign technology so that we don’t have to change our natural behavior so much.
One particular example, which I built immediately after my graduation, was moving digital media across devices. In our culture, we have this colored powder. We take it from one container and place it in another. That’s copy and paste. So we enabled a technique where you can just touch a phone number on a web page and drop it onto another device, and it copies across.
Of course, the digital transfer happens through the cloud, but the interaction is super simple. And with those examples, my belief grew stronger and stronger that there’s a significant opportunity for us to redesign the technology rather than redesign people.
Ross: No, totally. 100% right. So I gotta say, there are so many times when I’m using a technology, I think, am I stupid? No, the technology is badly designed.
Yeah, it’s still amazing — it’s 2025, and we still have so much bad design. If it’s not easy to use, if it’s not intuitive, if we can’t work it out for ourselves, if it’s confusing — that’s bad design. It’s not a stupid person.
So where do you see the potential? What’s next? You’re obviously doing so many exciting things at the moment. What’s on the horizon for Augmented Human Lab?
Suranga: I think there’s a lot of momentum from the ecosystem. If you think about it, AI is here to stay. Every morning when you wake up, there’s a new model being released and a new paper being published. There’s momentum there.
I think it’s only a matter of time before robotics catches up. Also, some of these wearable devices have become commodities at a consumer level, so there are very easy ways of building things that are super seamless to wear.
With all these things, I think there’s a significant opportunity for us to create these augmentations that help us make better decisions, help us learn things, basically help us become better versions of ourselves. And we shouldn’t even need to be so dependent on them. They could be designed in a way that helps us acquire certain skills, and then they can drop off.
So they should be more like crutches than permanent augmentation. That’s why I believe so much in this non-invasive augmentation, where I need to get a particular skill, and just like a rocket engine, it might push me to a certain level, and then it can drop off.
With this emergence of AI, robotics, and some of the wearables, we are excited to design this next layer of human-computer interfaces.
Ross: That’s fantastic. So where can people go to find out more about your work?
Suranga: They can check out our work at our website, www.ahlab.org — and that has all the stuff that we have been doing.
Ross: Fantastic. Thank you so much for your time and your insights and your wonderful work.
Suranga: Thanks, Ross.
The post Suranga Nanayakkara on augmenting humans, contextual nudging, cognitive flow, and intention implementation (AC Ep16) appeared first on Humans + AI.

Aug 20, 2025 • 42min
Michael I. Jordan on a collectivist perspective on AI, humble genius, design for social welfare, and the missing middle kingdom (AC Ep15)
“The fact is that its input came from billions of humans… When you’re interacting with an LLM, you are interacting with a collective, not a singular intelligence sitting out there in the universe.”
–Michael I. Jordan
About Michael I. Jordan
Michael I. Jordan is the Pehong Chen Distinguished Professor in Electrical Engineering and Computer Science and professor in Statistics at the University of California, Berkeley, and chair of Markets and Machine Learning at INRIA Institute in Paris. His many awards include the World Laureates Association Prize, IEEE John von Neumann Medal, and the Allen Newell Award. He has been named in the journal Science as the most influential computer scientist in the world.
Website:
arxiv.org
LinkedIn Profile:
Michael I. Jordan
University Profile:
Michael I. Jordan
What you will learn
Redefining the meaning of intelligence
The social and cultural roots of human genius
Why AI is not true superintelligence
Collective genius as the driver of innovation
The missing link between economics and AI
Decision making under uncertainty and asymmetry
Building AI systems for social welfare
Episode Resources
Transcript
Ross Dawson: Michael, it’s wonderful to have you on the show.
Michael I. Jordan: My pleasure to be here.
Ross: Many people seem to be saying that AI is going to beat all human intelligence very soon. And I think you have a different opinion.
Michael: Well, there are a lot of problems with that framing for technology. First of all, we don’t really understand human intelligence. We think we do because we’re intelligent, but there are depths we haven’t probed, and the field of psychology is just getting going—not to mention neuroscience.
So just saying that something that mimics humans, or that took a vast amount of data and brute-force mimicked humans, has human intelligence nailed seems like a kind of leap to me. Moreover, the supposed sequence of logic doesn’t particularly work for me: we figured out human intelligence, now we can put it in silicon and scale it, and therefore we’ll get superintelligence.
Every step of that is questionable. The scaling part, I guess, is okay, but we have not figured out human intelligence. Even if we had, it’s not really clear to me as a technology that our goal should be to mimic or replace humans. In some jobs, sure, but we should think more about overall social welfare and what’s good for humans. How do we complement humans?
So, no, I don’t think we’ve got human intelligence figured out at all. It’s not that it’s a mystical thing, but we have creativity. We have experience and shared experience, and we plumb the depths of that when we interact and when we create things.
Those machines that are doing brute force gradient descent on large amounts of text and even images or whatever—they’re not getting there. It is brute force. I don’t think sciences have really progressed by just having brute force solutions that no one understands and saying, “That’s it, we’re done.”
So if you want to understand human intelligence, it’s going to be a while.
Ross: There’s a lot to dig into there, but perhaps first: just intelligence. You frame that as, among other things, social and cultural, not just cognitive?
Michael: Absolutely. I don’t think I’d do very well if you put me on a desert island. I need to be able to ask people how to do things. And if you put me not just on a desert island but in a foreign country, and you don’t give me the education—the 40 years of education I had—that imbued me with the culture of our civilization, I wouldn’t get very far.
Anytime I’m not knowledgeable about something, I can go find it, and I can talk to people. Yes, I can now use technology to find it, but I’m really talking to people through the technology. I don’t think we appreciate how important that cultural background is to our thinking, to our ability to do things, to execute, and then to figure out what we don’t know and what we’re not good at. That’s how we trade with others who are better at it, how we interact, and all that.
That’s a huge part of what it means to be human, and how to be a successful and happy human. This mythological Einstein sitting all by himself in a room, thinking and pondering—I think we’re way too wedded to that. That’s not really how our intelligence is rolled out in the real world.
Generally, we’re very uncertain about things in the real world. Even Einstein was uncertain, had to ask others, learn things, and find a path through the complexity of thought.
Also, I’ve worked on machine learning for many years, and I’m pretty comfortable saying that learning is a thing we can define, or at least start to define: you improve on certain tasks. Intelligence—I’m just much less happy with trying to define it. I think there’s a lot of social intelligence, so I’m using that term loosely. But human, single intelligence—what is that? What does it mean to generalize it?
Talking about thought in the computer is the old dream of AI. I don’t know if we have thought in a computer. Some people sort of say, “Yeah, we have it,” because it’s doing these thinking-like things. But it’s still making all kinds of errors. You can brute force around them for as long as you can and get humans to aid you when you’re making errors.
But at some point you have to say, “Wait a minute, I haven’t really understood thought. I’m not getting it. I’m getting something else. What am I getting? How do I understand that? How does it complement things? How does it work in the real world?”
Then you need to be more of an engineer—try to build it in a way that actually works, that is likely to help out human beings, and think like an engineer and less like a science fiction guru.
Ross: So you’ve used the phrase “human genius” as a sort of benchmark against which we compare AI. And the phrase “human collective genius,” I suppose, ties into some of your points here—where that genius, or that ability to do exceptional things, is a collective phenomenon, not an individual one.
Michael: Oh no, without a doubt. I’ve known some very creative people, and every time you talk to them, they make it very clear that the ideas came from the ether—from other people. Often, they just saw the idea develop in their brain, but they don’t know why.
They are very aware of the context that allowed them to see something differently, execute on it, and have the tools to execute. So my favorite humans are smart and humble. Right now in technology, we have a lot of people who are pretty smart but not very humble, and they’re missing something of what I think of as human genius: the ability to be humble, to understand what you don’t know, and to interact with other humans.
Ross: One of the other things you emphasize is how we design these systems. We’ve created some pretty amazing things. But as you suggest, there seems to be this very strange obsession with artificial general intelligence as a focus.
Among all the reasons that’s flawed, one is that it leaves out social welfare as a fundamental principle that we should be using to design these systems.
Michael: I think you’ve just hit on it. To me, that’s the fundamental flaw with it. I mean, you can say the flaw is that you can’t define it, and so on and so forth. But for me, the flaw is really that it’s an overall system.
In fact, if you think about an LLM, whether it’s smart or not, or intelligent or not, it’s almost beside the point. The fact is that its input came from billions of humans, and those humans did a lot of thinking behind that. They worked out problems, they wrote them down, they created things. Sometimes they agreed, sometimes they disagreed, and the computer takes all that in.
To the extent that there’s signal, and there’s a lot of agreement among lots of humans, it’s able to amplify that and create some abstractions that characterize that. But when you’re interacting with an LLM, you are interacting with essentially all those humans. You’re interacting with a collective. You are not interacting with a singular intelligence sitting out there in the universe.
You’re interacting with all of humanity—or at least a lot of humanity—and all of the background that those people brought to it. So if you’re interacting with a collective, then you have to ask: is there a benefit to the collective, and what’s my contribution? What’s my role in that overall assemblage of information?
It’s not just that the whole goal is the Libertarian goal of the individual being everything. Somehow, the system should work such that there are overall good outcomes for everyone. It’s kind of obvious.
It’s obvious like traffic. All of us want to get as fast as possible from point A to point B. But a designer of a good traffic and highway system does not just think about the individual and how fast the car will go. They think about the overall flow of the system, because that may slow down some people, but ideally it gets everybody there as fast as possible overall.
It’s a sum over all the travel times of all the people. Let’s call that social welfare. Achieving such a design is usually a huge amount of hard work, and then you empirically test it out and work out some theory of it.
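In symbols, a minimal formalization of that objective (the notation here is illustrative, not taken from the conversation or the paper): if traveler $i$'s travel time under a given system design is $t_i$, then one natural social welfare measure is $W = -\sum_{i=1}^{n} t_i$, and maximizing $W$ means minimizing total, or equivalently average, travel time across everyone rather than optimizing any single driver's trip.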
And that’s going to be true of just about any domain. Think of the medical domain. It’s really not just the doctor and a patient and focusing on one relationship. It’s the overall system. Does it bring all the tools to the right place, at the right time? Has it tested things out in the right way? Things that have been learned about one group of people or one person—does that transfer easily to other people?
Any really working system of humans at scale requires someone to sit down and think about the overall flow and flux at a social level. And again, this is not at all novel to say. Economists talk about this.
Yes, what economists do is think about the group and then the overall social welfare. How does the outcome lead to allocations that everyone considers favorable and fair? And then people argue about boundary conditions. Should you make sure there’s a floor or a ceiling, or whatever, and so on? Lots of people talk that language.
Computer scientists, for some reason, seem immune to thinking about economic principles, microeconomic principles, and social welfare. It comes as an afterthought. They build their system, they try it out, it doesn’t work, and they say, “Oh, we screwed up social welfare somehow.”
Then you get people criticizing, other people defending. And it’s like—is this the way to develop a technology? Roll it out, let it mess things up, give life to the critics, and then defend yourself. It’s just a mess right now.
Ross: Yeah, well, particularly given the extraordinary power of these tools. So I think the perspective is useful.
Michael: They’re powerful, and there’s absolutely no denying they’re surprisingly good. I call it brute force and all, but I don’t mean to denigrate it. At that scale, it really is better than one would have thought.
But what’s the business model? They’re powerful—for who? Yes, they sort of empower all of us to do certain things. But in the context of an overall economy, are they actually going to be healthy for everybody?
Are they going to make the rich much, much richer, and put that power in the hands of a few? Definitely those issues are what a lot of people talk about and think about. But Silicon Valley, again, seems immune to worrying about it.
They just say, “This brute force thing is a good machine. Obviously there’ll be some problems, but not big ones. We’ll figure them out as we go.”
That just hasn’t happened in other fields of engineering, to the extent it’s happening now. In chemical engineering, electrical engineering—people thought about the overall systems they were building and whether they’d work or not as they were building them.
Here, there’s just very little thought leadership and a lot of irresponsible people.
Ross: Which takes me to your recent paper—excellent paper—A Collectivist, Economic Perspective on AI. That seems to crystallize a lot of the thinking, a lot of what we’ve been talking about. There’s quite a lot of depth to the paper, and I wonder if you’re able to distill the essence of that in a few words.
Michael: Sure. Thanks for calling out that paper. I hope people will read it. I worried about the title for quite a while. The word “collectivist,” of course, was a little bit of a thumb in the eye.
In the libertarian tradition in Silicon Valley, “collectivist” has been associated historically with socialism, communism, and so on. But really, it’s a technical word that we should own and imbue with our support, with our technology. It is an economics word.
So I made sure the word “economics” is in there, because to me, that is the critical missing ingredient. There has been a lot of talk about networks and data, and then cognition and so on. Rarely do we hear talk about incentives and social welfare.
The paper also aims not to be just negative. There are a lot of people who use the arguments, who are pained in the same way I am about the way technology is being rolled out—but it’s just a critique. I want to turn it into an engineering field.
I want to say: look, what you can do with data and data flows at large scale is make even better markets than we ever had before, and different kinds of markets. Markets arose organically thousands of years ago where people would trade. You had to have some information, but there was always some hidden information.
This is what economics calls information asymmetry. There’s also always a social context to the things you’re doing.
One of the examples I give in the paper is about a duck—or I forget what example I use in the paper, but in my talks I use a duck. A duck is trying to figure out where to get food. There are two choices: one side of the lake or the other side. There’s twice as much grain to be found on one side of the lake than on the other.
The duck has been a statistician over the years and has gotten good estimates of those values. So what should it do the next day?
A Bayesian-optimizing duck would go to the side of the lake where there’s twice as much food. Of course, that gives it the optimal amount of food. But that’s not what actual ducks do, nor what humans do. They do what’s called probability matching. That means if there’s twice as much food on one side as on the other, I’ll go to that side twice as often as the other.
That’s viewed as a flaw in ducks and in humans. If you’re in a casino and you do that, it’s kind of dumb. But evolutionarily, it makes total sense.
If we’re not just one person but a collective, and all the ducks go to one side, then there’s a resource not being used on the other side. You could say the goal is to build a collectivist system that tells everyone who should go where. But that’s the Soviet Union—that doesn’t work, that’s top-down.
Instead, you ask: are there algorithms that will actually do a better allocation, that aren’t just everybody for themselves? There’s an algorithm—randomized probability matching. With probability two-thirds I go to one side, with probability one-third I go to the other. If everyone runs that algorithm, they don’t have to coordinate at all. They just go. That will lead to the maximum amount of food being eaten overall. That’s called high social welfare.
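A minimal simulation sketch of that contrast follows; the flock size, grain amounts, and per-duck eating cap are illustrative assumptions, not figures from the conversation or the paper. An all-greedy flock heading to the richer side leaves the other side's food uneaten, while independent probability matching collects nearly all of it with zero coordination.

```python
import random

FOOD = {"A": 200, "B": 100}   # grain units per day on each side; A has twice B's
N_DUCKS = 300                 # flock size (illustrative)
CAP = 1                       # each duck can eat at most 1 unit per day

def total_eaten(choices):
    """Total food consumed given each duck's choice of side.

    Ducks on a side eat up to CAP units each, limited by the food actually
    available there; surplus food on an under-visited side goes uneaten.
    """
    eaten = 0
    for side, food in FOOD.items():
        n = sum(1 for c in choices if c == side)
        eaten += min(food, n * CAP)
    return eaten

def greedy(_duck):
    # The "Bayesian-optimizing" individual: always head to the richer side.
    return "A"

def probability_match(_duck):
    # Head to the richer side with probability 2/3, the poorer side with 1/3.
    return "A" if random.random() < 2 / 3 else "B"

def average_welfare(policy, trials=2000):
    # Average total food eaten per day across many simulated days.
    return sum(
        total_eaten([policy(d) for d in range(N_DUCKS)]) for _ in range(trials)
    ) / trials

print("all-greedy flock eats per day:   ", average_welfare(greedy))             # exactly 200
print("probability-matching flock eats: ", average_welfare(probability_match))  # close to 300
```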
Now you see that the context of the problem I’m trying to solve—the decision-making problem—involves the collective. If I didn’t have the collective in the context, I would do the wrong thing. In the context of the larger collective, evolution worked that out.
But as engineers, we’re trying to build these new systems, and we don’t have time to wait for evolution. We have to build the system such that the collective is taken into account in the design.
I go through examples like that where uncertainty is shaped by the collective, and then the collective helps reduce uncertainty. Because, again, I can ask people when I don’t know things, and LLMs reduce uncertainty—that’s kind of what they’re doing. Part of their collective property is that they help the collective reduce its total uncertainty. So that’s one side of economics: how do you mitigate uncertainty, and how do you think about the social context of your decisions?
And the other probably even more important side is incentives and information asymmetry.
If I come into a market, I don’t know a lot of things. Why am I still incentivized to come in, especially if I know there are adversaries in this market? Well, I’m incentivized because I know enough, and I can probe and I can test, and there are mechanisms I can use to still get value.
We’ve learned how to do that, and our systems should embody that kind of thinking. So that’s information asymmetry.
So there are two kinds of uncertainty that, as engineering-oriented people, I think we have to be focusing on—and machine learning has been kind of remiss in thinking about them.
One is just statistics and error bars. We see that in our LLMs: there’s very little concern about error bars around answers, about uncertainty. It’s ad hoc. The LLM might say, “Well, I’m not very sure.” Or, actually, it tends to be oversure: “I’m very sure.” Then it changes its mind in the conversation completely.
Humans are much, much better at saying when they’re sure and when they’re not, and that’s kind of statistical uncertainty: I haven’t got enough data, I need more, and as soon as I get more data, my confidence goes up. Most machine learning people are aware of that, but it’s not very actionable in the field—just get more data and the problem will go away. That’s not true in many domains—most domains.
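As a textbook reminder of why "just get more data" scales poorly (a standard statistical fact rather than anything specific to LLMs): the standard error of a simple estimate from $n$ independent observations shrinks like $\sigma/\sqrt{n}$, so quadrupling the data only halves the error bar, and in most domains you cannot simply quadruple the data.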
But the other kind of uncertainty is information asymmetry. If you and I are interacting in a market setting, you’re trying to get me to do something, there will probably be a payment involved. You’re going to offer me some price for my labor. What price you offer depends on how good I am.
Well, I know that. So I’m going to pretend to be better than I am—or maybe the opposite way, pretend to be less good than I am, so I can loaf on the job and still make as much money.
All of these things I know that you don’t know—you would love to know them, and then you could design an optimal policy, which in this case would be a price. But you don’t know them.
So what are you reduced to doing? You’re reduced to making some modeling assumptions. Or you can do what economists call Contract Theory. You give me a list of options, and each option has different features associated with it and a price.
If I go to the airline and I want to get on an airplane, there’s not going to be just one price. There’s business class and economy and so on. Everybody gets the same list, but everyone doesn’t make the same choices because they have a different context. The airlines don’t know that context, but the people do.
That’s a different mindset in designing a system: you can’t just dictate everything, you can’t know everything. You have to build in options that are smarter—options that lead to actual good social welfare.
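To make that menu-and-self-selection idea concrete, here is a toy sketch; the option features, prices, and traveler valuations are invented for illustration and are not from the paper or the conversation.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    comfort: float   # quality level of the option
    price: float

# The same public menu is shown to everyone.
MENU = [
    Option("economy", comfort=1.0, price=200.0),
    Option("business", comfort=3.0, price=900.0),
]

@dataclass
class Traveler:
    name: str
    value_per_comfort: float   # private information the airline never observes

    def choose(self, menu):
        # Each traveler picks the option that maximizes their own surplus.
        return max(menu, key=lambda o: self.value_per_comfort * o.comfort - o.price)

travelers = [
    Traveler("leisure traveler", value_per_comfort=250.0),
    Traveler("business traveler", value_per_comfort=400.0),
]

for t in travelers:
    pick = t.choose(MENU)
    print(f"{t.name} picks {pick.name} at ${pick.price:.0f}")
# The leisure traveler picks economy and the business traveler picks business:
# one menu, different private contexts, different choices.
```

The point of the design is that the choices themselves reveal, and are priced for, the private information the designer could never simply dictate or observe.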
I just don’t think Silicon Valley gets that. I think they think the goal is this superintelligence that somehow knows everything, and we’ll just go to the superintelligence and it’ll tell us the answers.
Just because of information asymmetry, not true. There’ll be lots of lying going on—by the computer, but also by the humans involved in the system. Because lying is not a bad thing. It’s how you interact when there’s uncertainty and information asymmetry.
Ross: One of the things that comes out from what you’re saying is the overlap between decision making—where I’d like to get to in a minute—and that economic structure, which is emergent from decisions.
But just coming back to the paper, you refer to this missing middle kingdom which, crudely, could be described as what’s missing between engineering and the humanities. So how is it that we can fill that? What is that middle kingdom, and how can we fill that so that we do have that bridge between engineering—the main tools we’re creating—and the humanities, in understanding us as a collective group of humans?
Michael: That point in the paper was really somewhat narrowly construed. It was for academics. Anyone who’s been in a university has seen this wave: first it was called data science, or big data, then machine learning, then AI, and so on.
As this wave has hit, there have been initiatives to bring people together on campus. It’s not enough to just have engineers building systems with data. You’ve got to have commentary on that, or critique of that.
There’s a side of campus that loves to comment and critique—and that’s often humanities. Historians will weigh in on previous waves of technology, ethicists will weigh in, sociologists will weigh in.
The language gap is so huge that it just turns into bickering. The computer scientist will say, “Well, our system works. That’s all I care about. You get bits across the stream. I can’t think of anything else.”
The ethicist will say, “We have consequences, and the consequences are this, and blah, blah, blah.” But there are no solutions proposed across that gap. Both are right at some level, but the overall consequence is no progress. There’s no dialog.
I’ve seen many institutes created at many universities—I won’t name them—but it’s basically a computer scientist next to philosophers, and they call it an institute. They talk and “solve” problems. Or you add a few classes in AI and ethics to a computer science curriculum, or a couple of programming classes to a philosophy curriculum.
The naïveté of that is breathtaking.
There are others on campus—and hopefully more of them emerging—that sit more in the middle. Economists, for example, are in the middle. They can talk the technical language, they can think about systems, but they also do it as a social science. Many are behavioral economists, actually studying social systems, so they are really a bridge.
But they’re not the only bridge. Statisticians are also a bridge. They want real data, they want to test things, they want to find causes. Many work with social networks, social systems, and scientific problems.
I could go on. There are large numbers of people in academia, and in the intellectual sphere more generally, who can talk the technical language and the social language. And the social gets into the legal and ethical.
Really, there should be a big collaboration of all these things. If the only “middle” is humanities on one side and engineers on the other, that’s naïve. Unfortunately, that’s what many institutions do. They create institutes where philosophy meets computer science, and think it’s done. Usually it’s physicists creating these things, and it’s just a mess.
Part of the problem is dialog. A journalist will write about some new tech development and explain how exciting and breathtaking it is. Then they bring in an ethicist who says, “Yes, but the consequences will be terrible.”
We’re so awash in this.
Ross: Clearly, you think far more at the systems level than at the granularity of how academic institutions are structured. But I’d like to turn to decision making.
It’s a massive topic. Some of your work has shown that you can actually delegate fully to algorithmic systems decisions that can be made safely within particular domains.
But what I’m most interested in is complex, uncertain decisions—around strategy, life choices, building systems, better frames.
There are a number of aspects that come together here. You’ve already discussed some of them—uncertainty in decision making, information asymmetry. But if we just think from a humans-plus-AI perspective: we’ve got humans with intelligence, perspective, understanding. We have AI with a great deal of confidence.
How can we best combine humans and AI in complex, uncertain decisions?
Michael: That’s the million-dollar—or billion-dollar—question. That’s what I think we should all be working on. I don’t have the answer to it. I believe we’re being extremely naïve about how we approach it.
You just gave a good problem statement. When faced with grand problems like that, I typically go into a more concrete vertical. I’ll think about transportation or health care, and I’ll try to write down: who are the participants? What are the players? What are the stakes? What are the resources?
Now, what’s different from just a classical economist or operations research person of the past? Well, again, there’s this huge data source flowing around.
It’s not that now everyone knows everything, and it’s not that you should pull it all into a superintelligence that becomes the source of all knowledge. Rather, you should think about that as you’re thinking about how the system is going to work.
Search engines already did this. They made us capable of knowing things more quickly than we otherwise would have. That changed things.
I think what will probably happen in the first wave—beyond just systems design—is almost an anthropology of this. We already see LLMs in all kinds of environments, like companies, being used in certain ways. There’ll be best practices that emerge.
Meta-systems will arise that don’t just give everybody an LLM. They’ll structure interactions in certain ways. That structure will involve meeting certain human needs that are not being met.
I don’t think it’s going to be academics or mathematics dictating or telling us the story. First, there will be lots of use cases. That’s true of other engineering fields I’ve alluded to, like chemical or electrical engineering.
You had a basic phenomenon—electricity could be moved from here to there, motors could be built, basic chemicals created. Then people would try it out, and they would say, oh, that approach didn’t work. And they would reorganize. There had to be auditors, checkers, specialists in aspects of the problem.
There’ll be brokers emerging. In fact, I don’t see many of us necessarily interacting with LLMs very directly. Take the medical domain: instead, there’ll be brokers whose job is to bring together different kinds of expertise. I bring in a problem, they assemble the appropriate expertise in that moment.
They themselves could be computing-oriented, but probably not purely. It’ll be a blend of human and machine. I’m not going to trust just a computing system—I’ll want a human in the loop for various reasons.
So there’ll be a whole network of brokers emerging. Mathematics won’t tell us how to build that, but it will support us in thinking, “Oh, here’s a missing ingredient. We didn’t take into account information asymmetry, or a certain kind of statistical uncertainty, or causal issues.”
Then people using systems will say, “Oh yeah, let’s do that,” and they’ll try things out. That’s how humans make progress: people become aware of what they could do, and aware of what’s missing. Best practices start to emerge.
I think it’ll be pretty far from where we are right now. The search engine–oriented human-LLM interaction, scaled up to superintelligence—that doesn’t feel right. It’ll be much richer.
Ross: So like you, I think of it in terms of some of the interfaces. What are the interfaces, and how do we present AI inputs in terms of, as you mentioned, degrees of certainty and a whole array of other factors—visualizations to provide effective input to humans?
But just to come back to that phrase of the broker—and whether that aligns with what I’m describing here—what specifically is the nature of that broker in being able to bring together the humans and AIs for complementary decision making?
Michael: Yeah. In my paper, I have another set of examples of different kinds of markets. I try to make them very concrete so that people will resonate with them.
One of them is the music market. You have people who make music, and you have people who listen to music. But you also have brands and other entities that use music in various ways as part of their business model.
For example, the National Basketball Association has music behind its clips. What music? Well, you don’t just randomly pick a song. There’s someone who helps pick the song. Sometimes it’s a recommendation system that uses data from the past to pick it. But it’s also a human making judgments.
You connect all this up. Certain listeners like certain kinds of music—that’s a classical recommendation system. Musicians see that, and they make different kinds of music. But now, especially with brands in the mix, they have money, and they’re willing to pay for things.
So now incentives come into play. Am I incentivized to write a certain kind of song because a brand will be interested in me? Maybe I will. And if a brand notices that a certain demographic listens to a certain artist, they may want to pair with that artist.
All of that is not just made up by sitting down and looking at an Excel spreadsheet. It’s a big system. It has past data, it has to be adaptive, and it has to take into account asymmetries—people gaming the system. It’s a very interesting kind of system.
Plus, you’ll analyze the content itself. The music will be analyzed by the computer, helping to make good decisions.
Ross: So currently, AI is an economic facilitator.
Michael: AI is that economic facilitator. It helps create a market and make that market more effective, more efficient, more desirable.
It doesn’t try to just replace the musician with an AI making music. Rather, it thinks about what kind of overall system we’re trying to build here. What do people really want?
Well, people want to make music. And some people really want to listen to music that is obviously made by humans. That difference, that gap, will continue to be there.
Some brands want to ally themselves with actual humans, not robotic entities—not with Elon Musk. The opportunity is in supporting those kinds of multi-way markets with technology.
You could have talked about that in economics years ago: “I have three kinds of entities, here are my preferences and utilities.” But it wouldn’t have been operable in the real world.
Now, with all the data flowing around, you can have all these connections be viable. You can think about it as a system.
So in some ways, this is not a unique perspective, not all that new. But it really helps. I’m just trying to get people to reorient.
And I keep mentioning Silicon Valley because I can’t believe more of them are not understanding a path that has more of an economic side to it. Instead, they’re just competing on these very, very large-scale things where the business model is unclear. That boggles my mind.
Ross: So to round out, I believe in the potential of humans plus AI. What do you think we should be doing? What are the potentials? What is it right now that can lead us towards humans plus AI as complements—humans first and AI to be able to amplify?
Michael: I guess I’m more of an optimist. I don’t think humans will tolerate being told what to do by computers, or having the computers take over things that really matter.
They’ll be happier when computers take over things that don’t really matter, or things they don’t want to do. I do think humans will keep in the driver’s seat for quite a while.
I am very concerned, though, about the asymmetry of a few people having not just money, but immense power—and all the data flowing to them. The incentives can get way out of whack, and it would take a long time to undo some of that.
Like with the robber barons 100 years ago—there was some good in it, but then it became bad and had to be unwound. I hope we don’t have to get too much to the unwinding stage, but I think we are headed there.
On the other hand, you do see evidence of entities collecting data and using it in various ways, telling Google, “We will not just give you this data, you have to pay for it.” And Google saying, “Yeah, okay, we’ll pay.”
I do think there are some enlightened people who agree that’s a better model. The words “pay” and “markets”—it’s funny. The engineers and computer scientists I know never use those words. But then the humanities people get outraged when you talk about markets and payments.
That’s not human? Of course it is. It’s deeply human to value things and to make clear what your values are.
So I think there will be some good. Right now we just see kind of a mess. But I think actual humans will sort a lot of this out, when they start using systems, really start to care about the outcomes, and when payments are being used effectively.
These experiments are being run all around the world. It’s not just one country doing it. I don’t think the idea that China or the US is going to take this technology and dominate the world is right. That’s another dumb way to think.
Rather, these experiments will be done worldwide. Different cultures will come up with different ways of using it. Favorable best practices will emerge. People will say, “Look at how they’re doing it, that’s much better,” and those things will flow.
So overall, I’m more optimistic than not. But it is a very weird time to be living in.
Ross: More or less the right things all had the right way. So—
Michael: I can’t tell you where humans should go. I just know that, for example, when the search engine came in, young in my career, it was great. And I think to most of us, it just made life better.
That was an example of technologies expanding our knowledge base, and then people did what they did with it. The designers of the search engine kind of knew it would help people find stuff, but they couldn’t anticipate all the ways it got used.
Another part of technology—more like 100 years ago—was music. The fact that you could have recorded music and everyone could listen to it by themselves changed a lot of people’s lives for the better.
I don’t think the people who wrote down Maxwell’s equations—Maxwell himself, writing down the equations of electromagnetism—were necessarily aware that this would be a consequence. But humans got the best out of that in some ways. And then there were side effects.
Same thing here. I think humans will aim to get the best out of this technology. The technology won’t dictate.
Humans are damn smart. And this “superintelligence” word really just bothers me—especially because I think it diminishes how smart humans really are. We can deal with massive uncertainty. We can deal with social context. Our level of experience and creative depth comes through in our creations in ways these computers don’t.
They’re doing brute force prediction sorts of things. Sure, they can write a book, a screenplay, whatever—but it won’t be that good.
I do think humans will be empowered by the tool and get even more interesting. The computers will try to mimic that, but it’s not going to be a reversal.
Ross: Yeah, absolutely agree. Thank you so much for your time and your insight, and also your very strong and distinctive voice, which I think most people should be listening to.
Michael: I appreciate that. Thank you.
The post Michael I. Jordan on a collectivist perspective on AI, humble genius, design for social welfare, and the missing middle kingdom (AC Ep15) appeared first on Humans + AI.

Aug 13, 2025 • 34min
Paula Goldman on trust patterns, intentional orchestration, enhancing human connection, and humans at the helm (AC Ep14)
In this discussion, Paula Goldman, Salesforce’s Chief Ethical and Humane Use Officer, emphasizes the importance of trust in technology. She explores the concept of designing AI with intentional human oversight and the need for continuous human involvement. Goldman advocates for starting small with AI governance and involving diverse voices in ethical decisions. She envisions AI as a tool to enhance human connection and creativity, balancing automation with uniquely human tasks for a more impactful future.

Aug 6, 2025 • 48min
Vivienne Ming on hybrid collective intelligence, building cyborgs, meta-uncertainty, and the unknown infinite (AC Ep13)
Vivienne Ming, a theoretical neuroscientist and entrepreneur focused on maximizing human potential through AI, shares her captivating insights. She discusses the power of hybrid collective intelligence and the importance of diversity in fostering unique ideas. Vivienne emphasizes that AI should enhance rather than replace human abilities, particularly in education and healthcare. She highlights the significance of metacognition and preparing for an uncertain future by embracing the unknown, showcasing a vision where technology and human potential align for transformational growth.

Jul 30, 2025 • 39min
Matt Beane on the 3 Cs of skill development, AI augmentation design templates, inverted apprenticeships, and AI for skill enhancement (AC Ep12)
“The primary source of our reliable ability to produce results under pressure—i.e., skill—is attempting to solve complicated problems with an expert nearby.”
–Matt Beane
About Matt Beane
Matt Beane is Assistant Professor at University of California Santa Barbara, and a Digital Fellow with both Stanford’s Digital Economy Lab and MIT’s Institute for the Digital Economy. He was employee number two at the Internet of Things startup Humatics, where he played a key role in helping to found and fund the company, and is the author of the highly influential book The Skill Code: How to Save Human Ability in an Age of Intelligent Machines.
Website:
mattbeane.com
LinkedIn Profile:
Matt Beane
University Profile:
Matt Beane
Book:
The Skill Code
What you will learn
Redefining skill development in the age of AI
Why training alone doesn’t build true expertise
The three Cs of optimal learning: challenge, complexity, connection
How AI disrupts traditional apprenticeship models
Inverted apprenticeships and bi-directional learning
Designing workflows that upskill while delivering results
The hidden cost of ignoring junior talent development
Episode Resources
Transcript
Ross Dawson: Matt, it is awesome to have you on the show.
Matt Beane: I’m delighted to be here. Really glad that you reached out.
Ross: So you are the author of The Skill Code. This builds on, I think, research for well over a decade. It came out over a year ago, and now this is very much of the moment, as people are saying all over the place that entry-level jobs are disappearing, and we’re talking about inverted pyramids and so on. So, what is The Skill Code?
Matt: Right. The first third of the book is devoted to the working conditions that humans need in order to build skill optimally.
The myth, supported by billions of dollars of misdirected investment, is that skill comes out of training. We just have a mountain of evidence that that’s not so. It can help, and it can also hurt. But the primary source of our reliable ability to produce results under pressure—i.e., skill—is attempting to solve complicated problems with an expert nearby.
Basically, we can of course learn without these conditions—they’re sort of idealized conditions—but with them it can be great. And the first third of the book is devoted to what it takes for it to be great.
I got there sort of backwards by studying how people were trying to learn in the midst of trying to deal with new and intelligent technologies at work—and mostly failing. But a few succeeded. And so I just looked at those success cases and saw what they had in common across many industries and so on.
So, I break that out in the beginning of the book into three C’s—thankfully, in English, this broke out that way: Challenge, Complexity, and Connection. And those roughly equate—well, pretty precisely, actually, I should own the value of the book—they equate to four chunks of characteristics of the work that you’re embedded in that need to be in place in order for you to learn.
Challenge basically is: are you working close to, but not at, the edge of your capacity?
And complexity is: in addition to focusing on getting good at a thing that you’re trying to improve at, are you also sort of looking left and looking right in your environment to digest the full system you’re embedded in? That’s complexity.
And connection is building warm bonds of trust and respect between human beings. All three of those things—I could go into each—but basically, in concert, in no particular sequence—each workplace, each situation is different—but these are the base ingredients.
I used a DNA metaphor in the book. These are sort of the basic alphabet of what it takes to build skill, and your particular process or approach or situation is going to vary in terms of how those show up.
Ross: So, for getting to solutions or prescriptions, I mean, it’s probably worth laying out the problem.
AI and various technologies can now readily do what those entering the workforce—or entering particular careers—would do. And essentially, a lot of the classic apprenticeship-style model has been that you learn by making mistakes and, as you say, alongside the masters.
And if people, if organizations, are saying, “Well, we no longer need so many entry-level people to do the dirty, dull work,” then we don’t have this pathway for people to develop those skills in the way you described.
Matt: Yes, and it’s even worse than that.
So, for those that remain—because, of course, organizations are going to hire some junior people—the problems that I document in my research, starting in 2012… Robotic surgery was one early example, but I’ve since moved on to investment banking and bomb disposal—I mean, very diverse examples.
When you introduce a new form of intelligent automation into the work, the primary way that you extract gains from that is that the expert in the work takes that tool and uses it to solve more of the problem per unit time, independently.
That word independently—I saw in stark relief in the operating room. When I saw traditional surgery—I watched many of these—there’s basically two people, shoulder to shoulder, four hands inside of a body, working together to get a job done. And that’s very intensive for that junior person, the medical resident in that case, and they’re learning a lot.
By contrast, in robotic surgery, there are two control consoles for this one robot that is attached through keyhole incisions into the patient. One person can control that robot and do the entire procedure themselves. And so, it is strictly optional then for that senior surgeon to decide that it’s time to give the controls to the junior person.
And when’s the right time to do that, given that that junior person will be slower and make more mistakes? This is true in law, in online education, in high finance, professional services—you name it. The answer is: never.
It is never a good time. Your CFO will be happy with you for not turning the controls over to the junior practitioner. And you yourself, as an expert, are going to be delighted.
People these days, using LLMs to solve coding problems, report lots more dopamine because they can finally get rid of all this grunt work and get to the interesting bits. And that’s marvelous for them. It’s marvelous for the organization—even if the ROI is a little uncertain.
But the primary, the net, nasty effect of that is that the novice—the junior person trying to learn—is no longer involved in the action. Because why would you?
And that breaks the primary ladder to skill for that person. And so, that, I think, is happening at great scale across…
Let’s put it this way: the evidence I have in hand indicates to me that there will be rarer and rarer exceptions to the rule that junior people will be cut out of the action. Even when they’re hired and in the organization and are supposed to be involved, they will just be less involved—because they’re less necessary to support the work.
So even if you get a job as a junior person, you’re not necessarily guaranteed to be learning a dang thing. It’ll be harder these days by default.
Some interesting exceptions exist—and that’s what I focus on in the book. But that, in my view, is the core problem. I’ve done some arithmetic around this, and it’s all estimation of course, and I published a piece in The Wall Street Journal on this about eight months ago.
This is a trillion-dollar problem for the economy, in my view.
Ross: Obviously, this is not destiny. These are challenges which we can understand, acknowledge, and address.
So, let’s say—obviously, part of it is the attitudes of the senior people and how they frame this. A lot can be organizational structures and how work is allocated. There’s a whole array of different things that can be done to at the very least mitigate the problem—or, as I think you lay out in your book, move to an even better state for the ability to learn and grow and develop in conjunction, not just by using learning tools.
But why don’t we go straight to Nirvana? Or what an ideal organization might do. What are some of the things they might do to be able to give these pathways where people can contribute and add value immediately, as well as rapidly grow and develop their capabilities?
Matt: Right. So, I’ll give you a couple of examples: one of which was in the book, and one of which is new since the book’s publication.
So, the one that’s in the book—and that has always occurred, I think, and is more intensely available now and is a real cool and valuable opportunity for organizations—is what I called inverted apprenticeships.
This comes out of a study that I did with a colleague at NYU named Callan Anthony, where we contrasted our surgical and high finance data. We both have sort of “who said what to who every five seconds” kind of transcript data on thousands of hours of work in both contexts.
What was very clear, as we looked across our data, is that it’s not common for this to go well—but it can go well—for senior people to learn about new tech from junior people.
The “ha ha” example at a cocktail party is the CEO learning about TikTok from their executive assistant. But in the real world, senior software developers are definitely learning about how to use AI to amplify their productivity from junior people.
Organizations now are talking out of both sides of their mouth. On the one hand, you have people saying, “Well, we’re only going to hire senior people.” At the same time, “You have to be AI-native as a junior person.” That’s what we’re looking for, and that’s a prized skill.
Whether they know that that’s what they’re after or not, what they’re setting up when those people arrive is this relationship where the junior person hangs out and works with—and gets to teach, so to speak, or show by example—the senior person how to use AI.
The senior person, sort of as the price of entry for that working relationship, gives that junior person more access to their work and complex problem solving.
The paper itself is worth reading. The section in the book is worth reading because there are lots of ways to do this that are quite exploitative with respect to that junior person—they sort of have to pay double. But there are ways of doing it where it’s a win-win for both people.
That mode of simultaneous bi-directional learning is going to be really important if you want to adapt as an organization, just on a hyper-local level. So, that’s example one.
The other example comes from a new study I’ve been doing over the last four months with five doctoral students here at the University of California, Santa Barbara. It’s an interview-based study of the use of generative AI in software development across over 80 organizations.
One of the things that has emerged as a working pattern there, that I think is really intriguing and potentially a great example to think with—a sort of design template for how to set work up in a way that seizes the gains while also involving junior people and building your bench strength—is that:
In some cases, anyway, senior software engineers, rather than writing code, will get, say, four to five junior engineers together and give them all impossible tasks—like hugely complicated work and very limited time.
They will all try their best—and by the way, obviously, the only way you could attempt this is to use AI, to just cheat as aggressively as possible—and then submit their code. You’re talking three weeks of work in two hours, or eight hours, or something like that.
Under that kind of pressure, junior people’s neuroplasticity and willingness to throw themselves into the breach are a huge asset.
Everyone involved knows that what they submit may work, and it will be terrible. But it will be terrible in subtle ways.
Then that senior person spends some time with each of those junior people to do a code review or some pair programming, to say, “Right, here are the three or four areas. I’m not going to tell you what the problems are—where there’s problems—go have a go at figuring out what they are and fixing them.”
Or maybe: “I’ll just tell you what they are, and do you see why those are problems?”
Basically, we’re just focusing on the parts of what you built that are problematic—that you might not quite get yet. But 80% of what you built is fit for duty, and I got it 90% faster than I would have otherwise.
That senior person then is sort of a filter feeder. They deal with code review and process a lot more than they actually write code.
But the unit total factor productivity for that group is an order of magnitude higher than it used to be. So, that’s become the sort of template—or the sort of fractal example—that I think…
Treating this hallucination and inconsistency and output problem as a feature, not a bug, and designing your organization to take advantage of that—I could easily see that kind of example scaling into professional services, into law, into medicine.
I mean, where failure in process is acceptable—it’s the output that needs to be high quality—it just seems like savvy organizations are going to be making design choices like that left and right.
Ross: That’s fantastic. So, where did that come from? Is that something which you created and then shared with these organizations? Or did you see this in the wild?
Matt: This is from this interview study. We have a globally representative sample of firms, and all we’re doing is asking them, “What are you doing with Gen AI in software development?” And then they talk for an hour, basically. We have a bunch of specific questions.
So no, we’re not priming anything, we’re not suggesting anything, we’re not sharing information in between them. And this is showing up independently across a number of organizations.
So anyway, there are lots of other cool things popping up, but the fact that these organizations aren’t in touch with one another—they don’t seem to be—they aren’t saying that they got this trick off of Reddit or from some influencer on Twitter, and that some subset of them have invented it locally, is a pretty strong indicator that it’s at least representative of a new potential direction.
Ross: So, this is work yet to be published?
Matt: Correct.
Ross: When? When will it be out?
Matt: That doesn’t operate on AI time. That’s on academic time. If we get enough findings together that I believe will meet the high A+ academic journal standard that I’m used to—which is not obvious, but I think we have a good shot—we’ll submit it for publication sometime in the fall.
Then it’ll probably be two years before the findings come out. You can post a working paper right away, and so as soon as we can do that, we will.
Ross: Awesome. Yeah, because this is the Humans Plus AI podcast. And really, the core of what I think about is humans plus AI workflow.
What is the flow of work between humans and AI? What are their respective roles? How does it move? What are the parallel and linear structures, and so on?
And what you’ve described is a wonderful, pretty clear humans plus AI workflow which is replicable. It can work in different contexts, as you say, across different domains. And these archetypes—if we can uncover these archetypes at work—then that is extraordinary value.
Matt: I think so, yeah. And what’s important is that, I think for them to be valid, they have to show up independently in very different contexts.
Then you’ve got your hands on—potentially, anyway—something that is suited to the new environment. There are many, many cases in which these best practices get trotted out, and they’ve been started by one organization and then shared across.
You can see a clear lineage, and then you have real questions about what, in academic speak, is endogeneity. In other words, it might be that this new best practice is not actually useful. It’s just that people are persuasive about it, and it travels fast because people are desperate for solutions.
So, we have to be very careful about grabbing best practices and labeling them as such.
Ross: You mentioned investment banking as a domain you’ve been exploring. And I think—I look a lot around professional services—and I think professional services are not just your classic accounting and law, and so on.
I mean, arguably, many industries—healthcare is professional services. I mean, if you look inside a consumer products company, they are professionals. You know, the building… there’s a lot of archetypes of things or structures there.
So I’m very interested in what you have seen work in that context—what has been effective in developing the capabilities of junior staff.
Matt: Right, yeah. And I have less data there, but I’m always on the hunt for patterns in work that—when you look at them—you think, “I would need some evidence to conclude that that is not valuable or showing up somewhere else.”
In other words, it seems quite portable and generalizable. It’s not bound to the content of the work or some legal barriers or structures around the occupation or profession.
There are some places where that really is true. But as long as it seems like you could do the same thing in any knowledge work profession, then I agree with you. I think those are really important tactics.
And I don’t think anybody really has this worked out—aside from what I offer in the book, which was my best offering then, and I still feel very good about it now. For whatever the new or imagined workflow is, I offer a ten-point checklist for each of those three Cs in each of those chapters.
It’s about how you would know—very specifically and measurably—whether work was skill-enhancing or skill-degrading the more you did it over time.
Anyone, anywhere, I think, can take a look at any new way of doing the work that involves AI and interrogate it from that lens. So, in addition to a productivity lens—which is obviously critical—you can also say, “Is this likely to enhance skill development or not, if we do it this new way?”
And you can. It takes work, but I think that’s quite necessary.
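As an illustration of what interrogating a workflow through that lens might look like in practice, here is a toy scorecard; the checklist items are invented placeholders, not Beane's actual ten-point checklists from The Skill Code.

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowAssessment:
    """Reviewer judgments on a proposed AI-assisted workflow, grouped by the three Cs."""
    name: str
    challenge: dict = field(default_factory=dict)    # item -> passes? (True/False)
    complexity: dict = field(default_factory=dict)
    connection: dict = field(default_factory=dict)

    def score(self):
        """Fraction of items passed per C; low scores flag skill-degradation risk."""
        scores = {}
        for c in ("challenge", "complexity", "connection"):
            items = getattr(self, c)
            scores[c] = sum(items.values()) / len(items) if items else 0.0
        return scores

review = WorkflowAssessment(
    name="LLM-drafted client memos",
    challenge={
        "junior attempts a draft before seeing the AI output": True,
        "junior works near the edge of their ability": False,
    },
    complexity={"junior sees the surrounding system and context": False},
    connection={"an expert reviews the output with the junior": True},
)
print(review.score())
# e.g. {'challenge': 0.5, 'complexity': 0.0, 'connection': 1.0}
```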
Ross: So, looking at your three elements of challenge, complexity, and connection—AI used well could assist on a number of those.
Perhaps for me, most obviously in connection, where we have a lot of great studies in collaborative intelligence, where AI is playing a role in being able to nudge interactions that support collective intelligence. Again, we could have AI involved in interactions and able to say, “Well, here’s an opportunity to connect in a particular way to a particular person in a particular context,” for example.
Or it could be able to say, “You’re working on this particular challenge. Let’s give some context to this,” and so on. So either hypothetically or in practice—where are ways you’ve seen AI being able to amplify the challenge, complexity, or connection of skill development?
Matt: I have a Substack called Wild World of Work. It’s at wildworldofwork.org, and one of the first posts I wrote there—I’ve forgotten exactly when, it’s over a year ago now—is called Don’t Let AI Dumb You Down.
In that piece, I talk about how default use of GenAI—say, ChatGPT—is, just as with all these other forms of intelligent automation I’ve studied, likely to deprive you of skill over time.
I’ll just start with connection. One of the reasons for that is that you don’t leave your screen. You get your answer, and it might even be good, and you might even learn some new information—so it’s not just passive, like “do my homework for me” kind of interaction.
But what you won’t notice is missing—and it definitely is missing—is another human being. And ChatGPT is currently not configured—it’s not post-trained, technically—to do anything about that, to attend to it, or to have your welfare with respect to your skills in its consideration set at all.
You can make it do that, though. This is the amazing thing. Even what I suggested in that article back then is still true today. You can go into the custom settings for ChatGPT—and all these models have this now—and you can tell it how to interact with you, basically.
What I have in my custom settings in ChatGPT are specific sets of instructions around: basically, annoy me to some degree so that I need to do things for myself. Keep me challenged. Expose me to complexity—other things going on related to this work—and, as you just said, push me towards other human beings and building bonds of trust and respect with them.
Because otherwise, I’ll just rely on you. And that is what ChatGPT does to me every single time now.
Do I heed its advice all the time? No, of course not. But I have definitely learned a lot of things and met new people that I wouldn’t have if I hadn’t done that. It’s certainly not perfect. And it’s gotten better, but still.
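To make the custom-instructions idea concrete, here is a minimal sketch that applies the same kind of instruction through the OpenAI Python SDK rather than the ChatGPT settings screen. The instruction text and model name are illustrative assumptions, not Matt’s actual settings.

```python
# Sketch only: skill-preserving "custom instructions" sent as a system message
# via the OpenAI Python SDK. The wording below is illustrative, not Matt Beane's
# actual settings.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SKILL_PRESERVING_INSTRUCTIONS = """
Before giving me a finished answer:
1. Challenge: ask me to attempt the first step myself, then critique my attempt.
2. Complexity: point out one adjacent issue or trade-off I have not considered.
3. Connection: suggest a colleague or kind of expert I should talk this through
   with, and what I should ask them.
Only then provide your own answer.
"""

def ask(question: str) -> str:
    """Send a question with the skill-preserving instructions as the system message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat-capable model works
        messages=[
            {"role": "system", "content": SKILL_PRESERVING_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Draft a project risk summary for our Q3 launch."))
```

The same text pasted into ChatGPT’s custom instructions (or any model’s equivalent setting) achieves the effect Matt describes without writing any code.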
And by the way, it should not be incumbent on the user, in my opinion, to go fix these things for themselves. That’s like asking cigarette smokers to install their own filters or something. You could, in principle, do that, but…
I think—put it this way, positively—there’s a huge market opportunity for these model providers. For any one of them to hold up their hand and say, “We have configured our system such that just by using it, you’re going to have more skills at the end of next week than this week. And you can have your results too.”
None of them have done that. Isn’t that interesting?
I’m trying to embarrass them into doing it, basically, because I think people have a strong and growing intuition that they’re trading something away in exchange for just getting their answer from this magical tech.
A few people aren’t. A few people are both getting their answer and pushing themselves farther than they ever could have before. That’s magical territory, and we need to understand it.
Anyway, I think once the word gets out that this trade-off is going on, then people are gonna start to insist. And I hope we can get some model company to lead in that regard.
Ross: Fantastic. In your book, you refer to Cabrera’s—essentially bringing the humans and AI together. Obviously, people use different terminology around that.
But where do you see the potential now for these human-AI integrations?
Matt: Yep. I have not yet seen this implemented, but the idea I’m just about to describe could have been implemented a year ago—very clearly. Technically, it was possible then; it’s even better now.
Let’s just say I’m a worker at Procter & Gamble, and I work in the marketing function. My agent could be eating all of my emails, all my calendar appointments, and all the documents I produce. It could be looking at my projects and looking for opportunities for me that might offer a useful sort of up-ramp in a certain skill area that I’m interested in.
That agent could then also be conferring with other agents of project managers throughout the corporation to see if there’s a good match.
We’ve seen this “chain of thought” in models before. Just imagine two models meshing their chains of thought. Lots of back-and-forth, like:
“Hey, Matt Beane’s looking to develop this kind of skill, and it looks like you’ve got a project over there.”
This agent over there is more plugged into that context. They spend some time—they can do this at the speed of light—but there’s a quick burning of tokens to assess the utility of that match from my point of view and from that project’s point of view.
Then you get much finer-grained, higher-quality matches of resources, human resources—to projects. The project wins. I win because I get a skill development opportunity. And those agents do most of the legwork to make that match.
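As a rough illustration of the matching logic the two agents might run, here is a toy sketch in plain Python. The data classes, fields, and scoring are hypothetical; a real system would sit on top of agent frameworks, enterprise data, and far richer back-and-forth between the agents.

```python
# Toy sketch of agent-to-agent talent matching: a "worker agent" and a
# "project agent" compare structured information and score the fit from both
# sides. Field names, weights, and example data are hypothetical.
from dataclasses import dataclass

@dataclass
class WorkerAgent:
    name: str
    skills_to_grow: set[str]   # skills this worker wants to develop
    available_hours: int

@dataclass
class ProjectAgent:
    project: str
    skills_needed: set[str]
    stretch_skills: set[str]   # skills a contributor could learn on the job
    hours_required: int

def assess_match(worker: WorkerAgent, project: ProjectAgent) -> dict:
    """Score a potential match from both points of view, as the two agents might."""
    growth_overlap = worker.skills_to_grow & (project.skills_needed | project.stretch_skills)
    worker_side = len(growth_overlap) / max(len(worker.skills_to_grow), 1)
    project_side = 1.0 if worker.available_hours >= project.hours_required else 0.5
    return {
        "worker": worker.name,
        "project": project.project,
        "skills_developed": sorted(growth_overlap),
        "worker_side_score": round(worker_side, 2),
        "project_side_score": project_side,
    }

if __name__ == "__main__":
    me = WorkerAgent("Matt", {"demand forecasting", "SQL"}, available_hours=10)
    pilot = ProjectAgent("Retail media pilot", {"SQL"}, {"demand forecasting"}, hours_required=8)
    print(assess_match(me, pilot))
```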
You could likewise imagine that with a performance review. So if you’re my manager and I’m your employee, our agents are conferring regularly, constantly about my work, your availability, and so on.
Your agent might pop back to you and say, “Hey, I’ve been talking to Matt’s agent, and it looks like now’s a pretty good time for you two to have a quick performance-oriented conversation about his project—because he’s done really well on these three things and is struggling on these and could use your guidance.”
Then we get these regular—but AI-driven and scheduled—performance review conversations. Both those agents could help us prep for those conversations.
“Here’s a suggested conversation for you two.”
When it comes time for performance reviews—the formal one—we’ve already had a bunch of those. But they aren’t just some arbitrary every-two-week check-in kind of thing. Each is driven by a real, actual, evident challenge or opportunity or strength in my work.
So anyway, I think those are just two kind of hand-wavy examples that I think are implementable now.
Increasingly autonomous AI systems that can call tools, have access to memory, and confer with one another can solve this sort of talent mobility problem within firms—making matches so that I build my skill and we get results and performance optimization.
Any firm could do this… I mean, that’s almost low-hanging fruit. Even somebody with no technical expertise could set it up—you can just build an internal GPT that does those things. There’s a little bit more required than that, but anyway…
There is a universe of new modes of organizing that assume agents will be doing most of the talking, and just set humans up for success whenever possible.
You can always turn it away. It’s like getting a potential match on a dating app. You’d be like, “No, not that one.”
But at least—no human could ever manage an organization that well and make matches at that frequency and level of fidelity.
Ross: Yeah, this goes very much to what I’ve long described as the fluid organization, where people get connected to where they can best apply their capabilities—and also to learn—completely fluidly.
Not depending on where their particular job description lies, but simply where their talents and their talent development can be best applied across the organization.
There have been, for quite some time, talent platforms within organizations for connecting people with opportunities or work, and so on. But obviously, AI-enabled—and particularly with a talent development focus—provides far more opportunity.
Matt: I’ve been trying to track this pretty closely because I have a startup now focused on this joint optimization of work performance measurement with human capability development.
The previous wave of firms—B2B SaaS firms—that are trying to solve this talent mobility problem have really been focused on extracting skills from workers’ data and collecting those as a bag of nouns, and trying to match that bag of nouns against a potential opportunity.
And those nouns are just not sufficiently rich to capture what it is that those people are capable of—or not.
But I think a much richer sort of dialogue-based, dynamic, up-to-date, in-the-moment interaction between two informed agents…
You’re informed about the opportunity on the project. You have all the project docs spun up into you—I mean you as an agent.
And then another agent—that is mine—advocates for me on my behalf and has a giant RAG-based system (or whatever is the state of the art) that knows all about me: my preferences, what motivates me, my background, my capabilities under pressure, my career aspirations—all the rest.
Then they could spend a 100-turn conversation assessing fit in a few seconds. And that will be radically better than, “Does this noun match that noun?”
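Here is a compressed sketch of the contrast Matt draws: crude keyword overlap versus handing richer profiles to a model and letting it reason about fit from both sides. The profiles, prompt, and model name are hypothetical, and a production version would retrieve this context from much larger document stores rather than hard-coded strings.

```python
# Sketch contrasting "bag of nouns" matching with a richer, dialogue-style fit
# assessment. The profile, project brief, prompt, and model name are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Bag-of-nouns matching: crude keyword overlap, no context.
worker_nouns = {"python", "forecasting", "sql"}
project_nouns = {"sql", "dashboards", "stakeholder management"}
noun_score = len(worker_nouns & project_nouns) / len(project_nouns)
print(f"keyword-overlap score: {noun_score:.2f}")

# 2. Richer assessment: pass fuller profiles and let the model reason about fit.
worker_profile = ("Five years in CPG analytics; thrives under deadline pressure; "
                  "wants to move toward demand forecasting.")
project_brief = ("Eight-week retail media pilot; needs SQL pipelines now and "
                 "forecasting experiments later.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[{
        "role": "user",
        "content": (
            "Assess the fit between this worker and this project, from both sides, "
            "including what the worker would learn.\n\n"
            f"Worker: {worker_profile}\nProject: {project_brief}"
        ),
    }],
)
print(response.choices[0].message.content)
```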
Ross: Yeah, a lot of potential.
So, to round out—for organizational leaders, whether they be the C-suite, or board C-suite, or HR, or L&D, or organizational development—what are the prescriptions you give? What is the advice you would give on how to evolve into an organization where you can have a talent pipeline and maximize the learning that is going to be relevant for today?
Matt: I mention this in the book—lean on the vendors of these AI systems and demand that they give you a product that will enhance the skills of its users while generating results.
There are plenty of design decisions you could make about how to build the organization. We’ve talked about some of them. I think those are important. They’re necessary.
You can hire for AI-native talent. You can set up inverted apprenticeships. But if the root stock—or the new tool that everyone is supposed to use to optimize whatever they’re trying to optimize—is infected with a virus, and the virus is that it will drive experts and novices apart in search of results, almost unwittingly…
Very few will even notice this, or if they do notice it, they’re just not incented to care.
There’s really—I mean, L&D is maybe the only function in the organization that is explicitly put together to know about and deal with this problem—but it’s now a compliance function. The training that L&D offers is just kind of a box-checking activity too often.
So you can’t count on yourself and your own organization and your own chutzpah—and pulling yourself up, or asking your employees to pull themselves up by their bootstraps—as a primary means of ensuring that you grow your talent bench while improving results from AI.
I think companies—and executives in particular—are in a very powerful position right now when they choose between model vendors.
Give the vendors two extra weeks to come back with something in their proposal that gives you reasonable assurance that just by using their product—versus their competitors’—your employees will build more skill and end up with better career outcomes, while still getting productivity gains.
“How can we use this tech and build employee skill at the same time?”—that is the powerful question.
So it’s not… I think these vendors need to start to feel some heat. And if you’re a manager, you should be thinking:
“Fine, I’m getting some uncertain and notional—or nominal—productivity gain out of these new tools now just by buying them, and I don’t want to get left behind.”
So not buying is probably not an option.
But anyway, know also that if you just turn it on and hand out licenses, you will de-skill your workforce faster than you expect, and you will be knee-capping your organization for, say, three years from now or five years from now. And you will lose to your competitors.
I guarantee it.
Well, no—guarantee with a big asterisk. There will be many cases in which having fewer junior employees is the right thing to do. There will be many cases in which you don’t really care about de-skilling relative to the gains that you could get productivity-wise. I’m not naive about any of that.
But if you have areas in your organization where you have highly paid talent that is very mobile and wants to learn and grow, they will figure out which organizations are giving them work that will drive their skill curve upward—and they will vote with their feet.
And then you will stop getting high-quality talent.
That is one problem area I would get ready for.
And the other is: get ready to offer remedial training for those people who should know how to do their jobs—but in fact, have not been upskilling because they’ve been using AI too much. And you’ll be bearing that cost as well.
Organizations that invest now to address this problem will not bear those costs. They might come out of the gate slower right now. Maybe they won’t. Maybe they’ll jump ahead faster.
So I think intervening with the model provider is one unexpected and easy place to go—because they won’t see it coming. They will be surprised.
And a smart business development person—who wants their commission—will go back to their organization, whether that’s OpenAI or Anthropic or Google, and ask, “Hey, what can we do?”
And I’m hearing this from lots of people. So I’m not naive to think that just me saying this to you on this podcast is going to have that effect.
I think really what’s starting to happen is that professionals—especially software professionals, right now—are starting to notice this effect without Matt Beane being in the picture at all.
There are articles out there now by software developers. “The death of the junior developer” is one, and it’s a great one.
They’re all getting concerned on their own.
So I hope that the pressure just gets turned up, and that one of these companies comes out with something that will make a difference.
Ross: Fantastic. Thanks so much for your time, Matt.
Matt: Pleasure.
Ross: Wonderful work. Very, very much on point for these days. Extraordinarily relevant. And I very much look forward to seeing what you continue to uncover and share and publish.
Matt: Perfect. Thank you. Like I said, I really appreciated the invite and happy to talk.
The post Matt Beane on the 3 Cs of skill development, AI augmentation design templates, inverted apprenticeships, and AI for skill enhancement (AC Ep12) appeared first on Humans + AI.

Jul 23, 2025 • 41min
Tim O’Reilly on AI native organizations, architectures of participation, creating value for users, and learning by exploring (AC Ep11)
“We’re in this process where we should be discovering what’s possible… That’s what I mean by AI-native — just go figure out what the AI can do that makes something so much easier or so much better.”
– Tim O’Reilly
About Tim O’Reilly
Tim O’Reilly is the founder, CEO, and Chairman of leading technical publisher O’Reilly Media, and a partner at early stage venture firm O’Reilly AlphaTech Ventures. He has played a central role in shaping the technology landscape, including in open source software, web 2.0, and the Maker movement. He is author of numerous books including WTF? What’s the Future and Why It’s Up to Us.
Website:
www.oreilly.com
LinkedIn Profile:
Tim O’Reilly
X Profile:
Tim O’Reilly
Articles:
AI First Puts Humans First
An Architecture of Participation for AI?
AI and Programming: The Beginning of a New Era
What you will learn
Redefining AI-native beyond automation
Tracing the arc of human-computer communication
Resisting the enshittification of tech platforms
Designing for participation, not control
Embracing group dynamics in AI architecture
Unlocking new learning through experimentation
Prioritizing value creation over financial hype
Episode Resources
Transcript
Ross Dawson: Tim, it is fantastic to have you on the show. You were my very first guest on the show three years ago, and it’s wonderful to have you back.
Tim O’Reilly: Well, thanks for having me again.
Ross: So you have seen technology waves over decades and been right in there forming some of those. And so I’d love to get your perspectives on AI today.
Tim: Well, I think, first off, it’s the real deal. It’s a major transformation, but I like to put it in context. The history of computing is the history of making it easier and easier for people to communicate with machines.
I mean literally in the beginning, they had to actually wire physical circuits into a particular calculation, and then they came up with the stored program computer. And then you could actually input a program one bit at a time, first with switches on the front of the computer. And then, wow, punch cards.
And we got slightly higher-level languages. First it was assembly programming, and then higher-level languages like Fortran, and that whole generation.
Then we had GUIs. I mean, first we had command lines. The CRT itself was this huge advance: you could literally type and see it on a screen.
And I guess the point is, each time that we had an advance in the ease of communication, more people used computers. They did more things with them, and the market grew.
And I think I have a lot of disdain for this idea that AI is just going to take away jobs. Yes, it will be disruptive. There’s a lot of disruption in the past of computing. I mean, hey, if you were a programmer, you used to have to know how to use an oscilloscope to debug your program.
And a lot of that old sort of analog hardware that was sort of looking at the waveforms and stuff — not needed anymore, right?
I remember stepping through programs one instruction at a time. There’s all kinds of skills that went away. And so maybe programming in a language like Python or Java goes away, although I don’t think we’re there yet, because of course it is simply the intermediate code that the AIs themselves are generating, and we have to look at it and inspect it.
So we have a long way before we’re at the point that some people are talking about — evanescent programs that just get generated and disappear, that are generated on demand because the AI is so good at it. It just — you ask it to do something, and yeah, it generates code, just like maybe a compiler generates code.
But I think that’s a bit of a wish list, because these machines are not deterministic in the way that previous computers were.
And I love this framework that there’s really — we now have two different kinds of computers. Wonderful post — trying to think who, name’s escaping me at the moment — but it was called “LLMs Are Weird Computers.” And it made the point that you have, effectively, one machine that we’re working with that can write a sonnet but really struggles to do math repeatedly. And you have another type of machine that can come up with the same answer every single time but couldn’t write a sonnet to save its life.
So we have to get the best of both of these things. And I really love that as a framework. It’s a big expansion of capability.
But returning back to this idea of more — the greater ease of use expanding the market — just think back to literacy. There was a time when there was a priesthood. They were the only people who could read and write. And they actually even read and wrote in a dead language — Latin — that nobody else even spoke. So it was this real secret, and it was a source of great power.
And it was subversive when they first, for example, printed the Bible in English. And literally, the printed book was the equivalent of our current “Oh my God, social media turbocharged with AI” social disruption.
There was 100 years of war after the dissemination of movable type, because suddenly the Bible and other books were available in English. And it was all this mass communication, and people fought for 100 years.
Now, hopefully we won’t fight for 100 years. But disruption does happen, and it’s not pretty. Still, the millennialist version of this, where it is somehow terminal, is just wrong.
I mean, we will evolve. We will figure out how to coexist with the machines. We’ll figure out new things to do with them. And I think we need to get on with it.
But I guess, back to this post I wrote called “AI First Puts Humans First”: there’s a lot of pressure from various companies. They’re saying you must use AI. And they’ve been talking about AI first as a way of saying, “Do it with AI first, because we want to get rid of the people.”
And I think of AI first — or what I prefer, the term AI native — as a way of noticing: no, we want to figure out what the capabilities of this machine are. So try it first, and then build with it.
And in particular, I think of the right way to think about it as a lot like the term “mobile first.” It didn’t mean that you didn’t have other applications anymore. It just meant, when companies started talking about mobile first, it meant we don’t want it to be an afterthought.
And I think we need to think that way about AI. How can we reinvent the things that we’re doing using AI? And anybody who thinks it’s just about replacing people is missing the point.
Ross: Yeah, well, that goes back to the main point around the ease of communication: the layers through which we get our intent to flow into what the computers do.
So what struck me with the beginning of LLMs is that what is distinctive about humans is our intention and our intention to achieve something. So now, as you’re saying, the gap between what we intend and what we can achieve is becoming smaller and smaller, or it’s getting narrower and faster.
Also, we can democratize it in the sense of — yeah, there is more available to more people in various guises, to different degrees, where you can then manifest in software and technology your intention.
Yeah, so that democratizes — as you say, this is — there are ways in which this is akin to the printing press, because it democratizes that ability to not just understand, but also to achieve and to do and to connect.
Tim: Yeah, there is an issue that I do think we need to confront as an industry and as a society, and that is what Cory Doctorow calls “enshittification.”
This idea — actually, I had a different version of it, but let’s talk about Cory’s version first. At first, the platforms are really good to their users. They create these wonderful experiences. Then they use the mass of users they’ve collected to attract businesses, such as advertisers, and they’re really good to the advertisers but increasingly bad to the users.
Then, as the market reaches a certain saturation point, they go, “Well, we have to be bad to everybody, because we need the money first. We need to keep growing.”
I did a version of this. I wrote a paper called Rising Tide Rents and Robber Baron Rents, where I used the language of economic rents. We have this notion of Schumpeterian rents — or Schumpeterian profits — where a company has innovated, they get ahead of the competition, and they have outsized profits because they are ahead.
But in the theory, those rents are supposed to be competed away as knowledge diffuses. What we’ve seen in practice is companies put up all kinds of moats and try to keep the knowledge from diffusing. They try to lock in their users and so on. Eventually, the market stagnates, and they start preying on their users.
We’re in that stage in many ways as an industry. So, coming to AI, this is what typically happens. Companies stagnate. They become less innovative. They become protective of their profits. They try to keep growing with, effectively, the robber baron rents as opposed to the innovation rents.
New competition comes along, but here we have a problem — the amount of capital that’s had to go into AI means that none of these companies are profitable. So they’re actually enshittified from the beginning, or the enshittification cycle will go much, much more quickly, because the investors need their money.
I worry about that.
This has really been happening since the financial crisis made capital really cheap. We saw this with companies like Lyft and Uber and WeWork — that whole generation of technology companies — where the market didn’t choose the winner. Capital chose the winner.
The guy who actually invented all of that technology for on-demand cars was Sunil Paul with Sidecar. Believe it or not, he raised the same amount of money that Google raised — which was $35 million.
Uber and Lyft copied his innovations. Their ventures were originally doing something completely different. Uber was black cars summoned by SMS. Lyft was a web app for people trying to find other people to share rides between cities.
They pivoted to do what Sunil Paul had invented, and they threw billions at it, and they bought the market.
Sure enough, the companies go public, unprofitable. Eventually, after the investors have taken out their money — it’s all great — then they have to start raising prices. They have to make the service worse.
Suddenly, you’re not getting a car in a minute. You’re getting a car in 10 minutes. They’re telling you it’s coming in five, and it’s actually coming in 15.
So it’s — and I think that we have some of that with AI. We’re basically having these subsidized services that are really great. At some point, that’s going to shake out.
I think there’s also a way that the current model of AI is fundamentally — it’s kind of colonialism in a certain way. It’s like, we’re going to take all this value because we need it to make our business possible. So we’re going to take all the content that we need. We’re not going to compensate people. We’re going to make these marvelous new services, and therefore we deserve it.
I think they’re not thinking holistically.
Because this capital has bought so much market share, we’re not having that kind of process of discovery that we had in previous generations. I mean, there’s still a lot of competition and a lot of innovation, and it may work out.
Ross: I’m just very interested in that point. There’s been a massive amount of capital. There’s this thesis that there is a winner-takes-most economy — so if you’re in, you have a chance of getting it all.
But overlaid on that — and I think there’s almost nobody better to ask — is open source, where of course you’ve got commercial closed source, you’ve got open source, and quite a bit in between.
I’d love to hear your views on the degree to which open source will be competitive against the closed models in how it plays out coming up.
Tim: I think that people have always misunderstood open source, because I don’t think that it is necessarily the availability of source code or the license. It’s what I call an architecture of participation.
This is something where I kind of had a falling out with all of the license weenies back in the late ’90s and early 2000s, because — see, my first exposure to what we now call open source was with Berkeley Unix, which grew up in the shadow of the AT&T System V license. That was a proprietary license, and yet all this stuff was happening — this community, this worldwide community of people sharing code.
It was because of the architecture of Unix, which allowed you to add. It was small. It was a small kernel. It was a set of utilities that all spoke the same protocol — i.e., you read and wrote ASCII into a stream, which could go into a file.
There were all these really powerful concepts for network-based computing.
Then, of course, the internet came along, and it also had an architecture of participation. I still remember the old battle — Netscape was the OpenAI of its day. They were going to wrest control from Microsoft, in just the same way that OpenAI now wants to wrest control from Google and be the big kahuna.
The internet’s architecture of participation — it was really Apache that broke it open more than Linux, in some ways. Apache was just like, “Hey, you just download this thing, you build your own website.”
But it wasn’t just that anybody could build a website. It was also that Apache itself didn’t try to Borg everything.
I remember there was this point in time when everybody was saying Apache is not keeping up — Internet Information Server and Netscape Server are adding all these new features — and Apache was like, “Yeah, we’re a web server, but we have this extension layer, and all these people can add things on top of it.”
It had an architecture of participation.
The same thing happened with things like OpenOffice and the GIMP, which were like, “Okay, we’re going to do Microsoft Office, we’re going to do Photoshop.”
They didn’t work, despite having the license, despite making the source code available — because they started with a big hairball of code. It didn’t have an architecture of participation. You couldn’t actually build a community around it.
So I think — my question here with AI is: Where is the architecture of participation?
Ross: I would argue that it’s arXiv — as in, basically, the degree of sharing now, where you get your Stability and your Googles and everyone else, and your DeepSeek, just putting it out on arXiv in real detail.
Tim: Yeah, I think that’s absolutely right. There is totally an architecture of participation in arXiv.
But I think there’s also a question of models. I guess the thing I would say is yes — the fact that there are many, many models and we can build services — but we have to think about specialized models and how they cooperate. That’s why I’m pretty excited about MCP and other protocols.
Because the initial idea — the winner-takes-all model — is: here we are, we’re OpenAI, you call our APIs, we’re the platform. Just like Windows was. That was literally how Microsoft became so dominant.
You called the Windows API. It abstracted — it hid all the complexity of the underlying hardware. They took on a bunch of hard problems, and developers went, “Oh, it’s much easier to write my applications to the Windows API than to support 30 different devices, or 100 different devices.” It was perfect.
Then Java tried to do a network version of that — remember, “Write once, run anywhere” was their slogan. And in some sense, we’re replaying that with MCP.
But I want to go back to this idea I’ve been playing with — it’s an early Unix idea — and I’ve actually got a piece that I’m writing right now, and it’s about groups. Because part of an architecture of participation is: what’s the unit of participation?
I’ve been thinking a lot about one of the key ideas of the Unix file system, which was that every file had, by default, a set of permissions. And I think we really need to come up with that for AI.
I don’t know why people haven’t picked up on it. If you compare that to things like robots.txt and so on, there’s a pretty simple way. Let me explain for people who might not remember this; most developers will know something about it.
You had a variable called umask, which you set, and it set the default permissions for every file you created. There was also a little command called chmod that would let you change the permissions.
Basically, it was read, write, or execute — and it was for three levels of permission: the user, the group, and the world (everyone) right?
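For readers who want the refresher in runnable form, here is the same permission model expressed through Python’s standard library rather than the shell; the file name is purely illustrative.

```python
# Quick refresher on the Unix permission model Tim describes, via Python's
# standard library. The file name is just for illustration.
import os
import stat

# umask: default permissions subtracted from every file you create.
# 0o022 removes write permission for group and world.
old_mask = os.umask(0o022)

# With that umask, a newly created file typically ends up as rw-r--r-- (0o644).
with open("notes.txt", "w") as f:
    f.write("read/write/execute, for user, group, and world\n")

# chmod: change permissions explicitly.
# Here the owner (user) can read and write, the group can read, the world gets nothing.
os.chmod("notes.txt", stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

print(stat.filemode(os.stat("notes.txt").st_mode))  # prints "-rw-r-----"
os.umask(old_mask)  # restore the previous umask
```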
So here we are with AI, saying, “We, OpenAI,” or “We, Grok,” or whoever, “are going to be world,” right? “We’re going to Borg everything, and you’re going to be in our world. Then you’ll depend on us.”
Then some people — like Apple maybe — are saying, or even other companies are saying, “Well, we’ll give you permission to have your own little corner of the world.” That’s user. “We’ll let you own your data.”
But people have forgotten the middle — which is group.
If you look at the history of the last 20 years, it’s people rediscovering — and then forgetting — group. Think about what was the original promise of Twitter, or the Facebook feed. It was: I can curate a group of people that I want to follow, that I want to be part of.
Then they basically went, “No, no, actually that doesn’t really work for us. We’re going to actually override your group with our algorithmic suggestions.”
The algorithmically generated group was a really fabulous idea. Google tried to do a manual version of that when they did — originally Buzz — and then, was it called Circles? Which was from Andy Hertzfeld, and was a great thing.
But what happens? Facebook shuts it off. Twitter shuts it off.
And guess what? Where is it all happening now? WhatsApp groups, Signal groups, Discord groups. People are reinventing group again and again and again.
So my question for the AI community is: Where is group in your thinking?
How do we define it? A group can be a company. It can be a set of people with similar beliefs.
There’s a little bit of this, in the sense that — if you think Grok, the group is — even though it aspires to be the world-level — you could say Anthropic is the, let’s call it, the “woke group,” and Grok is the “right group.”
But where’s the French group? The French have always been famously protective. So I guess Mistral is the French group.
But how do people assert that groupness?
A company is a group.
So the question I have is, for example: how do we have an architecture of participation that says, “My company has valuable data that it can build services on, and your company has valuable data. How do we cooperate?”
That’s again where I’m excited — at least the MCP is the beginning of that. Saying: you can make a set of MCP endpoints anywhere.
It’s a lot like HTTP that way. “Oh, I call you to get the information that I want. Oh, I call you over here for this other information.”
That’s a much more participatory, dynamic world than one where one big company licenses all the valuable data — or just takes all the valuable data and says, “We will have it all.”
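As a sketch of what “a set of MCP endpoints anywhere” can look like in practice, here is a minimal server based on the open-source Python MCP SDK’s FastMCP helper; the tool, the SKUs, and the data are hypothetical. The point is that a firm can expose a narrow, permissioned slice of its data for other companies’ agents to call, much as it would publish an HTTP endpoint.

```python
# Sketch of an MCP endpoint exposing one narrow slice of company data, using the
# open-source Python MCP SDK's FastMCP helper. The tool and the data are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("acme-catalog")

# A hypothetical slice of data the firm is willing to share with partners'
# agents, without handing over the underlying database.
PRODUCTS = {"widget-a": {"lead_time_days": 12}, "widget-b": {"lead_time_days": 30}}

@mcp.tool()
def lead_time(sku: str) -> str:
    """Return the quoted lead time for a product SKU."""
    item = PRODUCTS.get(sku)
    if item is None:
        return f"Unknown SKU: {sku}"
    return f"{sku}: {item['lead_time_days']} days"

if __name__ == "__main__":
    # Any MCP-capable client (including another company's agent) can now call
    # this tool, much as it would fetch a URL over HTTP.
    mcp.run()
```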
Ross: That’s one of the advantages of the agentic world — that if you have the right foundations, the governance, the security, and all of the other layers like teams, payments, etc., then you can get an entire economy of participating agents.
But I want to look back from what you were saying around groups, coming back to the company’s point around the “AI first” or “AI native,” or whatever it may be. And I think we both believe in augmenting humans.
So what do you see as possible now if we look at an organization that has some great humans in it, and we’ve got AI that changes the nature of the organization? It’s not just tacking on AI to make each person more productive. I think we become creative humans-plus-AI organizations.
So what does that look like at its best? What should we be aspiring to?
Tim: Well, the first thing — and again, I’m just thinking out loud from my own process — the first thing is, there’s all kinds of things that we always wished we could do at O’Reilly, but we just didn’t have the resources for, right?
And so that’s the first layer. The example I always use is, there are people who would like to consume our products in many parts of the world where they don’t speak English. And we always translated a subset of our content into a subset of languages.
Now, with AI, we can make versions that may not be as good, but they’re good enough for many, many more people. So — vast expansion of the market there, just by going, “Okay, here’s this thing we always wished we could do, but could not afford to do.”
Second is: okay, is there a new, AI-native way to do things?
O’Reilly is a learning platform, and I’m looking a lot at — yeah, we have a bunch of corporate customers who are saying, “How do you do assessments? We need to see verified skills assessment.” In other words, test people: do they actually know this thing?
And I go — wow — in an AI-native world, testing is a pretty boneheaded idea, right? Because you could just have the AI watch people.
I was getting a demo from one startup who was showing me something in this territory. They had this great example where the AI was just watching someone do a set of tasks. And it said, “I noticed that you spent a lot more time and you asked a lot more questions in the section that required use of regular expressions. You should spend some time improving your skills there.”
The AI can see things like that.
Then I did kind of a demo for my team. I said, “Okay, let me just show you what I think AI-native assessment looks like.” I basically found some person on GitHub with an open repository.
I said, “Based on this repository, can you give me an assessment of this developer’s skills — not just the technical skills, but also how organized they are, how good they are at documentation, their communication skills?”
It did a great write-up on this person just by observing the code.
Then I pointed to a posted job description for an engineer working on Sora at OpenAI and said, “How good of a match is this person for that job?”
And it kind of went through: “Here are all the skills that they have. Here are all the skills that they need.”
And I go — this is AI-native. It’s something that we do, and we’re doing it in probably a 19th-century way — not even a 20th-century way — and you have completely new ways to do it.
Now, obviously that needs to be worked on. It needs to be made reliable. But it’s what I mean by AI-native — just go figure out what the AI can do that makes something so much easier or so much better.
That’s the point.
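Here is a deliberately hand-wavy sketch of that kind of AI-native assessment, assuming a local repository checkout and the OpenAI Python SDK. The path, job description, model name, and prompt are illustrative, and a serious version would need validation and human review before informing any real hiring or assessment decision.

```python
# Hand-wavy sketch of AI-native assessment: feed a model a sample of someone's
# repository plus a job description and ask for a skills write-up and a fit
# rating. The repo path, job text, and model name are hypothetical.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def repo_sample(repo_dir: str, max_files: int = 5, max_chars: int = 4000) -> str:
    """Concatenate a small sample of source files from a local checkout."""
    chunks = []
    for path in sorted(Path(repo_dir).rglob("*.py"))[:max_files]:
        chunks.append(f"# file: {path}\n{path.read_text(errors='ignore')[:max_chars]}")
    return "\n\n".join(chunks)

job_description = ("Engineer for a video-generation team: Python, distributed "
                   "training, strong documentation and communication.")

prompt = (
    "Based on the code below, assess this developer's technical skills, "
    "organization, documentation, and communication. Then rate their fit for "
    "the following job and list the skill gaps.\n\n"
    f"JOB:\n{job_description}\n\nCODE SAMPLE:\n{repo_sample('./some-repo')}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```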
And that’s why it drives me nuts when I hear people talk about the “efficiencies” to be gained from AI.
The efficiencies are there. Like, yeah — it was a heck of a lot more efficient to use a steam engine to bring the coal out of the mine than to have a bunch of people do it. Or to drive a train. I mean, yeah, there’s efficiency there.
But it’s more that the capability lets you do more.
So we’re in this process where we should be discovering what’s possible.
In this way, I’m very influenced by a book by a guy named James Bessen. It’s called Learning by Doing, and he studied the Industrial Revolution in Lowell, Massachusetts, when they were bringing cotton mills and textile mills to New England.
He basically found that the narrative — that unskilled labor had replaced skilled labor — wasn’t quite right. They had these skilled weavers, and then these unskilled factory workers. And he looked at the pay records and found it took just as long for the new workers to become fully paid as the old workers.
So they were just differently skilled.
And I think “differently skilled” is a really powerful idea.
And he said okay, why did it take so long for this to show up in productivity statistics — 20, 30 years? And he said, because you need a community.
Again — this is an architectural part. You need people to fix the machines. You need people to figure out how to make them work better. So there’s this whole community of practice that’s discovering, thinking, sharing.
And we’re in that ferment right now.
That’s what we need to be doing — and what we are doing. There’s this huge ferment where people are in fact discovering and sharing.
And back to your question about open source — it’s really less about source code than it is about the open sharing of knowledge. Where people do that.
That goes back to O’Reilly. What we do — we describe our mission as being “changing the world by spreading the knowledge of innovators.”
We used to do it almost entirely through books. Then we did it through books and conferences. Now we have this online learning platform, which still includes books but has a big live training component.
We’re always looking for people who know something and want to teach it to other people.
Then the question is, what do people need to know now that will give them leverage, advantage, and make them — and their company — better?
Ross: So just to round out, I mean, you’ve already — well, more than touched on this idea of learning.
So part of it is, as you say, there are some new skills which you need to learn. There’s new capabilities. We want to go away from the old job description because we want people to evolve into how they can add value in various ways.
And so, what are the ways? What are the architectures of learning?
I suppose, as you say, that is a community. It’s not just about delivering content or interacting. There’s a community aspect.
So what are the architectures of learning that will allow organizations to grow into what they can be as AI-native organizations?
Tim: I think the architecture of learning that’s probably most important is for companies to give people freedom to explore.
There’s so many ideas and so much opportunity to try things in a new way. And I worry too much that companies are looking for — they’re trying to guide the innovation top-down.
I have another story that sort of goes back to — it’s kind of a fun story about open source.
So, yeah, one of the top guys at Microsoft is a guy named Scott Guthrie. Scott and one of his coworkers, Mark Anders, were engineers at Microsoft, and they had this idea — this is 20-plus years ago — when they were trying to figure out how to make Windows better fitted for the web.
And they did a project by themselves over Christmas, just for the hell of it. And it spread within Microsoft. It was eventually what became ASP.NET, which was a very big Microsoft technology — I guess it was in the early 2000s.
It kind of spread like an open source project, just within Microsoft — which, of course, had tens of thousands of employees.
Eventually, Bill Gates heard about it and called them into his office. And they’re like, “Oh shit, we’re gonna get fired.”
And he’s like, “This is great.” He elevated them, and it became a Microsoft product.
But it literally grew like an open source project.
And that’s what you really want to have happen. You want to have people scratching their own itch.
It reminds me of another really great developer story. I was once doing a little bit of — I’d been called into a group at SAP where they wanted to get my advice on things. And they had also reached out to the Head of Developer Relations at Google.
And he asked — and we were kind of trying to — I forget what the name of their technology was. And this guy asked a really perfect question. He said, “Do any of your engineers play with this after hours?”
And they said, “No.”
And he said, “You’re fucked. It’s not going to work.”
So that — that play,
Ross: Yeah. Right?
Tim: Encourage and allow that play. Let people be curious. Let them find out. Let them invent. And let them reinvent your business.
Ross: That’s fantastic.
Tim: Because that’s — that will, that will — their learning will be your learning, and their reinvention of themselves will be your reinvention.
Ross: So, any final messages to everyone out there who is thick in the AI revolution?
Tim: I think it’s to try to forget the overheated financing environment.
You know, we talked at the very beginning about these various revolutions that I’ve seen. And the most interesting ones have always been when money was off the table.
It was like — everybody had kind of given up on search when Google came along, for example. It was just like, “This is a dead end.” And it wasn’t.
And open source — it was sort of like Microsoft was ruling the world and there was nothing left for developers to do. So they just went and worked on their own fun projects.
Right now, everybody’s going after the main chance. And — I mean, obviously not everybody — there are people who are going out and trying to really create value.
But there are too many companies — too many investors in particular — who are really trying to create financial instruments. Their model is just, “Value go up.” Versus a company that’s saying, “Yeah, we want value for our users to go up. We’re not even worried about that [financial outcome] right now.”
It’s so interesting — there was a story in The Information recently about Surge AI, which didn’t raise any money from investors, actually growing faster than Scale (scale.ai), which Meta just put all this money into — because Surge was just focused on getting the job done.
So I guess my point is: try to create value for others, and it will come to you if you do that.
Ross: Absolutely agree. That’s a wonderful message to end on.
So thank you so much for all of your work over the years and your leadership in helping us frame this AI as a positive boon for all of us.
Tim: Right. Well, thank you very much.
And it’s an amazing, fun time to be in the industry. We should all rejoice — challenging but fun.
The post Tim O’Reilly on AI native organizations, architectures of participation, creating value for users, and learning by exploring (AC Ep11) appeared first on Humans + AI.

Jul 16, 2025 • 0sec
Jacob Taylor on collective intelligence for SDGs, interspecies money, vibe-teaming, and AI ecosystems for people and planet (AC Ep10)
“If we’re faced with problems that are moving fast and require collective solutions, then collective intelligence becomes the toolkit we need to tackle them.”
– Jacob Taylor
About Jacob Taylor
Jacob Taylor is a fellow in the Center for Sustainable Development at Brookings Institution, and a leader of its 17 Rooms initiative, which catalyzes global action for the Sustainable Development Goals. He was previously research fellow at the Asian Bureau of Economic Research and consulting scientist on a DARPA research program on team performance. He was a Rhodes scholar and represented Australia in Rugby 7s for a number of years.
Website:
www.brookings.edu
loyalagents.org
LinkedIn Profile:
Jacob Taylor
X Profile:
Jacob Taylor
What you will learn
Reimagining Team Performance Through Collective Intelligence
Using 17 Rooms to Break Down the SDGs Into Action
Building Rituals That Elevate Learning and Challenge Norms
Designing Digital Twins to Represent Communities and Ecosystems
Creating Interspecies Money for Elephants, Trees, and Gorillas
Exploring Vibe Teaming for AI-Augmented Collaboration
Envisioning a Bottom-Up AI Ecosystem for People and Planet
Episode Resources
Transcript
Ross Dawson: Jacob, it is awesome to have you on the show.
Jacob Taylor: Ross, thanks for having me.
Ross: So we met at Human Tech Week in San Francisco, where you were sharing all sorts of interesting thoughts that we’ll come back to. What are your top-of-mind reflections of the event?
Jacob: Look, I had a great week, and largely because of all the great people I met, to be honest. And I think what I picked up there was people really driving towards the same set of shared outcomes.
Really people genuinely building things, talking about ways of working together that were driving at outcomes for, ultimately, for human flourishing, for people and planet.
And I think that’s such an important conversation to have at the moment, as things are moving so fast in AI and technology, and sometimes it’s hard to figure out where all of this is leading, basically. And so to have humans at the center is a great principle.
Ross: Yeah, well, where it’s leading is where we take it. So I think having the humans at the center is probably a pretty good starting point.
So one of the central themes of this podcast—for ages now—has been collective intelligence. And so you are diving deep into applying collective intelligence to achieve the Sustainable Development Goals, and I would love to hear more about what you’re doing and how you’re going about it.
Jacob: Yeah, so I mean, very quickly, I’m an anthropologist by training. I have a background in elite team performance as a professional rugby player, and then studying professional team sport for a number of years.
So my original collective is the team, and that’s kind of my intuitive starting point for some of this. But teams are very well built to solve problems that no individual can achieve alone, and really a lot of the SDG problems that we have—issues that communities at every scale have trouble solving on their own—need a whole community to tackle a problem, rather than just one individual or set of individuals within a community.
So the SDGs are these types of—whether it’s climate action or ending extreme poverty or sustainability at the city level—all of these issues require collective solutions. And so if we’re faced with problems that are moving fast and require collective solutions, then collective intelligence becomes the toolkit or the approach that we need to use to tackle those problems.
I’ve been thinking a lot about this idea that in the second half of the 20th century, economics as a discipline went from pretty much on the margins of policymaking and influence to right at the center. By the end of the 20th century, economists were at the heart of informing how decisions were made at the country level, at firms, and so on. That was because an economic framework really helped make those decisions.
I think my sense is that the problems we face now really need the toolkit of the science of collective intelligence. So that’s one of the ideas I’ve been exploring—is it time for collective intelligence as a science to really inform the way we make decisions at scale, particularly for our hardest problems like the SDGs?
Ross: One of your initiatives—so at Brookings Institution, one of the initiatives is 17 Rooms. I’m so intrigued by the name and what that is and how that works.
Jacob: Yeah. So, 17 Rooms. We have 17 Sustainable Development Goals, and so on. Five or so years ago now—or more, I think it’s been running for seven or eight years now—17 Rooms thought: what if we found a method to break down that complexity of the SDGs?
A lot of people talk about the SDGs as everything connected to everything, which sometimes is true. There are a lot of interlinkages between these issues, of course. But what would it look like to actually break it down and say, let’s get into a room and tackle a slice of one SDG?
So Room 1: SDG 1 for ending extreme poverty. Let’s take on a challenge that we can handle as a team.
And so 17 Rooms gathers groups of experts into working groups—or short-term SWAT teams of cooperation, basically—and really gets them to think through big ideas and practical next steps for how to bend the curve on that specific SDG issue.
Then there’s an opportunity for these rooms or teams to interact across issues as well. So it provides a kind of “Team of Teams” platform for multi-stakeholder collaboration within SDG issues, but also connecting across the full surface of these problems as well.
Ross: So what from the science of collective intelligence—or anything else—what specific mechanisms or structures have you found useful? Are you trying to enable the collective intelligence within and across these rooms or teams?
Jacob: Yeah, so I think—I mean, they’re all quite basic principles. We do a lot on trying to curate teams and also trying to run them through a process that really facilitates collaboration. But the principles are quite basic, really.
I mean, one of the most fundamental principles is taking an action stance. One of the biggest principles of collective intelligence is that intelligence comes from action. This is a principle we get from biology: organisms act first and then learn on the run. You don’t sit there and ask what kind of action you could take together as a multicellular organism—rather, it just unfolds, and then learning comes off the back of that action.
So in that spirit, we really try to gear our teams and rooms into an action stance, and say, rather than just kind of pointing fingers at all the different aspects of the problem, let’s say: what would it look like for us in this room to act together? And then, what could we learn from that?
Trying to get into that stance is really foundational to the 17 Rooms initiative.
And then I think the other part is really bonding or community—so knowing that action and community are two sides of the same coin. When you act together, you connect and you share ideas and information. But likewise, communities of teams that are connected are probably more motivated to act together and to be creative and think beyond just incentives. But like, what can we really achieve together?
And so we try to pair those two principles together in everything that we do.
Ross: So this comes back to this point—there’s many classic frameworks and realities around acting and then learning from that. So your OODA Loop, your observe, orient, decide, act, or your Lean Startup loop, or Kolb’s learning cycle, or whatever it might be, where we act, but we only learn because we have data or insight.
So that’s a really interesting point—where we act, but then, particularly in a collective intelligence perspective, we have all sorts of data we need to filter and make sense of that not just individually, but collectively—in order to be able to understand how it is we change our actions to move more towards our outcomes.
Do you have any structures for being able to facilitate that flow of feedback or data into those action loops?
Jacob: Yeah, I think—and again, I’m very biased as an anthropologist here—so the third principle that we think about a lot, and that answers your question, is this idea of ritual.
We’re acting, we’re connecting around that action, and that’s a back-and-forth process. But then rituals actually are a space where we can elevate the best ideas that are coming out of that process and also challenge the ideas that aren’t serving us.
Famously across time for humans, ritual has been an opportunity both to proliferate the best behaviors of a society, but also to contest the behaviors that aren’t serving performance. Ultimately—you don’t always think about this in performance terms—but ultimately, when you look at it big picture, that’s what’s happening.
So I think rituals of differentiation between the data that are serving us versus not, I think is really important for any team, organization, or community.
Ross: That’s really interesting. Could you give an example of a ritual?
Jacob: Well, so there are rituals that can really—like walking on hot coals. Again, let’s start anthropological, and then maybe we can get back to collective intelligence or AI.
Walking on hot coals promotes behaviors of courageousness and devotion. Whereas in other settings, you have a lot of rituals that invert power structures—so men dressing up as women, women dressing up as men, or the less powerful in society being able to take on the behaviors of the powerful and vice versa.
That actually calls out some of the unhelpful power asymmetries in a society and challenges those.
So in that spirit, I think when we’re thinking about high-performing teams or communities tackling the SDGs, I think there needs to be more than just… I’m trying to think—how could we form a ritual de novo here?
But really, there needs to be, I guess, those behaviors of honesty and vulnerability as much as celebration of what’s working. That maybe is easier to imagine in an organization, for example, and how a leader or leaders may try to really be frank about the full set of behaviors and activities that a team is doing, and how that’s working for the group.
Ross: So you’ve written a very interesting article referring to Team Human and the design principles that support—including the use of AI—and being able to build better team performance. So what are some of the design principles?
Jacob: Well, I think this work came a little bit out of some work I did on a DARPA program before coming to Brookings, around building mechanisms for collective intelligence. And when you boil it down to that fundamental level, it really comes down to having a way to communicate between agents or between individuals—what the jargon in psychology calls theory of mind.
So, do I have a theory of Ross—what you want—and do you have a theory of what I want? That’s basically social intelligence. It’s the basic key here.
But it really comes down to some way of communicating across differences. And then with that, the other key ingredient that we surfaced when we built a computational model of this, in a basic way, was an ability to align on shared goals.
So it feels like there’s some combination of social intelligence and shared goals that is foundational to any collective intelligence that emerges in teams or organizations or networks. And so trying to find ways to build those—whether that’s at the community level…
For example, say a city wants to develop its waste recycling program. If you break that down, it really is a whole bunch of neighborhoods trying to develop recycling practices. So the question for me is: do all those neighborhoods have a way of communicating with each other about what they’re doing in service of a shared goal of, let’s say, a completely circular recycling economy at the city level?
And if not, then what kind of interaction and conversations need to happen at the city level so that you can share best practices, challenge practices that are hurting everyone, and then find a way to drive collective action towards a shared outcome. But I’d also think about that, like, at the team level, where there are ways to really encourage theory of mind and perspective sharing.
Ross: So, in some of that work, you refer to digital twins—essentially being able to model how people might think or behave. If you are using digital twins, how is that put into practice in being able to build better team performance?
Jacob: Yeah, great. Yeah, that’s probably really where the AI piece comes in.
Because that recycling-at-the-city-level example that I shared—this kind of collective intelligence happens without AI.
But the promise of AI is to say, well, if you could actually store a lot of information in the form of digital twins that represented the interests and activities of, let’s say, neighborhoods in a city trying to do recycling—
Well, then beyond our human cognition, you could be trying to look for patterns and opportunities for collaboration by leveraging the power of AI to recognize patterns and opportunities across diverse data sets.
The idea is you could kind of try to supercharge the potential collective intelligence about problem-solving by positioning AI as a team support—or a digital twin that could say, hey, actually, if we tweak our dials here and use this approach, that could align with our neighbor’s approach, and maybe we should have a chat about it.
So there’s an opportunity to surface patterns, but then also potentially perform time-relevant interventions for human decision-makers to help encourage better outcomes.
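As a hedged illustration of that pattern-surfacing idea, the sketch below represents each neighborhood "twin" as a simple vector of recycling activities and flags pairs whose profiles are similar enough to be worth a conversation. The neighborhoods, numbers, and threshold are all invented for the example; a real system would work over much richer data.

```python
from math import sqrt

# Hypothetical twins: share of each neighborhood's effort by activity.
twins = {
    "Northside":  {"compost": 0.7, "glass": 0.1, "e-waste": 0.2},
    "Riverview":  {"compost": 0.6, "glass": 0.2, "e-waste": 0.2},
    "Harbourton": {"compost": 0.1, "glass": 0.8, "e-waste": 0.1},
}

def cosine(a, b):
    """Cosine similarity between two sparse activity profiles."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_chats(twins, threshold=0.9):
    """Return neighborhood pairs whose profiles are close enough that aligning
    their approaches (shared pickups, shared facilities) looks promising."""
    names = list(twins)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            score = cosine(twins[a], twins[b])
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return pairs

print(suggest_chats(twins))   # e.g. [('Northside', 'Riverview', 0.99)]
```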
Ross: I think you probably should try a different phrase, because “digital twin” sounds like you’ve got a person, then you’ve got a copy of that person.
Whereas you’re describing it here as representing—could be a neighborhood, or it could be a stakeholder group. So it’s essentially a representation, or some kind of representation, of the ways of thinking or values of a group, potentially, or community, as opposed to an individual.
Jacob: Indeed, yeah. I think this is where it all gets a bit technical, but yeah, I agree that “twin”—”digital twin”—evokes this idea of an individual body.
But if you extend that out, when you really take seriously some of the collective intelligence work, collectives become intelligent when they become a full thing, like a body, when they really individuate as a collective.
Teams really click and perform when they become one—so that it’s no longer just these individual bodies. It’s like the team is a body.
So I think in that spirit, when I think about this, I actually think about neighborhoods having a collective identity. That could be reflected in their twin, or like, of the community.
But I agree there’s maybe some better way to imagine what that kind of community AI companion looks like at higher scales.
Ross: So at Human Tech Week, you shared this wonderful story about how AI could represent not just human groups, but also animal species.
Love to hear it. I think that really gives it a very real context, because you're understanding it from another frame.
Jacob: Yeah. And I think it’s true, Ross.
I’ve been struck by how much this example of interspecies money—that I’ll explain a little bit—is not only exciting because it has potential benefit for nature and the beautiful natural environment that we live in, but I think it actually helps humans understand what it could look like to do it for us too.
And so, interspecies money, basically, is this idea developed by a colleague of ours at Brookings, Jonathan Ledgard. We had a room devoted to this last year in 17 Rooms to try and understand how to scale it up.
But what would it look like to give non-human species—like gorillas, or elephants, or trees—a digital ID and a bank account, and then use AI to reverse engineer or infer the preferences of those animals based on the way they behave?
And then give them the agency to use the money in their bank account to pay for services.
So if what gorillas most rely on, for example, is protection of their habitat, then they could pay local community actors to protect that habitat, to extend it, and to protect them from poachers.
That could all be inferred through behavioral trace data and AI, but then also mediated by a trustee of gorillas—a human trustee.
It’s quite a futuristic idea, but it’s actually really hit the ground running. At the moment, there are pilots with gorillas in Rwanda, elephants in India, and ancient trees in Romania.
So it’s kind of—the future is now, a little bit, on this stuff.
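A speculative sketch of that flow, with every name, number, and inference rule invented for illustration rather than taken from the actual pilots: infer priorities from behavioral trace data, propose payments from the species' account, and gate everything behind a human trustee.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    service: str
    amount: float

def infer_priorities(trace_data):
    """Toy 'inference': weight services by how often related behaviors appear
    in the trace data (a stand-in for the AI models used in real pilots)."""
    total = sum(trace_data.values()) or 1
    return {service: count / total for service, count in trace_data.items()}

def propose_spending(priorities, balance):
    """Split the account balance across services in proportion to priority."""
    return [Proposal(service, round(balance * weight, 2))
            for service, weight in priorities.items()]

def trustee_review(proposals, approve):
    """Only proposals the human trustee approves are actually paid out."""
    return [p for p in proposals if approve(p)]

# Hypothetical gorilla account: behaviors observed near habitat boundaries.
traces = {"habitat patrol support": 60, "habitat extension": 25, "anti-poaching": 15}
proposals = propose_spending(infer_priorities(traces), balance=10_000.0)
approved = trustee_review(proposals, approve=lambda p: p.amount >= 1_000)
for p in approved:
    print(f"pay {p.amount} for {p.service}")
```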
I think what it really does is help you understand: if we really tried to position AI in a way that helps support our preferences and gives agency to those from the bottom up, then what?
What world would that look like?
And I think we could imagine the same world for ourselves. A lot of our AI systems at the moment are kind of built top-down, and we’re the users of those systems.
What if we were able to build them bottom-up, so that at every step we were representing individual, collective, community interests—and kind of trading on those interests bottom-up?
Ross: Yeah, well, there's a lot of talk about AI alignment, but this is, like, a pretty deep level of alignment that we're talking about, right?
Jacob: Right.
And yeah, I think Sandy Pentland, who I shared the panel with—he has this idea of, okay, so there are large language models.
What would it look like to have local language models, small language models that were bounded at the individual level?
So Ross, you had a local language model, which was the contents of your universe of interactions, and you could perform inferences using that.
And then you and I could create a one-plus-one-plus-one-equals-three kind of local language model, which was for some use case around collective intelligence.
This kind of bottom-up thinking, I think, is actually technically very feasible now.
We have the algorithms, the understanding of how to train these models. And we also have the compute—in devices like our mobile phones—to perform the inference.
It’s really just a question of imagination, and also getting the right incentives to start building these things bottom-up.
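As a minimal sketch of that "one plus one equals three" idea, with simple keyword retrieval standing in for real on-device language models purely for illustration: each person's model answers only from their own local notes, and a federated query pools them for an agreed shared use case.

```python
def make_local_model(notes):
    """Return a function that answers a query using only this person's notes.
    In a real system this would be on-device inference over personal context."""
    def query(text):
        words = set(text.lower().split())
        return [n for n in notes if words & set(n.lower().split())]
    return query

def federate(*models):
    """Pool several local models for a shared, opt-in use case."""
    def query(text):
        results = []
        for m in models:
            results.extend(m(text))
        return results
    return query

# Hypothetical personal corpora.
ross_model = make_local_model(["Workshop notes on collective intelligence",
                               "Draft on augmenting human cognition"])
jacob_model = make_local_model(["Field notes on collective intelligence for SDGs",
                                "DARPA team-science retrospective"])

shared = federate(ross_model, jacob_model)
print(shared("collective intelligence"))   # surfaces items from both corpora
```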
Ross: So one of the things you’ve written about is vibe teaming.
We've got vibe coding, and vibe all sorts of things. You and your colleagues created vibe teaming.
So what is it? What does it mean? And how do we do it?
Jacob: Good question.
Yeah, so this is some work that a colleague of mine, Kirsch, and I did at Brookings this year.
We got to a point where, with our teamwork—you know, Brookings is a knowledge work organization, and we do a lot of that work in teams. A lot of the work we do is to try and build better knowledge products and strategies for the SDGs and these types of big global challenges.
The irony was, when we were thinking about how to build AI tools into our workflow, we were using a very old-school way of teaming to do that work.
We were using this kind of old industrial model of sequential back-and-forth workflows to think about AI—when AI was probably one of the most, potentially the most, disruptive technologies of the 21st century.
It just felt very ironic. To do a PowerPoint deck, Ross, you would give me the instructions. I would go away and draft it. I would take it back to you and say, “Is this right?” And you would say, “Yes, but not quite.”
So instead, we said, “Wait a minute. The internet is blowing up around vibe coding,” which is basically breaking down that sequential cycle.
Instead of individuals talking to a model with line-by-line syntax, they’re giving the model the vibe of what they want.
We’re using AI as this partner in surfacing what it is we’re actually trying to do in the first place.
So Kirsch and I said, “Why don’t we vibe team this?”
Why don’t we get together with some of these challenges and experts that we’re working with and actually get them to tell us the vibe of what they’ve been learning?
Homi Kharas is a world expert, a 40-year expert, on ending extreme poverty. We sat down with him, and in 30 minutes, we really pushed him to give us, like:
“Tell us what you really think about this issue. What’s the really hard stuff that not enough people know about? Why isn’t it working already?”
These kinds of questions.
We used that 30-minute transcript as a first draft input to the model. And in 90 minutes, through interaction with AI—and some human at the end to make sure it all looked right and was accurate—we created a global strategy to end extreme poverty.
That was probably on par with anything you see, and probably better, in fact, than what comes out of many global actors whose main business is to end extreme poverty.
So it’s an interesting example of how AI can be a really powerful support to team-based knowledge work.
Ross: Yeah, so, I mean, obviously this is you. The whole nature of the vibe is that there's no explicit, no specific, replicable structure. We're going with the vibes.
But where can you see this going in terms of getting a group of complementary experts together, and what might that look like as the AI-augmented vibe teaming?
Jacob: Well, I mean, you’re right. There was a lot of vibe involved, and I think that’s part of the excitement for a lot of people using these new tools.
However, we did see a few steps that kept re-emerging. I’ve mentioned a few of them kind of implicitly here, but the big one—step one—was to really start with rich human-to-human input as a first step.
So giving the model a 30-minute transcript of human conversation versus sparse prompts was a real game changer for us working with these models.
It’s almost like, if you really set the bar high and rich, then the model will meet you there—if that makes sense.
Step two was quickly turning around a first draft product with the model.
Step three was then actually being patient and open to a conversation back and forth with the model.
So not thinking that this is just a one-button-done thing, but instead, this is a kind of conversation—interaction with the model.
“Okay, so that’s good there, but we need to change this.” “Your voice is becoming a little bit too sycophantic. Can you be a bit more critical?”
Or whatever you need to do to engage with the model there.
And then, I think the final piece was really the need to go back and meet again together as a team to sense-check the outputs, and really run a rigorous human filter back over the outputs to make sure that this was not only accurate but analytically on point.
This idea that sometimes AI looks good but smells bad—and with these outputs, sometimes we’d find that it’s like, “Oh, that kind of looks good,” but then when you dig into it, it’s like, “Wait a minute. This wasn’t quite right here and there.”
So just making sure that it not only looks good but smells good too at the end.
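For readers who want the workflow spelled out, here is a hedged sketch of those four steps in Python. The `call_model` function is a placeholder for whichever chat model you use, and the prompts are invented; this illustrates the process, not the exact tooling used at Brookings.

```python
def call_model(messages):
    """Placeholder for a chat-model API call; returns a dummy draft here."""
    return "[draft produced by the model from the conversation so far]"

def vibe_team(expert_transcript, revision_notes, human_review):
    # Step 1: rich human-to-human input, not a sparse prompt.
    messages = [
        {"role": "system", "content": "You are drafting a global strategy."},
        {"role": "user", "content": "Raw expert conversation:\n" + expert_transcript},
    ]
    # Step 2: quickly turn around a first draft.
    draft = call_model(messages)
    # Step 3: patient back-and-forth, one revision note at a time.
    for note in revision_notes:
        messages += [{"role": "assistant", "content": draft},
                     {"role": "user", "content": note}]
        draft = call_model(messages)
    # Step 4: the team sense-checks the output; nothing ships without it.
    return draft if human_review(draft) else None

strategy = vibe_team(
    expert_transcript="[30-minute transcript with the subject-matter expert]",
    revision_notes=["Less sycophantic, more critical.",
                    "Tighten the section on financing."],
    human_review=lambda text: True,   # stands in for the team review meeting
)
print(strategy)
```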
Yeah. And so I think these basic principles—we’re seeing them work quite well in a knowledge work context.
And I guess for us now, we're really interested in a double-barreled investigation with approaches like vibe teaming.
On the one hand, it’s really about the process and the how—like, how are we positioning these tools to support collaboration, creativity, flow in teamwork, and is that possible?
So it’s really a “how” question.
And then the other question for us is the "what." So what are we pointing these approaches at?
For example, we’re wondering—if it’s ending extreme poverty, how could we use vibe teaming to actually…
And Scott Page uses this term—how can we use it to expand the physics of collective intelligence?
How can we run multiple vibe teaming sessions all at once to be much more inclusive of the types of people who participate in policy strategy formation?
So that when you think about ending extreme poverty, it’s ending it for whom? What do they want? What does it look like in local communities, for example?
That idea of expanding the physics of collective intelligence through AI and approaches like vibe teaming is very much on our minds at the moment, as we think about next steps and scale-up.
Ross: Obviously, the name of the podcast is Humans Plus AI, and I think what you’re describing there is very much the best of humans—and using AI as a complement to draw out the best of that.
Nice segue, since you just referred to next steps.
You’ve described a lot of the wonderful things you’re doing—some fantastic approaches to very, very critically important issues.
So where to from here? What’s the potential? What are the things we need to be doing? What’s the next phase of what you think could be possible and what we should be doing?
Jacob: Yeah, I think I’m really excited about this idea of growing an alternate AI ecosystem that works for people and planet, rather than the other way around.
Part of the work at Brookings is really setting up that agenda—that research agenda—for what that ecosystem could look like.
We discussed it a little bit together at Human Tech Week.
I think of that in three parts.
There’s the technical foundation—so down to the algorithms and the architectures of AI models—and thinking about how to design and build those in a way that works for people.
That includes, for example, social intelligence built into the code.
Another example there is around, in a world of AI agents—are agents working for humans, or are they working for companies?
Sandy Pentland’s work on loyal agents, for example—which maybe we could link to afterward—I think is a great example of how to design agents that are fiduciaries for humans, and actors for humans first, and then others later.
Then, approaches like vibe teaming—ways of bringing communities together using AI as an amplifier.
And then I think the key piece, for me, is how to stitch the community of actors together around these efforts.
So the tech builders, the entrepreneurs, the investors, the policymakers—how to bring them together around a common format.
That’s where I’m thinking about a few ideas.
One way to try to get people excited about it might be this idea of not just talking about it in policy terms or going around to conferences.
But what would it look like to actually bring together a lab or some kind of frontier research and experimentation effort—where people could come together and build the shared assets, protocols, and infrastructures that we need to scale up great things like interspecies money, or vibe teaming, or other approaches?
Where, if we had collective intelligence as a kind of scientific backbone to these efforts, we could build an evidence base and let the evidence base inform new approaches—trying to get that flywheel going in a rigorous way.
Trying to be as inclusive as possible—working on everything from mental health and human flourishing through to population-level collective intelligence and everything in between.
Ross: So can you paint that vision just a little bit more precisely?
What would that look like, or what might it look like?
What’s one possible manifestation of it? What’s the—
Jacob: Yeah, I mean, it’s a good question.
So this idea of a frontier experimental lab—I think maybe I’m a little bit informed by my work at DARPA.
I worked on a DARPA program called ASIST, Artificial Social Intelligence for Successful Teams, and that really used this kind of team science approach, where you had 12 different scientific labs all coming together for a moonshot-type effort.
There was that kind of idea of, we don’t really know how to work together exactly, but we’re going to figure it out.
And in the process of shooting for the moon, we’re hopefully going to build all these shared assets and knowledge around how to do this type of work better.
So I guess, in my mind, it’s kind of like: could we create a moonshot for collective intelligence, where collective intelligence is really the engine—and the goal was trying to, for example, end extreme poverty, or reach some scale of ecosystem conservation globally through interspecies money?
Or—pick your SDG issue.
Could we do a collective intelligence moonshot for that issue?
And in that process, what could we build together in terms of shared assets and infrastructure that would last beyond that one moonshot, and equip us with the ingredients we need to do other moonshots?
Ross: Yeah, well, again, going back to the feedback loops—of what you learn from the action in order to be able to inform and improve your actions beyond that.
Jacob: Exactly, yeah.
And I think the key ingredients here are really taking seriously what we’ve built now in terms of collective intelligence. It is a really powerful, transdisciplinary scientific infrastructure.
And I think it means taking that really seriously, and drawing on the collective intelligence of that community to create evidence and theories that can inform applications, and then keeping that loop running.
I think what I discovered at Human Tech Week with you, Ross, is this idea that there’s a lot of entrepreneurial energy—and also capital as well.
I think a lot of investors really want to put their money where their mouths are on these issues.
So it feels like it’s not just kind of an academic project anymore. It’s really something that could go beyond that.
So it feels like it's time for collective intelligence. We need to get these communities and constituencies working together and build a federation of folks who are all interested in a similar outcome.
Ross: Yeah, yeah. The potential is extraordinary.
And so, you know, there’s a lot going on—not all of it good—these days, but there’s a lot of potential for us to work together.
And again, there’s amplifying positive intent, which is part of what I was sharing at Human Tech Week.
I was saying, what is our intention? How can we amplify that positive intention, which is obviously what you are doing in spades.
So how can people find out more about your work and everything which you’ve been talking about?
Jacob: Well, most of my work is on my expert page on Brookings.
I’m here at the Center for Sustainable Development at Brookings, and I hope I’ll be putting out more ideas on these topics in the coming months.
I’ll be mainly on LinkedIn, sharing those around too.
Ross: Fantastic. Love what you’re doing.
Yeah—and yeah, it’s fun. It’s fantastic. So really, really glad you’re doing that.
Thank you for sharing, and hopefully there’s some inspiration in there for some of our listeners to follow similar paths.
Jacob: Thanks, Ross. I appreciate your time. This has been fun.
The post Jacob Taylor on collective intelligence for SDGs, interspecies money, vibe-teaming, and AI ecosystems for people and planet (AC Ep10) appeared first on Humans + AI.

Jul 9, 2025 • 12min
AI & The Future of Strategy (AC Ep9)
“Strategy really must focus on those purely human capabilities of synthesis, and judgment, and sense-making.”
– Ross Dawson
About Ross Dawson
Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of the Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload.
Website:
Ross Dawson
Advanced Human Technologies
LinkedIn Profile:
Ross Dawson
Books
Thriving on Overload
Living Networks 20th Anniversary Edition
Implementing Enterprise 2.0
Developing Knowledge-Based Client Relationships
What you will learn
How AI is reshaping strategic decision-making
The accelerating need for flexible leadership
Why trust is the new competitive advantage
The balance between human insight and machine analysis
Storytelling as the heart of effective strategy
Building learning-driven, adaptive organizations
The evolving role of leaders in an AI-first world
Episode Resources
Transcript
Ross Dawson: This is a little bit of a different episode. Instead of an interview, I will be sharing a few thoughts in the context of now doubling down on the Humans Plus AI theme. Our community is kicking off at the next level. As you may have noticed, the podcast has been rebranded Humans Plus AI, and is now fully focused on this theme of how AI can augment humans: individuals, organizations, and society.
So what I want to share today is some of the thoughts which came out of Human Tech Week. I was fortunate to be at Human Tech Week in San Francisco a few weeks ago. I did the opening keynote on Infinite Potential: Humans Plus AI, and I’ll share some more thoughts on that another time.
But what I also did was run a lunch event, a panel with myself, John Hagel, and Charlene Li, talking about AI and the future of strategy. It was an amazing conversation, and I can't do it justice now, but what I want to do is share some of the high-level themes that came out of that conversation, bringing my own particular slant to them.
So we started off by thinking about how change generally, including AI, is impacting strategy and the strategy process. Fairly obviously, we have accelerating change. That means that decision cycles are getting shorter, and strategy needs to move faster.
It also means that there is the ability for creation of all kinds to be democratized within, across, and beyond organizations, allowing them to innovate, to act without necessarily being centralized. And this idea of this abundance of knowledge, coupled with the scarcity of insight, means that strategy really must focus on those purely human capabilities of synthesis, and judgment, and sense-making.
There's also the theme that institutional trust is eroding. This means that more and more, strategy shifts to relationship-based models, ecosystem-based models.
And this overarching theme, which John Hagel in particular brought out, is this idea that there is greater fear amongst leaders. There's greater emotional pressure, and these basically shrink the timeline of our thinking. They force us into shorter-term thinking. We are acting based on fear, driven by a whole variety of pressures from shareholders, stakeholders, politicians, and more.
We need to allow ourselves to move beyond the fear, as John's latest book The Journey Beyond Fear lays out (highly recommended), which then unlocks our strategic imagination and new ways of thinking.
So one of the core themes of the conversation was around: what are the relative roles of AI and humans in the strategy process? Humans are strategic thinkers by their very nature, and now we have AI which can support us and complement us in various ways.
Of course, there is a strong way in which AI can use data. It can do a lot of analysis. It is very capable at pattern recognition. It can move faster. It can simulate scenarios and futures, identify signals, and so it can scale what can be done in strategy analysis. It can go deeper into the analysis.
But this brings the human role of the higher levels: of the creativity, of the imagination, of the judgment, the ethical framing, the purpose, the vision, the values.
One of the key things which came out of it was around storytelling, where strategy is a story. It’s not this whole array of KPIs and routes to get them—that’s a little part of it. It is telling a story that engages people, that makes them passionate about what they want to do and how they are going to do it—that’s their heroes and heroines’ journey.
So this insight, this sense-making, is still human.
There’s a wonderful quote from the session, saying, “AI without data is extremely stupid,” but even with the data, it can’t deliver the insight or the wisdom on its own. That is something where the human function resides.
And so we are still responsible for the oversight and for the ethical nature of the decisions. Especially as we have more and more autonomous agents, we have very opaque systems. And accountability is fundamental to all leadership and to the nature of strategy.
So a leader's role is to bring together those ways in which we bring in AI: deciding when to trust it, deciding when to override it, and how to frame its contribution. That's an intrinsic part of strategy: the role of AI in what the organization does, how it functions, and how it establishes and communicates direction.
Well, there was a lot of discussion around the tensions. And again, John shared this wonderful frame he’s been using for a while about “zoom out and zoom in.” Essentially, he says that real leaders—the most successful organizations—they have a compelling 10- or 20-year vision, and they also have plans for the next six to twelve months, and they don’t have much in between.
And so you can zoom out to sort of see this massive scale of: Why do we exist? What are we trying to create? But also looking, shrinking down to saying, All right, well, what is it we’re doing right now—creating momentum and moving towards that.
And so this dual framing is emotionally resonant. It shifts people from fear to hope by being able to see this aspiration and also seeing progress today.
And so there are these polarities that we manage in strategy. We're balancing focus with flexibility. We need to be clearly guided in where we are going, so we need coherence: we need to know what we are doing, and we need to focus our resources.
And so this balance between flexibility—where we can adapt to situations—while maintaining continuity in moving forward, is fundamental.
One of the fundamental themes that came out of the conversation, which comes back to some of my core themes from a very long time ago, is this idea of knowledge and trust.
So AI is widely accessible. Everyone's got it in various guises. So where does competitive advantage reside? Fundamentally, it comes from trust: trust in the AI, trust in how the AI is used, trust in the intentions, and ultimately trust in the people who have shaped the systems and use them well.
So this means that as you create long-term, trust-based relationships, you get more and more advantages. And this comes back to my first book on Knowledge-Based Client Relationships, which I’ve extended and applied in quite a variety of domains, including in my recent work on AI-driven business model innovation.
We're essentially saying that in an AI-driven world, trust in the systems means you can have access to more data and more insight from people and organizations, which you can apply in building this virtuous circle of differentiation. You add value, you gain trust, you get insight from that, flowing through into more value.
So ultimately, this is about passion. What John calls the passion of the explorer, where we are committed to learning and questioning and creating value.
So I suppose that, in a way, the key theme that ran through the entire conversation was learning, where learning is not about running workshops or taking bodies of knowledge and getting everybody to know them.
It is about continuous exploration of the new. Every successful organization needs to enable the people inside it to be passionate about what they are learning, to explore, to learn from their exploration, and to share that, building sustainable, scalable learning suited to a fast-moving world.
That learning gives us a consistent strategy, which enables us to both have direction and be flexible and adaptable in an accelerating world.
So that just touches on some of the themes which we discussed in the session, and I will continue to share, write some more—what I call mini reports—just to frame some of these ideas.
But the reality is that the nature of strategy is changing. This means the nature of leadership is changing, and we need to understand and dig into that change: where AI plays a role, how that shifts human roles, how leadership changes.
Because these are fundamental to our success, not just as individual organizations, but also as industries and society at large. Because our strategies, of course, must support not just individual entities or organizations, but the entire ecosystems and communities and societies in which they are embedded.
So we’ll come back. We’ve got some amazing guests coming up in our next episode, so make sure to tune in for the next episodes.
Please continue to engage. Get onto Humans Plus AI, sign up for our newsletter, and we’ll see you on the journey.
The post AI & The Future of Strategy (AC Ep9) appeared first on Humans + AI.

Jun 25, 2025 • 34min
Matt Lewis on augmenting brain capital, AI for mental health, neurotechnology, and dealing in hope (AC Ep8)
“The big picture is that every human on Earth deserves to live a life worth living… free of mental strife, physical strife, and the strife of war.”
– Matt Lewis
About Matt Lewis
Matt is CEO, Founder and Chief Augmented Intelligence Officer of LLMental, a Public Benefit Limited Liability Corporation venture studio focused on augmenting brain capital. He was previously Chief AI Officer at Inizio Health, and contributes in many roles, including as a member of OpenAI's Executive Forum, Gartner's Peer Select AI Community, and faculty at the World Economic Forum's New Champions initiative.
Website:
Matt Lewis
LinkedIn Profile:
Matt Lewis
What you will learn
Using AI to support brain health and mental well-being
Redefining mental health with lived experience leadership
The promise and danger of generative AI in loneliness
Bridging neuroscience and precision medicine
Citizen data science and the future of care
Unlocking human potential through brain capital
Shifting from scarcity mindset to abundance thinking
Episode Resources
Transcript
Ross Dawson: Matt, it’s awesome to have you on the show.
Matt Lewis: Thank you so much for having me. Ross, it’s a real pleasure and honor. And thank you to everyone that’s watching, listening, learning. I’m so happy to be here with all of you.
Ross: So you are focusing on using AI amongst other technologies to increase brain capital. So what does that mean?
Matt: Yeah. I mean, it’s a great question, and it’s, I think, the challenge of our time, perhaps our generation, if you will.
I've been in artificial intelligence for 18 years, which is like an eon in the current environment, if you will. I built my first machine learning model about 18 years ago for Parkinson's disease, a degenerative condition where people lose the ability to control their body as they wish they could.
I was working at Boehringer Ingelheim at the time, and we had a drug, a dopamine agonist, to help people regain function, if you will. But some small number of people developed this weird side effect, this adverse event that didn’t appear in clinical trials, where they became addicted to all sorts of compulsive behaviors that made their actual lives miserable. Like they became shopping addicts, or they became compulsive gamblers. They developed proclivities to sexual behaviors that they didn’t have before they were on our drug, and no one could quite figure out why they had these weird things happening to them.
And even though they were seeing the top academic neurologists in this country, the United States, or in other countries, no one could say why Ross would get this adverse event and Matt wouldn't. It didn't appear in the studies, and there was no way to figure it out.
The only thing that really sussed out what was an adverse event versus what wasn't was advanced statistical regression and, later, machine learning. But back in the day, almost 20 years ago, you needed massive compute, massive servers shipped in on trucks, to run these kinds of analyses and actually improve clinical outcomes.
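For a sense of what that kind of analysis looks like with today's tooling, here is a hedged sketch on fabricated data: a logistic regression over synthetic patient features that surfaces which factors are associated with the adverse event. It is an illustration only, not the original analysis, and every feature and coefficient is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
# Synthetic patient features: age, dose, and a prior-history flag.
age = rng.normal(65, 8, n)
dose = rng.uniform(0.5, 4.0, n)
history = rng.integers(0, 2, n)
# Assume (for the toy) that dose and prior history drive the adverse event.
logit = -6 + 1.2 * dose + 1.5 * history + 0.01 * age
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, dose, history])
model = LogisticRegression(max_iter=1000).fit(X, y)

# Which features move the risk of the adverse event?
for name, coef in zip(["age", "dose", "history"], model.coef_[0]):
    print(f"{name}: coefficient {coef:+.2f}")
print("predicted risk for a high-dose patient with history:",
      round(model.predict_proba([[70, 3.5, 1]])[0, 1], 2))
```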
Now, thankfully, the ability to provide practical innovation in the form of AI to improve people's actual lives through brain health is much more accessible and democratized, in a way that wasn't available then.
And if it first appeared for motor symptoms, for neurodegenerative disease, some time ago, now we can use AI to help not just the neurodegenerative side of the spectrum but also neuropsychiatric illness, mental illness, to help identify people that are at risk for cognition challenges.
Here in Manhattan, it’s like 97 degrees today. People don’t think the way they normally do when it’s 75. They make decisions that they perhaps wish they hadn’t, and a lot of the globe is facing similar challenges.
So if we can kind of partner with AI to make better decisions, everyone’s better off.
That construct—where we think differently, we make better decisions, we are mentally well, and we use our brains the way that was intended—all those things together are brain capital. And by doing that broadly, consistently, we’re better off as a society.
Ross: Fantastic. So that case, you’re looking at machine learning—so essentially being able to pull out patterns. Patterns between environmental factors, drugs used, background, other genetic data, and so on.
So this means that you can—is this, then, alluding, I suppose, to precision medicine and being able to identify for individuals what the right pharmaceutical regimes are, and so on?
Matt: Yeah. I mean, I think the idea of precision medicine, personalized medicine, is very appealing. I think it’s very early, maybe even embryonic, kind of consideration in the neuroscience space.
I worked for a long time for companies like Roche and Genentech, and others in that ecosystem, doing personalized medicine with biomarkers for oncology, for cancer care, where you knew a specific target, an enzyme or a protein that was mutated, and identified what had gone amiss.
Then tried to build a companion diagnostic to find the signal, if you will, and then help people that were suffering.
It's a little bit more straightforward in that regard, almost at the risk of saying so, because if the patient had the biomarker, you knew that the drug would work.
Unfortunately, I think there's a common misconception here. I know you know this exceptionally well, but there are people listening who may not: the state of cognitive neuroscience, that is, what we know of how the brain actually works in the world in which we live, on planet Earth and terra firma, is probably only about as far advanced as the understanding of the heart was when Jesus Christ walked the Earth about 2,000 years ago.
That is, we probably have about 100 years of real knowledge about how the brain works in the world, and we're making decisions about how to engineer personalized medicine on top of a very young, nascent science of the brain, with almost no practical, contextual understanding of how it really works.
So I think personalized medicine has tremendous possible promises. The reality of it doesn’t really pan out so well.
There are a couple of recent examples of this from companies like Neumora, Alto Neuroscience, and others, where they try to build these kinds of ex post facto precision medicine databases of people who have benefited from certain psychiatric medicines.
But they end up not being as beneficial as you’d like them to be, because we just don’t know really a lot about how the brain actually works in the real world.
There is still even the brain versus mind debate. But even if you move past that debate, I think it's hard to find many people building in the space who recognize contextual variables beyond the brain and mind.
Including things like the biopsychosocial continuum, the understanding of spirituality and nature, all the rest.
All these things are kind of moving and changing and dynamic at a constant equilibrium.
And to try to find a point solution that says Matt or Ross are going to be beneficial at this one juncture, and they’re going to change it right now—it’s just exceptionally difficult. Important, but exceptionally difficult.
So I think the focus is more about how do we show up in the real world today, using AI to actually help our actual life be meaningful and beneficial, rather than trying to find this holy grail solution that’s going to be personalized to each person in 2026.
I’m not very optimistic about that, but maybe by 2036 we’ll get a little closer.
Ross: Yeah. So, I mean, I guess, as you say, a lot of what people talk about with precision medicine is specific biomarkers and so on, that you can use to understand when particular drugs would be relevant.
But back to the point where you’re starting with this idea of using machine learning to pick up patterns—does this mean you can perhaps be far more comprehensive in seeing the whole person in their context, environment, background, and behaviors, and so on, to be able to understand what interventions will make sense for that individual, and all of the whole array of patterns that the person manifests?
Matt: Yeah, I think it's a great question. I think the data science and the health science of understanding what might be called the enactive psychiatry of the person, how they make meaning in the world, is just now starting to catch up with reality.
When I did my master’s thesis 21 years ago in health services research, there were people trying to figure out: if you were working in the world, how do we understand when you’re suffering with a particular illness, what it means to you?
It might mean to the policy wonks that your productivity loss is X, or your quality-adjusted life years is minus Y. Or to your employer, that you can’t function as much as you used to function. But to you—does it really matter to you that your symptom burden is A or Z? Or does it really matter to you that you can’t sleep at night?
If you can’t sleep at night, for most people, that’s really annoying. And if you can’t sleep at night six, seven, ten nights in a row, it’s catastrophic because you almost can’t function. Whereas on the quality score, it doesn’t even register—it’s like a rounding error.
So the difference between the patient-reported outcomes for what matters for real people and what it matters to the decision-makers—there’s a lot of daylight between those things, and there has been for a long time.
In the neuropsychiatric, mental health, brain health space, it’s starting to catch up, for I think a couple of reasons.
One, the lived experience movement. I chair the One Mind Community Advisory Network here in the States, which is a group of about 40 lived experience experts with deep subject matter expertise, all of whom suffer from neuropsychiatric illness, neurodivergence, and the rest. These are people that suffer daily but have turned their pain into purpose.
The industry at large has seen that in order to build solutions for people suffering from different conditions, you need to co-create with those people. I mean, this seems intuitive to me, but for many years—for almost all the years, 100 years—most solutions were designed by engineers, designed by scientists, designed by clinicians, without patients at the table.
When you build something for someone without the person there, you get really pretty apps and software and drugs that often don’t work. Now, having the people actually represented at the table, you get much better solutions that hopefully actually have both efficacy in the lab and effectiveness in the real world.
The other big thing I think that’s changing a lot is that people have more of a “citizen data scientist” kind of approach. Because we’re used to things like our Apple Watch, and our iPads, and our iPhones, and we’re just in the world with data being in front of us all the time, there’s more sensitivity, specificity, and demand for visibility around data in our life.
This didn’t exist 20 years ago.
So consider being in an environment where your mental health, your brain health, is being delivered to you, if you will, without any feedback on how well it's working. Twenty years ago, people were like, "Okay, yeah, that makes sense. I'm taking an Excedrin for my migraine. If it doesn't work, I'll take a different medicine."
But now, if you get something and you don’t get feedback on how well it’s working, the person or organization supporting it isn’t doing their job.
There's more of an expectation, if you will, of bringing that data analytics discipline alongside, so that people understand whether they're making progress, what good looks like, whether they're benchmarking against some kind of expectation, and what the leaderboard looks like.
How is Ross doing, versus how Matt’s doing, versus what the gold standard looks like, and all the rest. This didn’t exist a generation ago, but now there’s more to it.
Ross: That’s really interesting. This rise of citizen science is not just giving us data, but it’s also the attitude of people—that this is a normal thing to do: to participate, to get data about themselves, to share that back, to have context.
That’s actually a really strong positive feedback loop to be able to develop better things.
So I think, as well as this idea of simply just getting the patients at the table—so we’ve talked quite a bit, I suppose, from this context of machine learning—of course, generative AI has come along.
So, first of all, just a big picture: what are the opportunities from generative AI for assisting mental well-being?
Matt: Yeah. I mean, first of all, I am definitely a technophile. But that notwithstanding, I will say that no technology is either all good or all bad. I think it’s in the eyes of the beholder—or the wielder, if you will.
I’ve seen some horrific use cases of generative AI that really put a fear into my heart. But I’ve also seen some amazing implementations that people have used that give me a tremendous amount of hope about the near and far future in brain health broadly, and in mental health specifically.
Just one practical example: in the United States and a lot of the English-speaking countries—the UK, New Zealand, and Australia—there is a loneliness epidemic.
When I say loneliness, I don’t mean people that are alone, that either choose to be alone or live lives that are alone. I actually mean people that have a lower quality of life and are lonely, and as a result, they die earlier and they have more comorbid illness. It’s a problem that needs to be solved.
In these cases, there are a number of either point solutions that are designed specifically using generative AI or just purpose-built generative AI applications that can act both as a companion and as a thought partner for people who are challenged in their contextual environment.
These tools step in where people don't have other access or resources, and in those times of need, the AI can catalyze them to get back into an environment that they recall being helpful at an earlier point.
For example, they find an interest in something that they found utility in earlier—like playing chess, or playing a card game, a strategy game, or getting back to dancing or some other “silly” thing that to them isn’t silly, but might be silly to a listener.
And because they rekindle this interest, they go and find an in-person way of reigniting with a community in the environment. The generative AI platform or application catalyzes that connection.
There are a number of examples like that, and the AI is nearly free to use, effectively zero cost for the person, but it prevents them from slipping down the slope toward an actual DSM-5 psychiatric illness, like depression or anxiety, and becoming much, much worse.
They’re kind of rescued by AI, if you will, and they become closer to healthy and well because they either find a temporary pro-social kind of companion or they actually socialize and interact with other humans.
I have seen some kind of scary use cases recently where people who are also isolated—I won’t use the word lonely—don’t have proper access to clinicians.
In many places around the world, there is a significant shortage of licensed professionals trained in mental health and mental illness. In many of these cases, when people don’t have a diagnosed illness or they have a latent personality disorder, they have other challenges coming to the fore and they rely on generative AI for directional implementation.
They do something as opposed to think something, and it can rapidly spiral out of control—especially when people are using GPTs or purpose-built models that reinforce vicious cycles or feedback loops that are negatively reinforcing.
I’ve seen some examples, due to some of the work I do in the lived experience community, where people have these built-in cognitive biases around certain tendencies, and they’ll build a GPT that reinforces those tendencies.
What starts out as a harmless comment from someone in their network—like a boyfriend, employee, or neighbor—suddenly becomes the millionth example of something that’s terrible. The GPT reinforces that belief.
All of a sudden, this person is isolated from the world because they’ve cut off relationships with everyone in their entire circle—not because they really believe those things, but because their GPT has counseled them that they should do these things.
They don’t have anyone else to talk to, and they believe they should do them, and they actually carry those things out. I’ve seen a couple of examples like this that are truly terrifying.
We do some work in the not-for-profit space trying to provide safe harbors and appropriate places for care—where people have considerations of self-harm, where a platform might indicate that someone is at risk of suicide or other considerations.
We try to provide a place where people can go to say, “Is this really what you’re thinking?” If so, there’s a number to call—988—or someone you can reach out to as a clinician.
But I think, like all technologies: you can use a car to drive to the grocery store. You could also use the same car to run someone over.
We have to really think about: what in the technology is innate to the user, and what it was really meant to do?
Ross: Yeah. Well, it’s a fraught topic now, as in there are, as you say, some really negative cases. The commercial models, with their tendency toward sycophancy and encouraging people to continue using them, start to get into all these negative spirals.
We do have, of course, some clinically designed generative AI tools to assist, but not everybody uses those. One of the other factors, of course, is that not everybody has the finances, or the funding isn't available, to provide clinicians for everybody. So it's a bit fraught.
I go back to 15 years ago, I guess—Paro, the robot seal in Japan—which was a very cute, cuddly robot given to people with neurodegenerative diseases. They came out of their shell, often. They started to interact more with other people just through this little robot.
But as you say, there is the potential then for these not to be substitutes. Many people rail against, “Oh, we can’t substitute real human connection with AI,” and that’s obviously what we want.
But it can actually help re-engage people with human connection—in the best circumstances.
Matt: Yeah. I mean, listen, if I was doing this discussion with almost any other human on planet Earth, Ross, I would probably take that bait and we could progress it.
But I’m not going to pick that up with you, because no one knows this topic—of what humans can, should, and will potentially do in the future—better than you, than any other human. So I’m not going to take that.
But let me comment one little thing on the mental health side. The other thing that I think people often overlook is that, in addition to being a tool, generative AI is also a transformative force.
The best analogy I have comes from a friend of mine, Connor Brennan, who’s one of the top AI experts globally. He’s the Chief AI Architect at NYU here in New York City.
He says that AI is like electricity in this regard: you can electrify things, you can build an electrical grid, but it’s also a catalyst for major advances in the economy and helps power forward the industry at large.
I think generative AI is exactly like that. There are point solutions built off generative AI, but also—especially in scientific research and in the fields of neurotechnology, neuroscience, cognition, and psychology—the advances in the field have progressed more in the last three years post–generative AI, post–ChatGPT, than in the previous 30 years.
And what’s coming—and I’ve seen this in National Academy of Medicine presentations, NIH, UK ARIA, and other forums—what’s coming in the next couple of years will leapfrog even that.
It’s for a couple of reasons. I’m sure you’re familiar with this saying: back in the early 2000s, there was a saying in the data science community, “The best type of machine learning is no machine learning.”
That phrase referred to the fact that it was so expensive to build a machine learning model, and it worked so infrequently, that it was almost never recommended. It was a fool’s errand to build the thing, because it was so expensive and worked so rarely.
When I used to present at conferences on the models we would build, people always asked the same questions: What was the drift? How resilient was the model? How did we productionize it? How was it actually going to work?
And it was—frankly—kind of annoying, because I didn’t know if it was going to work myself. We were just kind of hoping that it would.
Now, over the last couple of years, no one asks those questions. Now people ask questions like: “Are robots going to take my job?” “How am I going to pay my mortgage?” “Are we going to be in the bread lines in three years?” “Are there going to be mass riots?”
That’s what people ask about now. The conversation has shifted over the last five years from “Will it work?” to “It works too well. What does it mean for me—for my human self?”
“How am I going to be relevant in the future?”
I think the reason why that is, is because it went from being kind of a tactical tool to being a transformative force.
In the scientific research community, what’s really accelerating is our ability to make sense of a number of data points that, up until very recently, people saw as unrelated—but that are actually integrated, part of the same pattern.
This is leading to major advances in fields that, up until recently, could not have been achieved.
One of those is in neuroelectronics. I’m very excited by some of the advances in neurotechnology, for example—and we have an equity interest in a firm in this space.
Implantable brain devices are one major place where the treatment of mental illness can advance. AI is both helping to decipher the language of neural communication from a neuroplasticity standpoint, and making it possible for researchers and clinicians to communicate with the implant in your brain when you're not in the clinic.
So, if you go about your regular life—you go to work, you play baseball, you do anything during your day—you can go about your life, and because of AI, it makes monitoring the implant in your brain no different than having a continuous glucose monitor or taking a pill.
The advances in AI are tremendous—not just for using ChatGPT to write a job description—but for allowing things like bioelectronic medicine to exist and be in the clinic in four or five years from now.
Whereas, 40 years ago, it would have been considered magic to do things like that.
Ross: So, we pull this back, and I’d like to come back to where we started. Before we started recording, we were chatting about the big picture of brain capital.
So I just want to think about this idea of brain capital. What are the dimensions to that? And what are the ways in which we can increase it? What are the potential positive impacts? What is the big picture around this idea of brain capital?
Matt: Yeah. I mean, the big picture is that every human on Earth deserves to live a life worth living. It’s really that simple. Every person on planet Earth deserves to have a life that they enjoy, that they find to be meaningful and happy, and that they can live their purpose—every person, regardless of who they’re born to, their religion, their race, their creed, their region.
And they should be free of strife—mental strife, physical strife, and the strife of war. For some reason, we can’t seem to get out of these cycles over the last 100,000 years.
The thesis of brain capital is that the major reason why that’s been the case is that a sixth of the world’s population currently has mental illness—diagnosed or undiagnosed. About a quarter of the world’s population is living under what the World Health Organization calls a “brain haze” or “brain fog.”
We have a kind of collective sense of cognitive impairment, where we know what we should do, but we don’t do it—either because we don’t think it’s right, or there are cultural norms that limit our ability to actually progress forward.
And then the balance of people are still living with a kind of caveman mindset. We came out of the caves 40,000–60,000 years ago, and now we have iPhones and generative AI, but our emotions are still shaped by this feeling of scarcity—this deficit mindset, where it feels like we’re never going to have the next meal, we’re never going to have enough resources.
It feels like there is never enough, all the time.
But actually, right around the corner is a mindset of abundance. And if you operate with an abundance mindset, and believe—as Einstein said—that everything is a miracle, the world starts responding appropriately.
But if you act like nothing is a miracle, and that it’s never going to be enough, that’s the world through your eyes.
So the brain capital thesis is: everyone is mentally well, everyone is doing what’s in the best collective interest of society, and everyone is able to see the world as a world of abundance—and therefore, a life worth living.
Ross: That is awesome. No, that’s really, really well put. So, how do we do it? What are the steps we need to take to move towards that?
Matt: Yeah. I mean, I think we’re already walking the path. I think there are communities—like the ones that we’ve been together on, Ross—and others that are coming together to try to identify the ways of working, and putting resources and energy and attention to some of these challenges.
Some of these things are kind of old ideas in new titles, if you will. And there are a number of trajectories and considerations that are progressing under new forms as well.
I think one of the biggest things is that we really need courage to try new ways of working, and also, to use Napoleon's expression, to remember that a leader's job is to be a dealer in hope.
We really need to give people the courage to see that the future is brighter than the past, and that nothing is impossible.
So our considerations in the brain capital standpoint are that we need to set these moonshot goals that are realistic—achievable if we put resources in the right place.
I’ve heard folks from the World Economic Forum, World Health Organization, and others say things like: by this time next decade—by the mid-2030s—we need to cure global mental illness completely. No mental illness for anyone.
By 2037–2038, we need to prevent brain health disorders like Alzheimer’s, Parkinson’s, dystonia, essential tremor, epilepsy, etc.
And people say things like, "That's not possible," but think about other major illnesses, like hepatitis C or breast cancer. When I was a kid, both of those were death sentences. Now, they're manageable chronic illnesses, or they're curable.
So we can do them. But we have to choose to do them, and start putting resources against solving these problems, instead of just saying, “It can’t be done.”
Ross: Yeah, absolutely. So, you’ve got a venture in this space. I’d love to round out by hearing about what you are doing—with you and your colleagues.
Matt: So, we’re not building anything—we’re helping others build. And that’s kind of a lesson learned from experience.
To use another quote that I love—it’s a Gandhi quote—which is, “I never lose. I only win or I learn.”
So we tried our hand at digital mental health for a time, and found that we were better advisors and consultants and mentors and coaches than we were direct builders ourselves.
But we have a firm. It’s the first AI-native venture studio for brain capital, and we work with visionary entrepreneurs, CEOs, startups—really those that are building brain capital firms.
So think: mental illness, mental health, brain health, executive function, mindset, corporate learning, corporate training—that type of thing. Where they have breakthrough ideas, they have funding, but they need consideration to kind of help scale to the ecosystem.
We wrap around them like a halo and help support their consideration in the broader marketplace.
We’re really focused on these three things: mental health, mindset, and mental skills.
There are 12 of us in the firm. We also do a fair amount of public speaking—workshops, customer conferences, hackathons. The conference we were just at last week in San Francisco was part of our work.
And then we advise some other groups, like not-for-profits and the government.
Ross: Fantastic. So, what do you hope to see happen in the next five to ten years in this space?
Matt: Yeah, I’m really optimistic, honestly. I know it’s a very tumultuous time externally, and a lot of people are suffering. I try to give back as much as possible.
We, as an organization, we’re a public benefit corporation, so we give 10% of all our revenue to charity. And I volunteer at least a day a month directly in the community. I do know that a lot of people are having a very difficult time at present.
I do feel very optimistic about our mid- and long-term future. I think we’re in a very difficult transition period right now because of AI, the global economic environment, and the rest. But I’m hopeful that come the early 2030s, human potential broadly will be optimized, and many fewer people on this planet will be suffering than are suffering at present.
And hopefully by this time next decade, we’ll be multi-planetary, and we’ll be starting to focus our resources on things that matter.
I remember there was a quote I read maybe six or seven years ago, something like: "The best minds of our generation are trying to get people to click on ads on Facebook." When you think about what people were doing 60 years ago, we were building rockets to go to the moon.
The same types of people that would get people to click on ads on Meta are now trying to get people to like things on LinkedIn. It’s just not a good use of resources.
I’ve seen similar commentary from the Israeli Defense Forces. They talk about all the useless lives wasted on wars and terrorism. You could think about not fighting these battles and start thinking about other ways of helping humanity.
There’s so much progress and potential and promise when we start solving problems and start looking outward, if you will.
Ross: Yeah. You’re existing in the world that is pushing things further down that course. So where can people find out more about your work?
Matt: Right now, LinkedIn is probably the best way.
We're in the midst of a merger of equals between my original firm, LLMental, and my business partner John Nelson's firm, John Nelson Advisors. By Labor Day (U.S.), we'll be back out in the world as iLIVD (i, L, I, V, D) with a new website and clout room and all the rest.
But it’s the same focus: AI-native venture studio for brain health—just twice the people, twice the energy, and all the consideration.
So we’re looking forward to continuing to serve the community and progressing forward.
Ross: No, it’s fantastic. Matt, you are a force for positive change, and it’s fantastic to see not just, obviously, the underlying attitude, but what you’re doing. So, fantastic. Thank you so much for your time and everything you’re doing. Thank you again.
Matt: Thank you again, Ross. I really appreciate you having me on, and it's always a pleasure speaking with you.
The post Matt Lewis on augmenting brain capital, AI for mental health, neurotechnology, and dealing in hope (AC Ep8) appeared first on Humans + AI.