
Humans + AI (formerly Amplifying Cognition)
Exploring and unlocking the potential of AI for individuals, organizations, and humanity
Latest episodes

Jun 25, 2025 • 34min
Matt Lewis on augmenting brain capital, AI for mental health, neurotechnology, and dealing in hope (AC Ep8)
“The big picture is that every human on Earth deserves to live a life worth living… free of mental strife, physical strife, and the strife of war.”
– Matt Lewis
About Matt Lewis
Matt is CEO, Founder and Chief Augmented Intelligence Officer of LLMental, a Public Benefit Limited Liability Company venture studio focused on augmenting brain capital. He was previously Chief AI Officer at Inizio Health, and contributes in many roles, including as a member of OpenAI's Executive Forum and Gartner's Peer Select AI Community, and as faculty at the World Economic Forum's New Champions initiative.
Website:
Matt Lewis
LinkedIn Profile:
Matt Lewis
What you will learn
Using AI to support brain health and mental well-being
Redefining mental health with lived experience leadership
The promise and danger of generative AI in loneliness
Bridging neuroscience and precision medicine
Citizen data science and the future of care
Unlocking human potential through brain capital
Shifting from scarcity mindset to abundance thinking
Episode Resources
Transcript
Ross Dawson: Matt, it’s awesome to have you on the show.
Matt Lewis: Thank you so much for having me. Ross, it’s a real pleasure and honor. And thank you to everyone that’s watching, listening, learning. I’m so happy to be here with all of you.
Ross: So you are focusing on using AI amongst other technologies to increase brain capital. So what does that mean?
Matt: Yeah. I mean, it’s a great question, and it’s, I think, the challenge of our time, perhaps our generation, if you will.
I’ve been in artificial intelligence for 18 years, which is like an eon in the current environment, if you will. I built my first machine learning model about 18 years ago for Parkinson’s disease, a degenerative condition where people lose the ability to control their bodies as they wish.
I was working at Boehringer Ingelheim at the time, and we had a drug, a dopamine agonist, to help people regain function, if you will. But some small number of people developed this weird side effect, this adverse event that didn’t appear in clinical trials, where they became addicted to all sorts of compulsive behaviors that made their actual lives miserable. Like they became shopping addicts, or they became compulsive gamblers. They developed proclivities to sexual behaviors that they didn’t have before they were on our drug, and no one could quite figure out why they had these weird things happening to them.
And even though they were seeing the top academic neurologists in this country, the United States, or other countries, no one could say why Ross would get this adverse event and Matt wouldn’t. It didn’t appear in the studies, and there was no way to figure it out.
The only thing that really sussed out what was an adverse event and what wasn’t was advanced statistical regression and, later, machine learning. But back in those days, almost 20 years ago, you needed massive compute—massive servers, like on trucks—to run these kinds of analyses and actually improve clinical outcomes.
Now, thankfully, the ability to provide practical innovation in the form of AI, to improve people’s actual lives through brain health, is much more accessible—democratized, almost—in a way that wasn’t available then.
And if it first appeared for motor symptoms, for neurodegenerative disease, some time ago, now we can use AI to help not just the neurodegenerative side of the spectrum but also neuropsychiatric illness, mental illness, to help identify people that are at risk for cognition challenges.
Here in Manhattan, it’s like 97 degrees today. People don’t think the way they normally do when it’s 75. They make decisions that they perhaps wish they hadn’t, and a lot of the globe is facing similar challenges.
So if we can kind of partner with AI to make better decisions, everyone’s better off.
That construct—where we think differently, we make better decisions, we are mentally well, and we use our brains the way that was intended—all those things together are brain capital. And by doing that broadly, consistently, we’re better off as a society.
Ross: Fantastic. So in that case, you’re looking at machine learning—essentially being able to pull out patterns. Patterns between environmental factors, drugs used, background, other genetic data, and so on.
So this means that you can—is this, then, alluding, I suppose, to precision medicine and being able to identify for individuals what the right pharmaceutical regimes are, and so on?
Matt: Yeah. I mean, I think the idea of precision medicine, personalized medicine, is very appealing. But it’s very early—maybe even embryonic—in the neuroscience space.
I worked for a long time for companies like Roche and Genentech, and others in that ecosystem, doing personalized medicine with biomarkers for oncology, for cancer care—where you knew a specific target, an enzyme or protein that was mutated or degraded, and could identify which one was at fault.
Then you tried to build a companion diagnostic to find the signal, if you will, and help people who were suffering.
It’s—at the risk of oversimplifying—more straightforward in that regard, because if the patient had the marker, you knew that the drug would work.
Unfortunately, I think there’s a common misconception—I know you know this exceptionally well, but there are people listening who may not—about the state of cognitive neuroscience: that is, what we know of the brain, how it works, and how it works in the actual world in which we live, on planet Earth and terra firma. It is probably about as far advanced as our understanding of the heart was when Jesus Christ walked the Earth, about 2,000 years ago.
That is, we have perhaps 100 years of real knowledge about how the brain works in the world, and we’re making decisions about how to engineer personalized medicine on top of a very, very young, nascent science of the brain—with almost no true, practical, contextual understanding of how it really works in the world.
So I think personalized medicine holds tremendous promise. The reality of it just doesn’t pan out so well yet.
There are a couple of recent examples of this from companies like Neumora, Alto Neuroscience, and others, where they tried to build these ex post facto precision medicine databases of people who had benefited from certain psychiatric medicines.
But they end up not being as beneficial as you’d like them to be, because we just don’t know really a lot about how the brain actually works in the real world.
There’s still the brain-and-mind debate. But even setting that aside, it’s hard to find many people building in this space who recognize the contextual variables beyond brain and mind—
things like the biopsychosocial continuum, the role of spirituality and nature, and all the rest.
All these things are kind of moving and changing and dynamic at a constant equilibrium.
And to try to find a point solution that says Matt or Ross will benefit at this one juncture, and that it will change things right now—it’s just exceptionally difficult. Important, but exceptionally difficult.
So I think the focus is more about how do we show up in the real world today, using AI to actually help our actual life be meaningful and beneficial, rather than trying to find this holy grail solution that’s going to be personalized to each person in 2026.
I’m not very optimistic about that, but maybe by 2036 we’ll get a little closer.
Ross: Yeah. So, I mean, I guess, as you say, a lot of what people talk about with precision medicine is specific biomarkers and so on, that you can use to understand when particular drugs would be relevant.
But back to the point where you’re starting with this idea of using machine learning to pick up patterns—does this mean you can perhaps be far more comprehensive in seeing the whole person in their context, environment, background, and behaviors, and so on, to be able to understand what interventions will make sense for that individual, and all of the whole array of patterns that the person manifests?
Matt: Yeah, I think it’s a great question. The data science, and the health science, of understanding what might be called the enactive psychiatry of the person—how they make meaning in the world—is just now starting to catch up with reality.
When I did my master’s thesis 21 years ago in health services research, there were people trying to figure out: if you were working in the world, how do we understand when you’re suffering with a particular illness, what it means to you?
It might mean to the policy wonks that your productivity loss is X, or your quality-adjusted life years is minus Y. Or to your employer, that you can’t function as much as you used to function. But to you—does it really matter to you that your symptom burden is A or Z? Or does it really matter to you that you can’t sleep at night?
If you can’t sleep at night, for most people, that’s really annoying. And if you can’t sleep at night six, seven, ten nights in a row, it’s catastrophic because you almost can’t function. Whereas on the quality score, it doesn’t even register—it’s like a rounding error.
So between the patient-reported outcomes—what matters to real people—and what matters to the decision-makers, there’s a lot of daylight, and there has been for a long time.
In the neuropsychiatric, mental health, brain health space, it’s starting to catch up, for I think a couple of reasons.
One, the lived experience movement. I chair the One Mind Community Advisory Network here in the States, which is a group of about 40 lived experience experts with deep subject matter expertise, all of whom suffer from neuropsychiatric illness, neurodivergence, and the rest. These are people that suffer daily but have turned their pain into purpose.
The industry at large has seen that in order to build solutions for people suffering from different conditions, you need to co-create with those people. I mean, this seems intuitive to me, but for many years—for almost all the years, 100 years—most solutions were designed by engineers, designed by scientists, designed by clinicians, without patients at the table.
When you build something for someone without the person there, you get really pretty apps and software and drugs that often don’t work. Now, having the people actually represented at the table, you get much better solutions that hopefully actually have both efficacy in the lab and effectiveness in the real world.
The other big thing I think that’s changing a lot is that people have more of a “citizen data scientist” kind of approach. Because we’re used to things like our Apple Watch, and our iPads, and our iPhones, and we’re just in the world with data being in front of us all the time, there’s more sensitivity, specificity, and demand for visibility around data in our life.
This didn’t exist 20 years ago.
So to be in an environment where your mental health, your brain health, is simply handed to you on delivery, if you will, without any feedback on how well it’s working—20 years ago, people were like, “Okay, yeah, that makes sense. I’m taking an Excedrin for my migraine. If it doesn’t work, I’m clear to take a different medicine.”
But now, if you get something and you don’t get feedback on how well it’s working, the person or organization supporting it isn’t doing their job.
There’s more of an expectation—an imprimatur, if you will—of applying that data analytics discipline, so that people understand whether they’re making progress, what good looks like, whether they’re benchmarking against some kind of expectation, and what the leaderboard looks like.
How is Ross doing, versus how Matt’s doing, versus what the gold standard looks like, and all the rest. This didn’t exist a generation ago, but now there’s more to it.
Ross: That’s really interesting. This rise of citizen science is not just giving us data, but it’s also the attitude of people—that this is a normal thing to do: to participate, to get data about themselves, to share that back, to have context.
That’s actually a really strong positive feedback loop to be able to develop better things.
So I think, as well as this idea of simply just getting the patients at the table—so we’ve talked quite a bit, I suppose, from this context of machine learning—of course, generative AI has come along.
So, first of all, just a big picture: what are the opportunities from generative AI for assisting mental well-being?
Matt: Yeah. I mean, first of all, I am definitely a technophile. But that notwithstanding, I will say that no technology is either all good or all bad. I think it’s in the eyes of the beholder—or the wielder, if you will.
I’ve seen some horrific use cases of generative AI that really put a fear into my heart. But I’ve also seen some amazing implementations that people have used that give me a tremendous amount of hope about the near and far future in brain health broadly, and in mental health specifically.
Just one practical example: in the United States and a lot of the English-speaking countries—the UK, New Zealand, and Australia—there is a loneliness epidemic.
When I say loneliness, I don’t mean people that are alone, that either choose to be alone or live lives that are alone. I actually mean people that have a lower quality of life and are lonely, and as a result, they die earlier and they have more comorbid illness. It’s a problem that needs to be solved.
In these cases, there are a number of either point solutions that are designed specifically using generative AI or just purpose-built generative AI applications that can act both as a companion and as a thought partner for people who are challenged in their contextual environment.
For people who don’t have other access or resources, in those times of need, AI can catalyze them to get back into an environment that they recall being helpful at an earlier point.
For example, they find an interest in something that they found utility in earlier—like playing chess, or playing a card game, a strategy game, or getting back to dancing or some other “silly” thing that to them isn’t silly, but might be silly to a listener.
And because they rekindle this interest, they go and find an in-person way of reigniting with a community in the environment. The generative AI platform or application catalyzes that connection.
There are a number of examples like that, and the AI is nearly free—using it costs the person nothing—but it prevents them from slipping down the slippery slope toward an actual DSM-5 psychiatric illness, like depression or anxiety, and becoming much, much worse.
They’re kind of rescued by AI, if you will, and they become closer to healthy and well because they either find a temporary pro-social kind of companion or they actually socialize and interact with other humans.
I have seen some kind of scary use cases recently where people who are also isolated—I won’t use the word lonely—don’t have proper access to clinicians.
In many places around the world, there is a significant shortage of licensed professionals trained in mental health and mental illness. In many of these cases, when people don’t have a diagnosed illness or they have a latent personality disorder, they have other challenges coming to the fore and they rely on generative AI for directional implementation.
They do something as opposed to think something, and it can rapidly spiral out of control—especially when people are using GPTs or purpose-built models that reinforce vicious cycles or feedback loops that are negatively reinforcing.
I’ve seen some examples, due to some of the work I do in the lived experience community, where people have these built-in cognitive biases around certain tendencies, and they’ll build a GPT that reinforces those tendencies.
What starts out as a harmless comment from someone in their network—like a boyfriend, employee, or neighbor—suddenly becomes the millionth example of something that’s terrible. The GPT reinforces that belief.
All of a sudden, this person is isolated from the world because they’ve cut off relationships with everyone in their entire circle—not because they really believe those things, but because their GPT has counseled them that they should do these things.
They don’t have anyone else to talk to, and they believe they should do them, and they actually carry those things out. I’ve seen a couple of examples like this that are truly terrifying.
We do some work in the not-for-profit space trying to provide safe harbors and appropriate places for care—where people have considerations of self-harm, where a platform might indicate that someone is at risk of suicide or other considerations.
We try to provide a place where people can go to say, “Is this really what you’re thinking?” If so, there’s a number to call—988—or someone you can reach out to as a clinician.
But I think, like all technologies: you can use a car to drive to the grocery store. You could also use the same car to run someone over.
We have to really think about: what in the technology is innate to the user, and what it was really meant to do?
Ross: Yeah. Well, it’s a fraught topic now, as in there are, as you say, some really negative cases. The commercial models, with their tendency toward sycophancy and encouraging people to continue using them, start to get into all these negative spirals.
We do have, of course, some clinically designed tools—generative AI tools to assist—but not everybody uses those. One of the other factors, of course, is that not everybody has the finances, or the funding isn’t available to provide clinicians for everybody. So it’s a bit fraught.
I go back to 15 years ago, I guess—Paro, the robot seal in Japan—which was a very cute, cuddly robot given to people with neurodegenerative diseases. They came out of their shell, often. They started to interact more with other people just through this little robot.
But as you say, there is the potential for these not to be substitutes. Many people rail against the idea—“Oh, we can’t substitute real human connection with AI”—and real connection is obviously what we want.
But it can actually help re-engage people with human connection—in the best circumstances.
Matt: Yeah. I mean, listen, if I was doing this discussion with almost any other human on planet Earth, Ross, I would probably take that bait and we could progress it.
But I’m not going to pick that up with you, because no one knows this topic—of what humans can, should, and will potentially do in the future—better than you, than any other human. So I’m not going to take that.
But let me comment one little thing on the mental health side. The other thing that I think people often overlook is that, in addition to being a tool, generative AI is also a transformative force.
The best analogy I have comes from a friend of mine, Conor Grennan, who’s one of the top AI experts globally. He’s the Chief AI Architect at NYU here in New York City.
He says that AI is like electricity in this regard: you can electrify things, you can build an electrical grid, but it’s also a catalyst for major advances in the economy and helps power forward the industry at large.
I think generative AI is exactly like that. There are point solutions built off generative AI, but also—especially in scientific research and in the fields of neurotechnology, neuroscience, cognition, and psychology—the advances in the field have progressed more in the last three years post–generative AI, post–ChatGPT, than in the previous 30 years.
And what’s coming—and I’ve seen this in National Academy of Medicine presentations, NIH, UK ARIA, and other forums—what’s coming in the next couple of years will leapfrog even that.
It’s for a couple of reasons. I’m sure you’re familiar with this saying: back in the early 2000s, there was a saying in the data science community, “The best type of machine learning is no machine learning.”
That phrase referred to the fact that it was so expensive to build a machine learning model, and it worked so infrequently, that it was almost never recommended. It was a fool’s errand to build the thing, because it was so expensive and worked so rarely.
When I used to present at conferences on the models we would build, people always asked the same questions: What was the drift? How resilient was the model? How did we productionize it? How was it actually going to work?
And it was—frankly—kind of annoying, because I didn’t know if it was going to work myself. We were just kind of hoping that it would.
Now, over the last couple of years, no one asks those questions. Now people ask questions like: “Are robots going to take my job?” “How am I going to pay my mortgage?” “Are we going to be in the bread lines in three years?” “Are there going to be mass riots?”
That’s what people ask about now. The conversation has shifted over the last five years from “Will it work?” to “It works too well. What does it mean for me—for my human self?”
“How am I going to be relevant in the future?”
I think the reason why that is, is because it went from being kind of a tactical tool to being a transformative force.
In the scientific research community, what’s really accelerating is our ability to make sense of a number of data points that, up until very recently, people saw as unrelated—but that are actually integrated, part of the same pattern.
This is leading to major advances in fields that, up until recently, could not have been achieved.
One of those is in neuroelectronics. I’m very excited by some of the advances in neurotechnology, for example—and we have an equity interest in a firm in this space.
Implantable brain devices are one major place where the treatment of mental illness can advance. AI is both helping to decipher the language of neural communication, from a neuroplasticity standpoint, and making it possible for researchers and clinicians to communicate with the implant in your brain when you’re not in the clinic.
So you can go about your regular life—go to work, play baseball, do anything during your day—and because of AI, monitoring the implant in your brain is no different from having a continuous glucose monitor or taking a pill.
The advances in AI are tremendous—not just for using ChatGPT to write a job description, but for allowing things like bioelectronic medicine to exist and be in the clinic four or five years from now.
Whereas, 40 years ago, it would have been considered magic to do things like that.
Ross: So, we pull this back, and I’d like to come back to where we started. Before we started recording, we were chatting about the big picture of brain capital.
So I just want to think about this idea of brain capital. What are the dimensions to that? And what are the ways in which we can increase it? What are the potential positive impacts? What is the big picture around this idea of brain capital?
Matt: Yeah. I mean, the big picture is that every human on Earth deserves to live a life worth living. It’s really that simple. Every person on planet Earth deserves to have a life that they enjoy, that they find to be meaningful and happy, and that they can live their purpose—every person, regardless of who they’re born to, their religion, their race, their creed, their region.
And they should be free of strife—mental strife, physical strife, and the strife of war. For some reason, we can’t seem to get out of these cycles over the last 100,000 years.
The thesis of brain capital is that the major reason why that’s been the case is that a sixth of the world’s population currently has mental illness—diagnosed or undiagnosed. About a quarter of the world’s population is living under what the World Health Organization calls a “brain haze” or “brain fog.”
We have a kind of collective sense of cognitive impairment, where we know what we should do, but we don’t do it—either because we don’t think it’s right, or there are cultural norms that limit our ability to actually progress forward.
And then the balance of people are still living with a kind of caveman mindset. We came out of the caves 40,000–60,000 years ago, and now we have iPhones and generative AI, but our emotions are still shaped by this feeling of scarcity—this deficit mindset, where it feels like we’re never going to have the next meal, we’re never going to have enough resources.
It feels like there’s never enough, all the time.
But actually, right around the corner is a mindset of abundance. And if you operate with an abundance mindset, and believe—as Einstein said—that everything is a miracle, the world starts responding appropriately.
But if you act like nothing is a miracle, and that it’s never going to be enough, that’s the world through your eyes.
So the brain capital thesis is: everyone is mentally well, everyone is doing what’s in the best collective interest of society, and everyone is able to see the world as a world of abundance—and therefore, a life worth living.
Ross: That is awesome. No, that’s really, really well put. So, how do we do it? What are the steps we need to take to move towards that?
Matt: Yeah. I mean, I think we’re already walking the path. I think there are communities—like the ones that we’ve been together on, Ross—and others that are coming together to try to identify the ways of working, and putting resources and energy and attention to some of these challenges.
Some of these things are kind of old ideas in new titles, if you will. And there are a number of trajectories and considerations that are progressing under new forms as well.
I think one of the biggest things is that we really need courage to try new ways of working, and also—as Napoleon said—leaders whose job is to be dealers in hope.
We really need to give people the courage to see that the future is brighter than the past, and that nothing is impossible.
So our view, from the brain capital standpoint, is that we need to set moonshot goals that are realistic—achievable if we put resources in the right place.
I’ve heard folks from the World Economic Forum, World Health Organization, and others say things like: by this time next decade—by the mid-2030s—we need to cure global mental illness completely. No mental illness for anyone.
By 2037–2038, we need to prevent brain health disorders like Alzheimer’s, Parkinson’s, dystonia, essential tremor, epilepsy, etc.
And people say things like, “That’s not possible.” But think about other major chronic illnesses, like hepatitis C or breast cancer: when I was a kid, either of those was a death sentence. Now they’re chronic illnesses, or they don’t exist at all.
So we can do them. But we have to choose to do them, and start putting resources against solving these problems, instead of just saying, “It can’t be done.”
Ross: Yeah, absolutely. So, you’ve got a venture in this space. I’d love to round out by hearing about what you are doing—with you and your colleagues.
Matt: So, we’re not building anything—we’re helping others build. And that’s kind of a lesson learned from experience.
To use another quote that I love—it’s a Gandhi quote—which is, “I never lose. I only win or I learn.”
So we tried our hand at digital mental health for a time, and found that we were better advisors and consultants and mentors and coaches than we were direct builders ourselves.
But we have a firm. It’s the first AI-native venture studio for brain capital, and we work with visionary entrepreneurs, CEOs, startups—really those that are building brain capital firms.
So think: mental illness, mental health, brain health, executive function, mindset, corporate learning, corporate training—that type of thing. They have breakthrough ideas, they have funding, but they need support to scale into the ecosystem.
We wrap around them like a halo and help support them in the broader marketplace.
We’re really focused on these three things: mental health, mindset, and mental skills.
There are 12 of us in the firm. We also do a fair amount of public speaking—workshops, customer conferences, hackathons. The conference we were just at last week in San Francisco was part of our work.
And then we advise some other groups, like not-for-profits and the government.
Ross: Fantastic. So, what do you hope to see happen in the next five to ten years in this space?
Matt: Yeah, I’m really optimistic, honestly. I know it’s a very tumultuous time externally, and a lot of people are suffering. I try to give back as much as possible.
As an organization, we’re a public benefit corporation, so we give 10% of all our revenue to charity. And I volunteer at least a day a month directly in the community. I do know that a lot of people are having a very difficult time at present.
I do feel very optimistic about our mid- and long-term future. I think we’re in a very difficult transition period right now because of AI, the global economic environment, and the rest. But I’m hopeful that come the early 2030s, human potential broadly will be optimized, and many fewer people on this planet will be suffering than are suffering at present.
And hopefully by this time next decade, we’ll be multi-planetary, and we’ll be starting to focus our resources on things that matter.
I remember a quote I read maybe six or seven years ago—something like: “The best minds of our generation are trying to get people to click on ads on Facebook.” And when you think about what people were doing 60 years ago—we were building rockets to the moon.
The same types of people who would get people to click on ads on Meta are now trying to get people to like things on LinkedIn. It’s just not a good use of resources.
I’ve seen similar commentary from the Israel Defense Forces—about all the lives uselessly wasted on wars and terrorism. Imagine not fighting those battles and instead thinking about other ways of helping humanity.
There’s so much progress and potential and promise when we start solving problems and start looking outward, if you will.
Ross: Yeah. You’re existing in the world that is pushing things further down that course. So where can people find out more about your work?
Matt: Right now, LinkedIn is probably the best way.
We’re in the midst of a merger of equals between my original firm, LLMental, and my business partner John Nelson’s firm, John Nelson Advisors. By Labor Day (U.S.), we’ll be back out in the world as iLIVD—i, L, I, V, D—with a new website and all the rest.
But it’s the same focus: AI-native venture studio for brain health—just twice the people, twice the energy, and all the consideration.
So we’re looking forward to continuing to serve the community and progressing forward.
Ross: No, it’s fantastic. Matt, you are a force for positive change, and it’s fantastic to see not just, obviously, the underlying attitude, but what you’re doing. So, fantastic. Thank you so much for your time and everything you’re doing. Thank you again.
Matt: Thank you again, Ross. I really appreciate you having me on—always a pleasure speaking with you.

Jun 18, 2025 • 34min
Amir Barsoum on AI transforming services, pricing innovation, improving healthcare workflows, and accelerating prosperity (AC Ep7)
“Successful AI ventures are those that truly understand the technology but also place real human impact at the center — it’s about creating solutions that improve lives and drive meaningful change.”
– Amir Barsoum
About Amir Barsoum
Amir Barsoum is Founder & CEO of InVitro Capital, a venture studio that builds and funds companies at the intersection of AI and human-intensive industries, with four companies and over 150 professionals. He was previously founder of leading digital health platform Vezeeta and held senior roles at McKinsey and AstraZeneca.
Website:
InVitro Capital
LinkedIn Profile:
Amir Barsoum
X profile:
Amir Barsoum
What you will learn
Understanding the future of AI investment
Exploring the human impact of technology
Insights from a leading AI venture capitalist
Balancing risk and opportunity in startups
The evolving relationship between humans and machines
Strategies for successful AI entrepreneurship
Unlocking innovation through visionary thinking
Episode Resources
Transcript
Ross Dawson: Amir, it’s wonderful to have you on the show.
Amir Barsoum: Same here, Ross. Thank you for the invite.
Ross: So you are an investor in fast-moving and growing companies. And AI has come along and changed the landscape. So, from a very big picture, what do you see? And how is this changing the opportunity landscape?
Amir: So, actually, we’re InVitro Capital, and we started because we saw the opportunity of AI.
A big part of the reason we started is that we think the service industries—think about healthcare, home repair, even professional service providers today—are going to be hugely disrupted by AI. Whether that’s automation, replacement as one bucket, augmentation as another, or at least facilitation.
And we’ve seen a huge opportunity that we can build. We can build AI technology that could do the service. Instead of being a software-as-a-service provider, we basically build the service provider itself.
So that’s what excites us about what we’re trying to do and what we’re building.
Ross: So what’s the origin of the word InVitro Capital? Does this mean test tubes?
Amir: So, it originates from there, yes. The idea is that we’re building companies under controlled conditions—kind of like in vitro fertilization, the IVF.
We keep on building more companies under these controlled conditions. That’s the idea, and because we come from a healthcare background, it resonated.
Ross: All right, that makes sense. So, there’s a lot of talk going around—SaaS is dead. So this kind of idea, you talk about services and the way services are changing.
And so that’s—yeah, absolutely—service delivery, whether that’s service by humans, whether it’s service by computers, whatever the nature of that, is changing. So does this mean that we are fundamentally restructuring the nature of what a service is and how it is delivered?
Amir: I think, yes. I think between the service industry and the software industry, both of them are seeing a categorical change in how they’re going to be provided to the users. And, I mean, the change is massive. I’m not sure about the word “dead,” but we’re definitely seeing a huge, huge change.
Think about it from a service perspective and from a software perspective. In software, I used to sell software to a company, and the company needed people smart enough, educated enough, trained enough to use the software and get value out of it. These used to be called systems of record with some tasks, but really it’s a system of record holding a lot of records, plus some employee who sits there and does the job.
In services, it’s the reverse: you think this is going to be very difficult, so you hire somebody as an outsource to do the service for you. Think about hiring someone to help with marketing content, or even legal work—to take it to the extreme.
And I think both are seeing categorical change. The software and the employee together could become one, or at least 80% of the job could now be done by AI technologies. And the service—the same thing. So we’re definitely seeing a massive change in these respects. That goes for legal, for content marketing—all of them.
Ross: I’d like to dig into that a little bit more. But actually, just one question is around pricing.
Are you looking at or exploring ways in which fee structures or pricing of services change? I mean, that’s classically where services involved humans—there was some kind of correlation to the cost of the human plus the margin.
Now there is AI, which is taking on an increasing proportion of how the service is delivered. And there are different perceptions—clients and customers think, “Oh, you must be able to do it really cheaply now, so give it to me cheaper.” Or maybe there’s more value created.
So are you thinking about how fees and pricing models change for service?
Amir: I have a strong concept when it comes to pricing and innovation.
Think about ride-hailing when it was introduced to the market. It came in with a price advantage compared to the yellow cab, right? Yes, it also came with other benefits, like security and safety, but the reality is that price is what matters. To hit the mass market, you need to play on pricing. And I think that’s the beauty of innovation.
And AI as a technology, with its very, very wide use cases, is going to make every single thing around us significantly cheaper. Take the same ride-hailing example: if you introduce self-driving ride-hailing, you are literally taking almost 70% of today’s cost off the table.
So if you don’t introduce a significantly cheaper price, I don’t think it’s going to find the mass market. That’s the absolute-value side of pricing—how to think about price levels.
Then I would split the pricing model into two categories. We tried going out and basically saying, “You know what, if you hire a person, it will cost you X. Hire an AI person, and it will cost you Y.” We found this doesn’t work very well.
What we’re seeing is that the pay-as-you-go model is the easier, more comprehensible way. So if you think about SaaS pricing and service pricing as a continuum, I think we’re somewhere in the middle.
The best is a bit closer to SaaS pricing, but more of a “use more, pay more” game rather than feature-based pricing. So consumption-based pricing, but less tied to the FTE.
Because, for example, when you say, “We have an AI recruiter, and it’s $800 a month instead of the $7,000 a month you’d pay a full-time human recruiter”—then what is the capacity of this AI recruiter? Is it equal to the human recruiter? Or is it unlimited?
We find that framing doesn’t work very well. What really works is usage-based, not feature-based.
Ross: Right. So moving away from necessarily time-based subscription to some kind of consumption—
Amir: Consumption-based, yeah. You could add a time element to it, but really, it’s consumption-based.
Ross: Yeah, and there are also new models like outcome-based pricing—Sierra, for example, where if you get an outcome, a customer resolution, you pay. If you don’t, then you don’t.
Amir: Actually, we have a company related to that which we’re about to put in the market—an AI GTM, an AI go-to-market solution.
We’re going to go with the model of, “You know what? Pay when you get.” Which I think is a very, very interesting model. It’s a super good and easy way to acquire customers.
But you also need them to be a little engaged, giving some input, so you can do a great job for them. And if they haven’t paid, the engagement component suffers—I think the funnel drops a little bit there.
We haven’t fixed that yet, but I think the answer somehow mixes a very small consumption-based component with more of the pay-per-outcome. I think that would be the fascinating solution.
Ross: Yeah, yeah. Well, it’s certainly evolving fast, and we’ll experiment, see what works, and work on that.
Amir: So I’ll tell you about it. I’ll tell you what’s going to happen.
Ross: So you have your healthcare background, you’re InVitro Capital, you are investing in the healthcare sector.
So I’d like to sort of just pull back to the big picture. So there’s a human-plus-AI perspective. And thinking more—there are humans involved, there’s AI, which is now complementing us.
Of course, there have been many things in healthcare where AI can do the work better and more effectively than people—often things that are not inspiring for people to do. And there are also a lot of ways AI can complement how we deliver services, the quality of those services, and so on.
So, as best you can, take the big picture: where in the flow of healthcare do you see opportunities for humans and AI to do things better, faster, and more effectively than they’ve been done before?
Amir: So, healthcare—the technical, clinical component of it—is a very sensitive topic. When you start getting into the clinical decisions, it’s very sensitive.
But in reality, healthcare is written in books, right? Especially non-interventional healthcare. Think about primary care—most of the non-interventional side is written in books. And the LLMs know them. The same goes for many other data models, and the big healthcare systems have tons of this data.
So you could actually go straight to some of the clinical solutions today. You know, if you as a consumer take a picture of something on your skin and upload it, it can give you a very, very good answer.
But is this something that we think is ready to be commercialized and go to market? The answer is: no, not today.
But we’re seeing, on the other side, every single thing until the clinical decisions is seeing massive, massive augmentation.
We think about it from a patient journey perspective. And in the patient journey, there are anchor points. You can see it on the provider side, but look at the accessibility component—where can patients access healthcare?
I mean having the conversation, the scheduling, the difficulty of scheduling, and getting third parties involved. And this is not purely an administrative task—there are medical people who used to be involved in this.
So, for example, the patient can’t see the diagnostic center unless they get an approval. But when you try to get an approval from the insurance firm, the insurance firm declines. So you need one more comment here, one more note there, to get the insurance firm to approve.
Can AI do these kinds of things—which is not the billing part; it’s still accessibility? We’re seeing AI technology playing a significant role in this.
Take it to the next step: billing, for example—really getting the provider paid for the visit, and working out what is the copay and what is not. A lot of people are involved in this, and we’re seeing massive, massive implementation and workflow automation in that space as well.
Ross: So just coming back to that. So this idea of workflow, I think, is really critical. Where you mentioned this idea of approvals.
And so, yes, part of it is just flow—okay, this thing is done, it then goes on to the next person, machine, whatever system or organization. But this comes back to these decisions where essentially AI can provide recommendations, AI can basically be automated with some kind of oversight. There may be humans involved with providing input.
So in healthcare—particularly pointed—I mean, I suppose even in that patient experience process. So how are you thinking about the ways in which AI is playing a role at decision points, in terms of how we shift from what might have been humans making decisions to now what are either AI-delegated or AI-with-humans-in-the-loop or any other configuration?
So how are we configuring and setting up the governance and the structures so these can be both efficient and effective?
Amir: In very simple terms, there are workflows and there are AI workflows, which are very different—very different in how the technology is designed and built, and very, very different in their outcomes.
I think every single thing we tried to do before in healthcare using workflows was, at best, not working. It could even look nice, but it just didn’t work. That’s the fact.
Because you start building algorithms and start putting rules in your code—if this happens, do that; if this happens, do that—you never cover enough rules that would make it really solid. And if you do, then the system collapses.
I think now we’re at the stage where there are data models that you keep feeding with signals: whether this worked or not, the satisfaction level of the patient, whether this ended up in crazy billing and payment, whether the provider ended up losing money or not, how much time was lost, whether we utilized the provider’s time—which is still the most expensive component today.
We talk AI, but still, we need healthcare providers.
So there, you build these data models that let the AI make decisions: shall I book Amir tomorrow at 2 p.m., or would I rather book Amir the day after tomorrow?
There are many, many data points that need to be considered in this intervention—Amir’s timeline, the doctor’s timeline, availability. These are the easy parts.
But the not-easy part is what the data models tell us—what makes the AI think like a human, on its feet, and say, “You know what, I would book Ross tomorrow, but Amir the day after tomorrow,” because of tons and tons of factors: utilization, expectations, how long the visit is going to take—leveraging the history of data about what worked.
And the more you move into the billing component—and by the way, I know most people in healthcare think more about the clinical decisions—but in reality, healthcare is decided by payment and billing. These are the two biggest points, right?
Ross: So one of the interesting things here—I guess, pointing to the fact—we’ve got far more data than ever before, and hopefully we’re able to do things like measure emotional responses and so on, which is important if we’re going to build experience.
I mean, just massive things that can feed into models. But one of the points is that healthcare is multiparty. There’s a whole wealth of different players involved.
And there are some analogies to supply chain, except supply chains are far easier than healthcare. You have multiple players, and you have data from many different participants. And there is value to optimizing across the entire system, but you’ve got segregated data and then segregated optimization algorithms.
And in fact, if you optimize within different segments of the entire piece, then the entire thing as a whole may, in fact, end up being worse off than it was before.
So do you have a vision for how we can get to, I suppose, healthcare-wide optimization based on data and effective AI?
Amir: That’s a very, very, very good question, honestly—and quite deep.
So, in healthcare there are the payers, the insurers—the guys that pay the money. There are the healthcare providers—both as entities, the organizations, and as individuals: the doctors, the nurses, the pharmacists, right?
And then there’s the patient. And there’s the employer.
So there are all of these components together.
And we have seen attempts at vertical integration in healthcare in the past—a payer buying hospitals, buying clinics—thinking it will be cheaper for them, and it is. But it has been slow, because it’s very difficult to run both a complete insurance firm and a healthcare provider: hiring doctors, managing workflows and payroll and quality, and making sure patients like your hospital or clinic enough to come back rather than walk away.
And then what we are seeing is—there’s a very well-known concept in AI, which is the flattening of the org structure.
I think we’re going to see this in healthcare.
It becomes easier to do this vertical integration—the clinics, the scan centers, the pharmacies, the hospitals. It’s becoming way easier with time, by automating and using AI to augment what we do today—shrinking it, running it together.
I think we’re going to see this more and more in the future.
Ross: The existing players start to do roll-ups or consolidate. But that becomes quite capital intensive, building this vertical integration. So either you build it out, or you buy it.
Amir: Or you build it out without being super capital intensive, because you’re using tech—again, you don’t need to be as capital intensive as you used to be.
For example, the human working capital involved is going to be significantly less than what you used to see. I’m talking less about hospitals at this stage, but the outpatient setting will definitely see this.
I’ll give you an example. In the pharmacy business, we have automated—not fully automated, but augmented. In the pharmacy business, think about it. It’s a small factory. You get the prescription, somebody needs to type the prescription—we call them typists. Then somebody needs to fill it, and then a pharmacist needs to check it.
So we’ve automated many, many of those steps, even using some machines, up to the filling component. Then the pharmacist shows up and does the checking.
So the working capital is shrinking, the process is becoming way leaner, and hence way more efficient.
Ross: So let’s pull back to the big picture of how does AI impact work, society, investments—everything. So what are any macro thoughts you have around how some of the big shifts—and how we can basically make these as positive as possible?
Amir: So I will give you my answer from what I’m seeing in the AI and the service intersection, because we are doing a lot of work in that space.
I think many, many of the jobs are going to vanish and cease to exist. But also, very interestingly, we’re seeing a massive uptake in edtech, where people are jumping in to elevate their skill sets.
And I think the time is there. It’s not a crazy, gloomy picture—there is time for people to actually reskill and fill the space.
The level of shortage we’re seeing in healthcare is unheard of, and we are aging as a population. The reality of the matter is, we need those people. I need fewer people working in reception and the billing department, and more people who can provide some level of healthcare.
And we’re seeing this happening. Think about home repair: I need fewer people doing administrative work, and I need more electricians and more plumbers.
And I think we’re seeing more people jumping into edtech, preparing for the exams and the tests to elevate themselves into these roles.
And I think AI is definitely accelerating the elimination of jobs, but also accelerating access to education so that you can capture the new jobs. And we’re definitely seeing these two pieces happening at a very, very fast pace.
So that’s one thing we’re seeing.
From an investment perspective, we look at investment in three categories:
Category A: Investing in foundational models like the LLMs. With foundation models—and I would say the whole foundational side of AI, the foundation models and then the infrastructure game—it’s a very, very interesting space. It’s a power law game, and that applies very strongly.
And I think the choice there is, I would say, the biggest factor—and obviously access at the right time. So this is category A.
Category B: The application layer. In the application layer, I personally believe—and I think this belief is increasingly shared—we’re seeing less of the power law at work.
We’re not expecting companies to exit at $10 billion in that space. I would say there’s a democratization of the application layer. And the key there is how cost-efficiently you can build, so that the return on capital is as good as people expect.
And that’s what we operate, actually, as a venture studio and a venture builder.
Category C: What’s going to happen with the businesses—the mom-and-pop shops and street businesses in the service industry.
And for this third category and the second category, we’re going to see a lot of merging—roll-ups between the second and the third.
Either the big players of the third buy some of the second, or the big players of the second buy some of the third. We’re seeing this already—even Silicon Valley is starting to talk about roll-ups in the VC space and the like.
So that’s how we think about it.
Ross: So actually, I want to dig into the venture studio piece. You’ve got a venture studio, and you’re running a bunch of things in parallel as efficiently as possible. So how? What are your practices? How do you amplify your ability to build multiple ventures simultaneously, given all these extraordinary technologies? What insights are really driving your ability to scale faster than ever before?
Amir: So, usually, when we try to scale, we think about whether there is recyclability of customers. That’s the first thing we think about.
If you look at our first deck, there was recyclability of customers and recyclability of technology. Honestly, talking about recyclability of technology now is a joke—it takes, what, a month to build? So we took that out.
Really, it’s distribution. And this has become—again, think about the application layer—the most difficult thing to capture, because everybody will come and tell you, “I’m a sales AI, I’m a sales AI.” Okay, I’ve got 20 emails about sales AI, 20 emails about AI recruiters. Distribution is a very big component.
So the recyclability of customers is a very big part. The second part is availability of data, because you need to build your own data models and train your own solutions to create a very, very unique quality of product.
Otherwise the product won’t be good enough for expectations. Today, when you say AI, consumer and business expectations are super high—they think it’s going to produce the same value as ChatGPT does for a consumer, which in most cases doesn’t happen unless you have a very unique data model powering a very unique solution.
Again, think about the diagnostics we’re doing in the home repair space. We’ve collected millions and millions of pictures and images, and we keep training our model ourselves.
So we do the service, make sure we can gather feedback, and then feed it back into the system, so we can build data models that make sense. Otherwise, the solution is not as good.
If you think about the solution we launched in the market three months ago, I would say it was bad at best. Now it’s significantly better. And I still think we have a way to go, adding more and more data to what we’re building.
So, recyclability of customers is a big thing, and availability of data is the other—those are the two big components that, when we find them, tell us, “That is something to do.”
I’m also not going to recite all the clichés—you know, find the pain in the market. I think that’s standard.
Ross: Yeah, yeah, that’s not new.
Amir: Yeah, yeah.
Ross: Fabulous. That’s really interesting. So to round out: what’s most exciting to you about the potential of humans plus AI, and where we can go from here?
Amir: I'll say something, and I'm not sure how contrarian it is.
I think we're going to see the quality of the products and services around us reach a totally different level. Our generation already lives in significantly greater prosperity than previous generations: a very rich man 100 years ago lived a far worse life than a poor man does today, right?
If you compare the two, you see the difference in the level of comfort, in day-to-day work, and so on. But it took 100 years to see that major difference.
I think we're now going to see that kind of change in much shorter periods—I would say 10 years. That's the positive part. But it also comes with a scary feeling: okay, what's going to happen tomorrow? Am I fast enough as an investor, as a human being? What are our kids going to do?
So these questions pop up and make us think. But I'm quite excited, generally, about how quality of life is going to move on a significantly upward trajectory. Hopefully we as humans will mitigate the risks we can see coming—security risks, cybersecurity risks, and tons of others.
Ross: So where can people go to find out more about your work in ventures?
Amir: InVitroCapital.com. I'm also on LinkedIn—just search the name "Amir Barsoum" and you'll find me, and our team is there too. Those are the best ways to reach out.
Ross: Fantastic. Thank you for your time and your insights, Amir.
Amir: Ross, that was great. Thank you very much for the deep questions.

Jun 4, 2025 • 34min
Minyang Jiang on AI augmentation, transcending constraints, fostering creativity, and the levers of AI strategy (AC Ep6)
“What are the goals I really want to attain professionally and personally? I’m going to really keep my eye on that. And how do I make sure that I use AI in a way that’s going to help me get there—and also not use it in a way that doesn’t help me get there?”
– Minyang Jiang (MJ)
About Minyang Jiang (MJ)
Minyang Jiang (MJ) is Chief Strategy Officer at business lending firm Credibly, leading and implementing the company’s growth strategy. Previously she held a range of leadership positions at Ford Motor Company, most recently as founder and CEO of GoRide Health, a mobility startup within Ford.
Website:
Minyang “MJ” Jiang
LinkedIn Profile:
Minyang “MJ” Jiang
What you will learn
Using AI to overcome human constraints
Redefining productivity through augmentation
Nurturing curiosity in the modern workplace
Building trust in an AI-first strategy
The role of imagination in future planning
Why leaders must engage with AI hands-on
Separating the product from the person
Episode Resources
Transcript
Ross Dawson: MJ, it’s a delight to have you on the show.
Minyang “MJ” Jiang: I’m so excited to be here, Ross.
Ross: So I gather that you believe that we can be more than we are. So how do we do that?
MJ: Absolutely. I'm an eternal optimist, and I'm a big believer in technology's ability to help humans be more, if we're thoughtful with it.
Ross: So where do we start?
MJ: Well, we can start by thinking through some of the use cases where I think AI, and in particular generative AI, can help humans, right?
I come from an alternative business financing perspective, and my background is in business. There's been a lot of fear and trepidation around what AI is going to do in this space. But my personal understanding is, I don't know of a single business that is not constrained, right? Employees always have too much to do. There are things they don't like to do. There are capacity issues.
So for me, there are three very clear use cases where I think AI and generative AI can help humans augment what they do. Number one: if you have capacity constraints, that is a great place to deploy AI, because you're already not delivering a good experience. Any ability to free up constraints—whether it's volume or being able to reach more people, especially if you're already resource-constrained (and I'd argue every business is)—is a great use case, right?
The second is a use case where you're already really good at something and you're repeating the task over and over, so there's no originality. You're not really learning from it anymore, but you're expected to do it because it's part of your work and it delivers value—it's just not something you, as a human, are learning or gaining from.
So if you can use AI to free up that part, then I think it’s wonderful, right? So that you can actually then free up your bandwidth to do more interesting things and to actually problem-solve and deploy critical thinking.
And then I think the third case is just, there are types of work out there that are just incredibly monotonous and also require you to spend a lot of time thinking through things that are of little value, but again, need to be done, right? So that’s also a great place where you can displace some of the drudgery and the monotony associated with certain tasks.
So those are three things already that I’m using in my professional life, and I would encourage others to use in order to augment what they do.
Ross: So that’s fantastic. I think the focus on constraints is particularly important because people don’t actually recognize it, but we’ve got constraints on all sides, and there’s so much which we can free up.
MJ: Yes, I mean, I think everybody knows, right? You’re constrained in terms of energy, you’re constrained in terms of time and budget and bandwidth, and we’re constrained all the time.
So using AI in a way that helps you free up your own constraints so that it allows you to ask bigger and better questions—it doesn’t displace curiosity. And I think a curious mind is one of the best assets that humans have.
So being able to explore bigger things, and think about new problems and more complicated problems. And I see that at work all the time, where people are then creating new use cases, right? And it just sort of compounds.
I think there’s new kinds of growth and opportunities that come with that, as well as freeing up constraints.
Ross: I think that's critically important. Every motivational keynote says, "Curiosity, be curious," and so on. But in a way, curiosity has been trained out of us.
The way work works is: just do your job. It doesn't train us to be curious. So let's say we get to a workplace where we can say: all right, all the routine stuff, all the monotony, is done. Your job is to be curious.
How do we help people get to that thing of taking the blinkers off and opening up and exploring?
MJ: I mean, I think that would be an amazing future to live in, right? I mean, I think that if you can live in a world where you are asked to think—where you’re at the entry level, you’re asked to really use critical thinking and to be able to build things faster and come up with creative solutions using these technologies as assistance—wouldn’t that be a better future for us all?
And actually, I would argue that curiosity is going to be in high demand—way higher demand than it's been in the past—because there is this element of spontaneous thinking which AI is not capable of right now, and humans are.
And you see that even in personal interactions, right? A lot of people use these tools to validate and reinforce how they already think. But we all know the best friendships and the best conversations come from being called out, being challenged, and discovering new things about yourself.
And that same sentiment works professionally. I think curiosity is going to be in high demand, and it’s going to be a sort of place of entry in terms of critical thinking, because those are the people that can use these tools to their best advantage, to come up with new opportunities and also solve new problems.
Ross: I think those who are curious will, as you say, be highly valued and able to create a lot of value. But there are many other people with latent curiosity—they would be curious if they got there, but they've been trained through school, university, and their jobs to just get on with the work and study for the exam.
So how do we nurture curiosity in a workplace, or around us, or within?
MJ: I mean, I think this is where you do have this very powerful tool that is chat-based, for the most part, that you don’t require super technical skills to be able to access. At least today, the accessibility of AI is powerful, and it’s very democratizing.
You can be an artist now if you have these impulses but never got the training. Or you can be a better writer. You can come up with ideas. You can be a better entrepreneur. You can be a better speaker.
It doesn’t mean you don’t have to put in the work—because I still think you have to put in the work—but it allows people to evolve their identity and what they’re good at.
What it’s going to do, in my mind, rather than these big words like displacement or replacement, is it’s going to just increase and enhance competition.
There's a great Wharton professor, Stefano Puntoni, who talked about photography before the age of digital photography—when people had to really work on making sure the shutter speed was correct, you had the right aperture, and then you were in the darkroom, developing things.
But once you had digital photography, a lot of people could do those things. So we got more photographers, right? We actually got more people who were enamored with the art and could actually do it.
And so some of that, I think, is going to happen—there’s going to be a layering and proliferation of skills, and it’s going to create additional competition. But it’s also going to create new identities around: what does it mean to be creative? What does it mean to be an artist? What does it mean to be a good writer?
In my mind, those are going to be higher levels of performance. I think everyone having access to these tools now can start experimenting, and companies should be encouraging their employees to explore their new skills.
You may have someone who is a programmer who is actually really creative on the side and would have been a really good graphic artist if they had the training. So allowing that person to experiment and demonstrate their fluidity, and building in time to pursue these additional skill sets to bring them back to the company—I think a lot of people will surprise you.
Ross: I think that’s fantastic. And as you say, we’re all multidimensional. Whatever skills we develop, we always have many other facets to ourselves.
And I think in this world, which is far more complex and interrelated, expressing and developing these multiple skills gives us more—it allows us to be more curious, enabling us to find more things.
Many large firms are actively trying to find people who are poets or artists or things on the side. And as you say, perhaps we can get to workplaces where, using these tools, we can accelerate the expansion of the breadth of who we are to be able to bring that back and apply that to our work.
MJ: I mean, I’ve always been a very big fan of the human brain, right? I think the brain is just a wonderful thing. We don’t really understand it. It truly is infinite. I mean, it’s incredible what the brain is capable of.
We know we can unlock more of its potential. We know that we don’t even come close to fully utilizing it.
So now having these tools that sort of mimic reasoning, they mimic logic, and they can help you unlock other skills and also give you this potential by freeing up these constraints—I think we’re just at the beginning of that.
But a lot of the people I work with, who are working with AI, are very positive on what it’s done for their lives.
In particular, you see the elevated thinking, and you see people challenging themselves, and you see people collaborating and coming up with new ideas in volume—rewriting entire poorly written training manuals, because no one reads those, and they’re terrible. And frankly, they’re very difficult to write.
So being able to do that in a poetic and explicable way, without contradictions—I mean, even that in itself is a great use case, because it serves so many other new people you’re bringing into the company, if you’re using these manuals to train them.
Ross: So you've worked on generative AI projects in the workplace—put this into practice. So I'd love to hear, off the top of your mind: what are some of the lessons you learned as you did that?
MJ: Yeah, we’ve been deploying a lot of models and working with our employee base to put them into production. We also encourage innovation at a very distributed level.
The biggest thing I will tell you is—change management. For me, the important part is in the management, right? Change—everybody wants change. Everyone can see the future, and I have a lot to say about what that means. But people want change, and it’s the management of change that’s really difficult. That requires thought leadership.
So when companies are coming out with this AI-first strategy, or organizations are adopting AI and saying “we are AI-first,” for me the most important lesson is strategically clarifying for employees what that means.
That actually isn’t the first thing we did. We actually started doing and working and learning—and then had to backtrack and be like, “Oh, we should have a point of view on this,” right?
Because it’s not the first thing. The first thing is just like, “Let’s just work on this. This is fun. Let’s just do it.” But having a vision around what AI-first means, and acknowledging and having deep respect for the complexities around that vision—because you are touching people, right? You’re touching people’s sense of self-worth. You’re touching their identities. You’re touching how they do work today and how they’re going to do work three to five years from now.
So laying that out and recognizing that we don’t know everything right now—but we have to be able to imagine what different futures look like—that’s important. Because a lot of the things I see people talking about today, in my view, is a failure of the imagination. It’s pinning down one scenario and saying, “This is the future we’re going to march towards. We don’t love that future, but we think it’s inevitable.”
As leaders—it’s not inevitable. So doing the due diligence of saying, “Let me think through and spend some time really understanding how this affects my people, and how I can get them to a place where they are augmented and they feel confident in who they are with these new tools”—that are disruptive—that’s the hard work. But that is the work I expect thought leadership and leaders to be doing.
Ross: Yes, absolutely right. Any sense of inevitability is deeply dangerous at best.
And as you say, any way of thinking about the future, we must create scenarios—partly because there are massive uncertainties, and perhaps even more importantly, because we can create the future. There are no inevitabilities here.
So what does that look like? Imagination comes first if we are building the company of the future. So how do we do that? Do we sit down with whiteboards and Post-it notes? What is that process of imagining?
MJ: There’s so many ways to do it, right? I mean, again—I took a class with a Wharton professor, Scott Snyder. He talked about “future-back” scenario planning, which is basically:
First, I think you talk to many different people. You want to bring in as many diverse perspectives as possible. If you’re an engineer, you talk to artists. If you’re a product person, you talk to finance people. You really want to harness everyone’s different perspectives.
And I think, along with the technology, there’s one thing that people should be doing. They should first of all think about defining—for your own function or your own department—what does it mean to be literate, proficient, and a master at AI? What are the skill sets you’re going to potentially need?
Then it’s really up to every company. I myself created a strategic framework where I can say, “Okay, I think there’s a spectrum of use cases all the way from a lot of automation to AI being simply an assistant.” And I ask different people and functions in the company to start binning together what they’re doing and placing them along this spectrum.
Then I would say: you do this many times. You write stuff down. You say, “Okay, perhaps I’m wrong. Let’s come up with an alternate version of this.”
There are several levers that I think a lot of people could probably identify with respect to their industry. In my industry, one of the most important is going to be trust. Another one is going to be regulation. Another one is going to be customer expectation.
So when I lay out these levers, I start to move them to the right and left. Then I say, “Well, if trust goes down in AI and regulations go up, my world is going to look very different in terms of what things can be automated and where humans come in.”
If trust goes up and regulations go down, then we have some really interesting things that can happen.
Once you lay out multiple of these different kinds of scenarios, the thing you want to look for is: what would you do the same in each one of these scenarios? Would you invest in your employees today with respect to AI?
And the answer is always yes—across every single scenario. You will never have less ROI. You will always be investing in employees to get that ROI.
So now you look at the things and say, “What am I going to do in my AI-first strategy that’s going to position me well in any future—or in a majority of futures?”
Those are the things you should be doing first, right now.
Then you can pick a couple of scenarios and say, “Okay, now I need to understand: if this were to change, my world is going to be really different. If that were to change, my world is going to be really different.”
How do I then think through what are the next layer of things I need to do?
Just starting with that framework—to say, what are the big levers that are going to move my world? Let’s assume these things are true. Let’s assume those things are true. What do my worlds look like?
And then, is there any commonality that cuts across the bottom? The use cases I gave earlier—around training, freeing up capacity—that cuts across every single scenario. So it makes sense to invest in that today.
I’m a big believer in employee training and development, because I always think there’s return on that.
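To make the exercise concrete, here is a minimal sketch in Python of the lever-based scenario planning described above: enumerate the lever settings, list the moves each scenario favors, and keep the moves that survive every future. The lever names and actions are hypothetical illustrations, not Credibly's actual framework.

```python
# Hypothetical levers and actions, for illustration only.
from itertools import product

levers = {"trust_in_ai": ["up", "down"], "regulation": ["up", "down"]}

def favored_actions(scenario):
    """Map one scenario to the actions it favors (illustrative only)."""
    actions = {"train and develop employees on AI"}  # pays off in every future
    if scenario["trust_in_ai"] == "up" and scenario["regulation"] == "down":
        actions.add("automate customer-facing workflows")
    if scenario["regulation"] == "up":
        actions.add("invest in compliance and audit tooling")
    return actions

# Enumerate every combination of lever settings ...
scenarios = [dict(zip(levers, combo)) for combo in product(*levers.values())]
# ... and keep only the moves that make sense across all of them.
robust = set.intersection(*(favored_actions(s) for s in scenarios))
print("Do these first, in any future:", robust)
```

Running this prints the employee-training action alone, which is the point of the exercise: the moves that survive every scenario are the ones to start on today.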
Ross: That’s really, really good. And I can just, I can just imagine a visual framework laid out just as you’ve described. And I think that would be extremely useful for any organization.
So you mentioned trust. There’s obviously multiple layers of trust. There’s trust in institutions. There’s trust in companies—as you mentioned, in financial customer service, very relevant. There’s trust in society. There’s trust in AI. There’s trust in your peers.
And so this is going to be fundamental. Of course, your degree of trust—or appropriate trust—in AI systems is a fundamental enabler or determinant of how you can get value from them. Absolutely.
So how do we nurture appropriate trust, as it were, within workplaces with technology in order to be able to support something which can be as well-functioning as possible?
MJ: Yeah. I mean, I think trust is foundationally going to remain the same, right? Which is: do you know what is the right thing to do, and do people believe that you’re going to consistently execute on that right thing, right?
So companies that have values, that have principles that are well-defined, are going to continue to capitalize on that. There’s no technology that’s going to change that.
Trust becomes more complicated when you bring in something like AI that is very, very persuasive and mimics the human side so well that people have difficulty differentiating, right?
So, for example, I run a sales team. And in sales, often people use generative AI to overcome objections. That is a great usage of generative AI. However, where do you draw the line between that—between persuasion and manipulation—and between manipulation and fraud, right?
I don’t think we need technology to help us draw the line. I think internally, you have to know that as a business. And you have to train your employees to know where the line is, right?
Ethics is always going to be something that the law can’t quite contain. The law is always what’s legal, and it’s sort of the bottom of the ethics barrel, in my opinion, right? So ethics is always a higher calling.
So having a view of what ethical, accountable, responsible AI looks like in your organization—having guardrails around it, writing up use cases, doing the training, having policies around what it looks like in your industry.
In many industries, transparency is going to be a very big factor, right? Do people know and do they want to know when they’re talking to a human versus talking to generative AI, right?
So there’s customer expectations. There’s a level of consistency that you have to deliver in your use cases. And if the consistency varies too much, then you’re going to create mistrust, right?
There’s also bias in all of the data that every single company is working with. So being able to safeguard against that.
So there are key elements of trust that are foundationally the same, but I think generative AI adds in a layer of complexity. And companies are going to be challenged to really understand: how have they built trust in the past, and can they continue to capitalize and differentiate that?
And those that are rushing to use generative AI use cases that then have the byproduct of eroding trust—including trust from their own employees—that’s where you see a lot of the backlash and problems.
So it pays to really think through some of these things, right? Where are you deploying use cases that will engender credibility and trust? And where are you deploying use cases that may seem like a short-term gain—until a bad actor, a misuse, or something happens on the internet?
Which, with deepfakes, is now very easy to do. Your reputation becomes very brittle if you don't have a foundational understanding of whether your customers and employees trust that you know what's right—because only then can you lead them there.
Ross: Yeah. In essence, trust can be well-placed or misplaced. And generally, people do have a pretty good idea of whether people, technology, and institutions are trustworthy or not.
And trustworthiness is ultimately reflected in people's attitudes, which flow through to business outcomes.
So I think the key here is that you have to come from the right place. So having the ethical framework—that will come through. That will be visible. People will respond to it.
And ultimately, customers will go to those organizations that are truly trustworthy, as opposed to those that pretend to be trustworthy.
MJ: And I think trust has a time dimension. There's a time dimension with any technology, which is: you have to do things consistently, right?
Integrity is not a one-day game. It’s a marathon. It’s not a sprint. And so if you continue to be consistent, you can explain yourself when you make mistakes, right?
You know how to own up to it. You know what to say. You know how to explain it to people in a real way that they can understand.
That's the hard part about trust—it's earned over time, and it can be depleted very quickly. And I think many, many companies have been burned by not understanding that.
But overall, it is still about doing the right thing consistently for the majority of the time and owning up to mistakes.
And to the extent that having an ethical AI framework and policy can help you be better at that, then I think those use cases and organizations and companies will be more successful.
And to the extent that you’re using it and it creates this downstream effect of eroding that trust, then it is extremely hard to rebuild that again.
Ross: Which takes us to leadership and leadership development. Of course, one foundation of leadership is integrity. There are many things about leadership that aren't changing—and perhaps some aspects that are changing in a very, very fast-moving world.
So what are your thoughts on how we can develop effective leaders, be they young or not so young, who can be effective in this pretty wild world we live in?
MJ: Leadership, as always, is a journey, right? There are two things that, in my mind, leadership comes back to. One is experience. The other is the dimension we already mentioned, which is time.
As a leader—first of all, I encourage all senior leaders, the people in the highest seats of their companies, to really get into the weeds with generative AI. Don't outsource it to other people. Don't give it to your youngest employees. Don't give it to third-party vendors. Really engage with this tool.
Because they actually have the experience and the expertise to understand where it’s working and where it’s not working, right? You actually recognize what a good product looks like, what’s a good outcome, what seems like it’s not going to work.
A great marketing leader lives in the minds of their customers, right? So you're going to know when it produces something that's not hitting the voice, not speaking to your customers—and you're going to keep training and working with it. A new marketing analyst is not going to have any idea, right?
And also as a great leader, once you actually get into the guts of these tools and start to learn with it, then it is, as we mentioned before, your role to think about:
How do I create the strategy around where I’m going to augment my company—the growth, the business, the profit, and the people? What am I going to put in place to help foster that curiosity? Where am I going to allow for use cases to break those constraints, to create this hybrid model where both AI can be used and humans can be more successful?
What does being more successful mean outside of just making more money, right? Because there’s a lot of ways to make more money, especially in the short term. So defining that after having learned about the tool—that’s really the challenge that every leader is going to face.
You have this vastly changing landscape. You have more complexity than you've ever dealt with, right? You have people whose identities are very much shaped by technology and who are working out their own self-worth with respect to these tools.
Now you have to come in and be a leader and address all of these dimensions. And exactly what you mentioned before, this idea of being a multidimensional leader is starting to become very important, right?
You can’t just say, “I’m going to take the company to this.” Now I have to think about: how do I do it in a way that’s responsible? And how do I do it in a way that guarantees long-term success for all of the stakeholders that are involved?
These questions have never really changed for leadership, but they certainly take on a new challenge when it comes to these tools that are coming in.
So making strategic decisions, envisioning the future, doing scenario planning, using your imagination—and most of all, having a level of humility—is really important here.
Because this idea of being able to predict the future, settle into it, and charge in—that looks fun on paper. It's very flashy. And I understand there are lots of press releases; it's a great story.
The better story is someone who journals, takes time, really thinks about what this means, and recognizes that they don’t know everything. And we are all learning. We’re all learning. There’s going to be really interesting things that come up, and there’s going to be new challenges that come up.
But isn’t that what makes leadership so exciting, though, right? If everyone could do it, then that would be easy, right?
This is the hard thing. I want leaders to go and do the hard thing, because that's what makes it amazing. And that's what AI is for—it's supposed to free up your constraints and help you do harder, more difficult things, and take on more challenges, right?
And that’s where I think we can truly all augment ourselves.
Ross: Yes, it is. Any effective leader is on a path of personal growth. They are becoming more. Otherwise, they would not be fulfilling the potential of the people in the organization they lead—let alone themselves, right?
So to round out, what are a few recommendations or suggestions to listeners around how they can help augment themselves or their organizations—and grow into this world of more possibilities than ever before?
MJ: Yeah. So my best advice is asking people to separate the product from the person, right? You can use AI to create a better product, but in doing so, understand—is that making you a better person, right? Is that making you better at the thing that you actually want to do?
We know people still have to understand the product. But even so—if your goal is to be a better writer, for example, and you use generative AI to create beautiful pieces—is that helping you be a better writer?
Because if it’s not, that may not be the best use case. Maybe you use it for idea generation or for copy editing. So being able to separate that and really understanding that is going to be important.
The other thing is: understand what parts of your identity you really value, that you want to protect, right? And don’t then use these tools that are going to slowly chip away at that identity. Really challenge yourself.
The interesting thing about AI—until we get to AGI—is that it is always going to validate you. It is always going to support what you want to do. You're going to give it data, and it's going to do what you tell it to do. So it's not going to challenge you, right?
It’s not going to make you better by calling you out on stuff that your friends would—unless you prompt it, right? Unless you say, “Critique how I can be better. Help me think through how I can be better.”
And using it in that way is going to help you be a better leader. It’s going to help you be a better writer, right? So making sure that you’re saving room to say, “Hey, yes, I’m talking to this machine,” but using it to make you better—and separating the product you’re going to create and the person you want to become.
Because no one is going to help you be a better person unless you really want to make an effort to do that. And so that, I think, is really key—both in your professional and personal life—to say:
What are the goals I really want to attain professionally and personally? I’m going to really keep my eye on that. And how do I make sure that I use AI in a way that’s going to help me get there—and also not use it in a way that doesn’t help me get there?
Ross: I think that's really, really important, and not everyone recognizes it. Asking: how do we use this to make me better—better at what I do, a better person?
Without that intent, you won't achieve it. So that's very important.
So where can people follow you and your work?
MJ: Well, I post a lot on LinkedIn, so you should always look me up on LinkedIn.
I work for Credibly, and we recently launched a credibly.ai webpage where we're constantly telling stories about what we're doing.
But I’m very passionate about this stuff, and I love to talk to people about it. So if you just look me up on LinkedIn and connect with me and want to get into a dialog, I’m more than happy to just share ideas.
I do think this is one of the most interesting, seismic shifts in our society. But I’m a big believer in its ability—when managed correctly—to unlock more human potential.
Ross: Fantastic. Thank you so much for your time, your insight, and your very positive energy and how we can create the future.
MJ: Thanks, Ross.

May 28, 2025 • 36min
Sam Arbesman on the magic of code, tools for thought, interdisciplinary ideas, and latent spaces (AC Ep5)
“Code, ultimately, is this weird material that’s somewhere between the physical and the informational… it connects to all these different domains—science, the humanities, social sciences—really every aspect of our lives.”
– Sam Arbesman
About Sam Arbesman
Sam Arbesman is Scientist in Residence at leading venture capital firm Lux Capital. He works at the boundaries of areas such as open science, tools for thought, managing complexity, network science, artificial intelligence, and infusing computation into everything. His writing has appeared in The New York Times, The Wall Street Journal, and The Atlantic. He is the award-winning author of books including Overcomplicated, The Half-Life of Facts, and The Magic of Code, which will be released shortly.
Website:
Sam Arbesman
LinkedIn Profile:
Sam Arbesman
Books
The Magic of Code
The Half-Life of Facts
Overcomplicated
What you will learn
Rekindling wonder through computing
Code as a universal solvent of ideas
Tools for thought and cognitive augmentation
The human side of programming and AI
Connecting art, science, and technology
Uncovering latent knowledge with AI
Choosing technologies that enrich humanity
Episode Resources
Books
The Magic of Code
As We May Think
Undiscovered Public Knowledge
People
Richard Powers
Larry Lessig
Vannevar Bush
Don Swanson
Steve Jobs
Jonathan Haidt
Concepts and Technical Terms
universal solvent
latent spaces
semantic networks
AI (Artificial Intelligence)
hypertext
associative thinking
network science
big tech
machine-readable law
Transcript
Ross Dawson: Sam, it is wonderful to have you on the show.
Sam Arbesman: Thank you so much. Great to be talking with you.
Ross: So you have a book coming out. When’s it coming out?
Sam: It comes out June 10. The name of the book is The Magic of Code, and it's about the wonders and weirdness of computing—viewing computation and code and everything around computers less as a branch of engineering and more as a humanistic liberal art.
When you think of it, it should not just talk about computer science, but should also connect to language and philosophy and biology and how we think, and all these different areas.
Ross: Yeah, and I think these things are often not seen in the biggest picture—not just, all right, this is something that runs my phone or whatever, but as an intrinsic part of thought, of the universe, of everything.
So code, in its many manifestations, does have magic, as you have revealed. And one of the things I love very much—beyond the title itself—is that you talk about wonder.
I think when I look at the change, I see that humans are so quick to take things for granted, and that takes away from the wonder of what it is we have created. I mean, what do you see in that? How do we nurture that wonder, which nurtures us in turn?
Sam: Yeah. I mean, I completely agree that we are—I guess the positive way to think about it is—we adapt really quickly. But as a result, we kind of forget that there are these aspects of wonder and delight.
When I think about how we talk about technology more broadly, or certain aspects of computing, computation, it feels like we kind of have this sort of a broken conversation there, where we focus on it as an adversary, or we are worried about these technologies, or sometimes we’re just plain ignorant about them.
But when I think about my own experiences with computing growing up, it wasn't just that. It was full of wonder and delight. My family's first computer was the Commodore VIC-20, and I remember discovering that.
Then there was my first experience using a computer mouse with some of the early Macintoshes. And then my first programming experiences, and thinking about fractals and screensavers and SimCity and all these things.
These things were just really, really delightful and interesting. And in thinking about them, they drew together all these different domains. And my goal is to kind of try to rekindle that wonder.
I'm actually reminded—I don't think I mention this story in the book—of a story about my grandfather. He lived to the age of 99. He was a lifelong fan of science fiction, and he had basically read science fiction since the modern dawn of the genre.
Basically, I think he read Dune when it was serialized in a magazine. And I remember when the iPhone first came out, I went with my grandfather and my father. We went to the Apple Store, and we went to check it out. We were playing with the phone.
And my grandfather at one point says, “This is it. Like, this is the object I’ve been reading about all these years in science fiction.”
And we’ve gone from that moment to basically complaining about battery life or camera resolution. And it’s fair to want newer and better things, but we kind of have to take a beat and say, no, no—the things that we have created for ourselves are quite spectacular.
And so my book tries to rekindle that sense of wonder. And as part of that process, tries to show that it’s not just this kind of constant march of better camera resolution or whatever it is. It’s also this process of touching upon all these different areas that we think about—whether it’s the nature of life or art or all these other things.
And I think that, hopefully, is one way of kind of providing this healthier approach to technology, rekindling this wonder, and ultimately really trying to connect the human to the machine.
Ross: Yes, yes, because we have—what I always point out is that we are inventors, and we have created extraordinary things. We are the creators, and we have created things in our own image. We have a relationship with them, and that relationship is evolving.
These are human artifacts. Why they matter, and how they matter, is in relationship to us, which, of course, goes to— Sorry, you go on.
Sam: Oh no, I was just gonna agree with you. Yeah. I feel like, right, these are human artifacts, so therefore we should think about how can they make us the best versions of humans, or the best versions of ourselves, as opposed to sometimes the worst versions of ourselves.
Right? So there’s a sense of—we have to be kind of deliberate about this, but also remember, right, we are the ones who built these things. They’re not just kind of achieving escape velocity, and then we’re stuck with the way in which they make us feel or the way in which they make us act.
Ross: All right. Well, you’re going to come back in a moment, and I’m going to ask you precisely that—how do we let technology make us the best we can be?
But sort of on the way there, there are a couple of very interesting phrases you use in the book. “Connection machines”—these are connection machines. Also “universal solvent.” You use this phrase both at the beginning and the end of the book.
So what do you mean by “universal solvent”? In what way is code a universal solvent? What does that mean?
Sam: Yeah, so the idea is—it’s by analogy with water. Water is kind of a universal solvent; it is able to dissolve many, many different things within itself.
I think about computing and code and computation as this universal solvent for many aspects of our lives—kind of going back to what I was saying before, when we think about language. It turns out that thinking about code actually can provide insight into how to think about language.
If we want to think about certain ideas around how ancient mythological tales are transmitted from generation to generation—it turns out, maybe with a little bit of stretching, but you can actually connect it to code and computation and software as well.
And the same kind of thing with biology, or certain aspects of trying to understand reality through simulation. All these things have the potential to be dissolved within computing.
Now, it could be that maybe I’m just being overly optimistic with code, like, “Oh, code can do this, but no other thing can do that.” It could be that lots of other fields have the ability to connect.
Certainly, I love this kind of interdisciplinary clashing of different ideas. But I do think that the ideas of computation and computing—they are beyond just what we would maybe categorize as computer science or programming or software development or engineering.
When we think about these ideas—and it turns out there’s a lot of really deep ideas within the theory of computation, things like that—when we think about those ideas or the areas that they connect with, it really does impinge upon all these different domains: of science, of the humanities, of the social sciences, of really just every aspect of our lives.
And so that’s kind of what I’m talking about.
And then you also mentioned this kind of, like, this supreme connection machine. And so I quote this from—it was, I believe, the novelist Richard Powers. He’s talking about the power of the novel—like, certain novels can really, in the course of their plot and their story, connect so many different ideas.
And I really agree with that. But I also think that we can think the same thing about computing as well.
Ross: You know, if we think about the various layers of science—where physics is the study of nature and the universe—physics is basically a set of equations. It is maths. And these are essentially algorithms, which we can express in code.
But this extends to the social layers—the algorithms that drive society. I also recall Larry Lessig's book Code, from 25 years ago, with its parallels between code as law and code as software.
In fact, a recent innovation in New Zealand is machine-readable law—basically embedding legislation in code—so that it is unambiguous and can be read by machines, which can then implicitly comply with it.
So code has these multiple facets, from social structures down to the nature of the universe.
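As a concrete illustration of the rules-as-code idea, here is a minimal sketch of a statutory rule expressed as an executable function. The rule and its thresholds are invented for illustration and do not reflect any actual New Zealand legislation.

```python
# Hypothetical eligibility clause encoded as unambiguous, machine-readable logic.
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    annual_income: float
    is_resident: bool

def eligible_for_benefit(a: Applicant) -> bool:
    """Invented clause: resident, aged 65 or over, income under $30,000."""
    return a.is_resident and a.age >= 65 and a.annual_income < 30_000

# A machine can now apply the clause directly, with no interpretive ambiguity.
print(eligible_for_benefit(Applicant(age=70, annual_income=25_000, is_resident=True)))  # True
```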
Sam: I love that, yeah. And I do think there is something deep there, right? Because code, ultimately, is this very weird thing.
We think of it as kind of text, like on a screen, but it is only really code when it’s actually able to be run. And so it’s this kind of thought stuff—these words—but they’re very precise, and they also are then able to act in the world.
And so it’s kind of this weird material that’s somewhere between the physical and the informational. It’s definitely more informational, but it kind of hinges on the real world. And in that way, it has this kind of at least somewhat unique property.
And as a result, I think it can connect to all these other different domains.
Ross: So the three major sections of your book—in the middle one is Thought. So, of course, we can have code as a manifestation of thought. We can have code which shapes thought.
And one of the chapters is titled Tools for Thought, which has certainly been a lot of what we’ve looked at in this podcast over a long period of time.
So, let’s start to dig into that. At a high level, what do you describe as—what do you see as—tools for thought?
Sam: Yeah, I mean, so tools for thought—I mean, certainly, there’s a whole domain of software within this kind of thing.
And I actually think that there’s a really long history within this, and this is one of the things I also like thinking about, and I do a lot in the book as well, which is kind of try to understand the deeper history of these technologies—trying to kind of understand where they’ve come from, what are the intellectual threads.
Because one of the other interesting things that I’ve noticed is that a lot of interesting trends now—whether it’s democratizing software development or tools for thought or certain cutting-edge things in simulation—these things are not that new.
It turns out most of these ideas were present, if not at the very advent of the modern digital computer, then they were at least around relatively soon after. But it was the kind of thing where these ideas maybe were forgotten, or they just took some time to really develop.
So, for example, one of the classic beginnings of tools for thought—well, I'll take a step back. Probably the best way to think about tools for thought is in the context of the classic Steve Jobs line, "the bicycle for the mind."
I think he talked about it initially in the 1970s, and it was based on a Scientific American article he had read, which had a chart of the energy efficiency of mobility for different animals.
And I think it was, like, the albatross was really efficient, or whatever it was, and some other ones were not so efficient. And humans were pretty mediocre.
But then things changed—if you put a human on a bicycle, suddenly they were much, much more energy efficient, and they were able to be extremely mobile without using nearly as much energy.
And his argument is that in the same way that a bicycle provides this efficiency and power for mobility for humans, computers can be these bicycles for the mind—kind of allowing us to do this stuff of thought that much more efficiently.
Ross: Well, I guess the thing is—it's a nice concept.
Sam: Oh yeah, it’s very popular.
Ross: The question is, how?
Sam: Yes, yeah. So, how does it work?
So the classic place where people start a lot of this—and I actually discuss an even deeper prehistory—is with Vannevar Bush and his 1945 essay in The Atlantic, As We May Think.
And within it—he’s discussing a lot of different things in this article—but within it, he describes this idea of a tool called the Memex, which is essentially a thought experiment. And the way to think about it is, it’s kind of like a desk pseudo-computer that involves, I think, microfilm and projections.
But basically, he's describing a personalized version of the web, where you can connect together different bits of information, articles, and things you're reading, and traverse all of that information. He kind of had the idea for the web—at least if you squint a lot. It was not a reality; the technology was not really there yet, so he described it using the cutting-edge technology of the day, microfilm.
And then people kind of proceeded with lots of different things around hypertext or whatever. But in terms of one of the basic ideas there, in terms of what is that tool for thought—it is ultimately the idea of being able to stitch together and interconnect lots of different kinds of information.
Because in the early days of computing, I think a lot of people thought about computers from the perspective of either managing large amounts of information or stepping through things in a linear fashion.
And there was this other trend saying, no, no—things should be interconnected, and it should be able to be accessed non-linearly, or based on similar topics, or based on, ultimately, the way in which our brains operate. Because our brains are very associative. Like, we associate lots of different things. You’ll say one thing, it’ll spark a whole bunch of different ideas in my mind, and I’ll go off in multiple different directions and get excited about lots of different things.
And we should have a way, ultimately, of using computers that enhances that kind of ability—that associative ability. Sometimes maybe complement it, so it’ll make things a little bit more linear when I want to go very associative.
But I think that’s ultimately the kinds of tools for thought that people have talked about.
But then there’s other ones as well. Like, using kind of more visual methods to allow you to manipulate information, or see or visualize or see things in a different way that allows you to actually think different thoughts.
Because ultimately, one of the nice things about showing your work or writing things down on paper is it allows you to have some spatial representation of the ideas that you’re exploring, or write all the things down that maybe you can’t immediately remember in your short-term memory.
And ultimately, what it comes down to is: humans are limited creatures. Our memories are not great. We’re distractible. We associate things really well, but it’s not always nearly as systematic as we want.
And the idea is—can a computer, as a tool for thought, augment all these things? Make the way in which we think better, as well as offset all the limitations that we have?
Because we’re pretty bad when it comes to certain types of thinking. And so I think that is kind of the grand vision.
And I can talk about how certain trends with AI are kind of helping actually cash a lot of these promissory notes that people have tried to do for many, many years.
But I think that’s kind of one broad way of thinking about how to think of this broad space of tools for thought—which is recognizing humans are finite, and how can we do what we want to do already better, which is think.
And to be clear, I don’t want computers to act as sort of a substitute for thought. I enjoy thinking. I think that the process of thought itself is a very rewarding thing. And so I want these kinds of tools to allow me to feel like the best version of the thinking Sam—as opposed to, “Oh no, this kind of thing can think for me. I don’t have to do that.”
Ross: So you mentioned—you start off from looking around the sense of how it is you can support or augment the implicit semantic networks of our thinking.
These are broad ideas where, essentially, we do think in semantic networks of various kinds. And there are ways in which technology can support it.
So I suppose, coming to the present, as you say, AI has been able to bring some of these to fruition. So what specifically have you seen, or do you see emerging around how AI tools can support us in specifically that richer, more associative or complementary type of prostheses?
Sam: Yeah, so one basic feature of AI is this idea of being able to embed huge amounts of information in these kind of latent spaces, where there are some massively high-dimensional representations of articles or essays or paragraphs—or just information in general.
And the locations of those different things often are based on proximity in some sort of high-dimensional semantic space.
And the way I think about this is—well before a lot of these current AI advances, there was an information scientist by the name of Don Swanson. He wrote a paper, I think in the mid-1980s, called Undiscovered Public Knowledge.
And the idea behind it is: imagine some scientific paper somewhere in the vast scientific literature that says "A implies B."
Then somewhere else in the literature—could be in the same subfield, could be in a totally different field—there’s another paper that says “B implies C.” And so, if you were to read both papers and combine them, you would know that perhaps “A implies C” by virtue of combining these two papers together.
But because the scientific literature is so vast, no one has actually ever read both of these papers. And so there is this knowledge that is kind of out there, but it’s undiscovered—this kind of undiscovered public knowledge.
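Swanson's idea can be sketched in a few lines: treat each paper as a claim that one thing implies another, then join claims transitively to surface hypotheses that no single paper states. The claims below are abstract placeholders, not his actual findings.

```python
# Each tuple is one published claim: (cause, effect). Placeholders only.
claims = [
    ("A", "B"),  # one paper: A implies B
    ("B", "C"),  # another paper, perhaps in a different field: B implies C
    ("D", "E"),  # an unrelated result
]

def transitive_hypotheses(claims):
    """Join A->B with B->C claims to propose candidate A->C links."""
    by_cause = {}
    for cause, effect in claims:
        by_cause.setdefault(cause, []).append(effect)
    for cause, effect in claims:
        for downstream in by_cause.get(effect, []):
            if downstream != cause:  # skip trivial A -> A loops
                yield (cause, downstream)

for a, c in transitive_hypotheses(claims):
    print(f"Candidate hypothesis (stated in no single paper): {a} implies {c}")
```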
He was not content to leave this as a thought experiment. He actually used the cutting-edge technology of the day, which was, I think, keyword searches over medical databases—I don't know if they were even online at the time.
And he was actually able to find some interesting medical results. I think he published them in a medical journal, which is kind of exciting.
This is a very rudimentary way of asking: can we find relationships between things that are not otherwise connected? In this case it required keyword searches, so it was pretty limited.
Once you eliminate some of those barriers, the ability to stitch together knowledge that might otherwise never be connected is enormously powerful—and now completely available.
And I think AI, through this kind of idea of embedding information within latent spaces, allows for this kind of thing.
So the way I think about this is—if you know the specific terms, maybe you can find those specific papers you need. But oftentimes, people are not specifying things in the exact same way.
Certainly, if they are in different domains and different fields, there are jargon barriers that you might have to overcome.
For example, back when I was a postdoc—I worked in the general field of network science—and I was part of this interdisciplinary email list. I feel like every week, someone would email and say, “Oh, how do I do this specific network metric?”
And someone else would invariably email back and say, “Oh, this has been known for 30 years in physics or sociology,” or whatever it was.
And it was because people just didn’t even know what to search for. They couldn’t find the information that was already there.
And with these much more fuzzy latent spaces, a lot of these jargon barriers are just entirely eliminated.
And so I think we now have an unbelievable possibility for being able to stitch together all this information—which will potentially create new hypotheses that can be tested in science, new ideas that could be developed—because these different fields are stitched together.
Yeah, there’s so many things. And so that is certainly one area that I think a lot about.
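A minimal sketch of the latent-space idea, assuming the open-source sentence-transformers library (the model name is one common choice): sentences expressing the same concept in different fields' jargon land near each other in embedding space, so similarity search crosses the keyword barriers described above.

```python
# Toy demonstration: the query shares almost no keywords with the first two
# documents, yet cosine similarity in the embedding ("latent") space ranks
# both of them far above the unrelated sentence.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # one common open model

docs = [
    "Centrality measures identify influential nodes in a graph.",                # network science
    "Sociometric status reflects how prominent a person is in a social group.",  # sociology
    "The recipe calls for two cups of flour.",                                   # unrelated
]
query = "How do I find the most important actors in a network?"

doc_vecs = model.encode(docs, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_vec, doc_vecs)[0].tolist()
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```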
Ross: Yeah, so just one—I mean, in that domain, absolutely, there’s extraordinary potential to, as you say, reveal the latent connections between knowledge—complementary knowledge—which is from our vast knowledge we’ve created as humanity.
There are many more connections between those to explore, which will come to fruition.
This does come to the humans-plus-AI piece, where, on one level, the AI can surface all of these connections which might not have been evident, but then come to the fore. So that is now a critical part of the scientific process.
I mean, arguably, a lot of science is connecting what was already there, and now we're able to supercharge that.
So in this humans-plus-AI world, where’s the role of the human there?
Sam: So that’s a good question. I mean, I would say, I’m hesitant to say that there’s any specific task that only a human can do forever. It seems to be—any time you say, “Oh, only humans can do this,” we are invariably proven wrong, sometimes almost instantly.
So I kind of say this a lot with a lot of humility. That being said, I do think in the near term, there is a great deal of space for humans to act in this almost managerial role—specifically in terms of taste.
Like, what are the interesting areas to focus on? What are the kinds of questions that are important?
And then, once you aim this enormously powerful tool in that direction, then it kind of goes off, and it’s merciless in connecting things and providing hypotheses and suggestions and ideas and potential discoveries and things to work on.
But knowing the kinds of questions and the kinds of things that are important or that will unlock new avenues—it seems right now (maybe this will no longer be the case soon), but at least right now, I still think there’s an important role for humans to provide that sense of taste or aim, in terms of the directions that we should be focusing on.
Ross: So going back to that question we touched on before—how do we as humans be the best we possibly can be?
Now that we have—well, I suppose this is more a general, broader question—but also now that we have extraordinary tools, including ones of code in various guises, to assist us, how do we be the best we can be?
Sam: Yeah, I think that is the singular question of this age, in this moment.
And in truth, I think we should always be asking these questions about, okay, being the best versions of ourselves. How do we create meaning and purpose and things like that?
I do think a lot of the recent advances with AI are sharpening a lot of these kinds of things.
Going back to what I was saying before—at many moments throughout history, we’ve said, “Oh, humans are distinct from animals in certain ways,” and then we realized, “Oh, maybe animals can actually do some of those kinds of things.”
And now, we are increasingly doing the same kind of thing with AI—saying, “Oh, AI can maybe recommend things to purchase, but it can never write crappy poetry,” and guess what? Oh, it actually can write pretty mediocre poetry too.
So for me, I kind of view it as—by analogy, there's this somewhat disparaging idea within theology about how you define the idea of God. Some people will say, “Oh, it's simply anything that science cannot explain yet.”
This is called the “God of the gaps.”
And of course, science then proceeds forward, explaining various things in astronomy, cosmology, evolution, all these different areas. And suddenly, if you subscribe to this idea, your conception of God gets narrower and narrower and might eventually vanish entirely.
And I feel like we are doing the same kind of thing when it comes to how we think about AI and humanity. Like, “Oh, here are the things that AI can do, but these are the things that humans can do that AI can never do.”
And suddenly, that list gets shorter and shorter.
So for me, it’s less about what is uniquely human—because that uniqueness is sort of a moving target—and more about what is quintessentially human.
What are the things—and this goes back to exactly your question—what are the things that we truly want to be focusing on? What are the things that really make us feel truly human—like the best versions of ourselves?
And those answers can be very different for many people. Maybe you want to spend your time gardening, or spending time with your family, or whatever it is.
But certainly, one aspect of this—related to tools for thought—is the idea that I do think that certain aspects of thought and thinking are a quintessentially human activity.
Not necessarily unique, because it seems as if AI can actually do, if not real thought, then a very accurate simulacrum of thought.
But this is something that does feel quintessentially human—that we actually want to be doing ourselves, as opposed to outsourcing entirely.
So I think, as a society, we have to say, “Okay, what are the things that we do want to spend our time doing?” and then make sure that our technologies are giving us that space to do those kinds of things.
And I don’t have all the answers of what that kind of computational world will look like exactly, or even how to bend the entire world of big tech toward those ends. I think that is a very large and complicated issue.
But I do think that these kinds of questions—the ones you asked me and the ones I’m talking about—these are the kinds of questions we need to really be asking as a society.
You’re seeing hints of that, even separate from AI, in terms of how we’re thinking about smartphone usage—especially smartphone usage among children.
Like, Jonathan Haidt has been talking about these things over the past several years, and really caused—at least in the United States—kind of a national conversation around, “Okay, when should we be giving phones to children? Should we be giving them phones? What kinds of childhoods do we want our children to have?”
And I feel like that’s the same kind of conversation we should be having more broadly for technology: What are the lives we want to have?
And given that, how can we pick and choose the kinds of technologies we want?
And I do think—even though some of these things are out of our hands, in the sense that I cannot unilaterally say, “Oh, large social media giant, change the way your algorithm operates”—they’re not going to listen to me.
But I can still say, “Oh, in the absence of you doing the kinds of things that I want, I don’t have to play your game. I don’t have to actually use social media.”
So there is still some element of agency in terms of picking and choosing the kinds of technologies you want.
Now, it’s always easier said than done, because a lot of these things have mechanisms built in to make you use them in a certain way that is sometimes against your better judgment and the better angels of our nature.
But I still think it is worth trying for those kinds of things.
So anyway, that’s a long way of saying I feel like we need to have these conversations. I don’t necessarily have all the answers, but I do think that the more we talk about what are the kinds of things that make us feel quintessentially human, then hopefully we can start picking and choosing the kinds of technologies that work for that.
So, like, if we love art, what are the technologies that allow us to make better art—as opposed to just creating sort of, I don’t know, AI slop, or whatever people talk about?
Depending on the specific topic you’re focusing on, there’s lots of practicalities. But I do think we need to be having this conversation.
Ross: So just rounding out, in terms of the ideas in your book, which are very wide-ranging: what is your advice, or what are your suggestions, for anything people could do to enhance themselves, become better versions of themselves, or be better suited to the world in which we are living?
Sam: That is a great question.
And I think I would say it’s related to kind of just being deliberate—whether it’s being deliberate in the technologies you adopt or being deliberate in terms of the kinds of things that you want to be spending your time on.
And it's even beyond technology. It involves saying, “Okay, what are the kinds of things I want to do, or the kind of life I want to live?” And then picking and choosing the kinds of technology that really feel like they enhance those things as opposed to diminish them.
Because, as much as I talk about computation as this universal solvent that touches upon lots of different things, computing is not all of life.
As much as I think there is the need for reigniting wonder and things like that, not everything should be computational. I think that’s fine—to have spaces where we are a little bit more deliberate about that.
But going back to the sense of wonder, I also think ultimately it is about trying to find ways of rekindling that wonder when we use certain aspects of our technologies.
Like, if we feel like, “Oh, my entire technological life is spent in this, I don’t know, fairly bland world of enterprise software and social media,” there’s not much wonder there. There’s maybe anger or rage or various other kind of extreme emotions, but there’s usually not delight and wonder.
And so I would say, in a practical sense, a good rule of thumb is that the technologies worth adopting are the ones that spark that sense of wonder and delight.
Because if they do that, then they’re probably at least directionally correct in terms of the kinds of things that are maybe a little bit more humane or in line with our humanity.
Ross: Fantastic. So where can people go to find out more about your work and your book?
Sam: So my website—it’s just my last name, Arbesman. So arbesman.net is my website. And on there, you can read about the book.
I actually made a little website for this new book The Magic of Code. It’s just themagicofcode.com. So if you go to that, you can find out more about the book.
And if you go on arbesman.net, you can also find links to subscribe to my newsletter and various other sources of my writing.
Ross: Fantastic. Loved the book, Sam. Wonderful to have a conversation with you. Thanks so much.
Sam: Thank you so much. This was wonderful. I really appreciate it.

May 21, 2025 • 26min
Bruce Randall on energy healing and AI, embedding AI in humans, and the implications of brain-computer interfaces (AC Ep4)
“I feel that the frequency I have, and the frequency AI has, we’re going to be able to communicate based on frequency. But if we can understand what each is saying, that’s really where the magic happens.”
– Bruce Randall
About Bruce Randall
Bruce Randall describes himself as a tech visionary and Reiki Master who explores the intersection of technology, human consciousness, and the future of work. He has over 25 years of technology industry experience and is a longtime practitioner of energy healing and meditation.
Website:
Bruce Randall
LinkedIn Profile:
Bruce Randall
What you will learn
Exploring brain-computer interfaces and human potential
Connecting reiki and AI through frequency and energy
Understanding the limits and possibilities of neural implants
Balancing intuition, emotion, and algorithmic decision-making
Using meditation to sharpen awareness in a tech-driven world
Navigating trust and critical thinking in the age of AI
Imagining a future where technology and consciousness merge
Episode Resources
Companies & Organizations
Neuralink
Synchron
MIT
Technologies & Technical Terms
Brain-computer interfaces
AI (Artificial Intelligence)
Agentic AI
Neural implants
Hallucinations (in AI context)
Algorithmic trading
Embedded devices
Practices & Concepts
Reiki
Meditation
Sentience
Consciousness
Critical thinking
Transcript
Ross Dawson: Bruce, it’s a delight to have you on the show.
Bruce Randall: Well, Ross, thank you. I’m pleased to be on the show with you.
Ross: So you have some interesting perspectives on, I suppose, humanity and technology. And just like to, in brief, hear how you got to your current perspectives.
Bruce: Sure. Well, when I saw Neuralink put a chip in Noland's head, he could work the computer mouse with his thoughts. And he said, sometimes it moves on its own, but it always goes where I want it to go.
So that, to me, was fascinating: how, with the chip, we can do things, like communicating with machines by thought, that most humans can't do. With the chip, all of a sudden, all these doors are open now, and we're still human. That's fascinating to me.
Ross: It certainly extends, extending our capabilities. It’s done in smaller ways in the past and now in far bigger ways. So you do have a deep technology background, but also some other aspects to your worldview.
Bruce: I do. I’ve sold cloud, I’ve been educated in AI at MIT, and I built my first AI application. So I understand it from, I believe, from all sides, because I’ve actually done the work instead of read the books.
And for me, this is fascinating because AI is moving faster than anything that we’ve had in recent memory, and it directly affects every person, because we’re working with it, or we can incorporate it in our body to make us better at what we do. And those possibilities are absolutely fascinating.
Ross: So you describe yourself as a Reiki Master. So what is Reiki and how does that work? What’s its role been in your life?
Bruce: Well, Reiki Master is you can connect with the universal energy that’s all around us, and it means I have a bigger pipe to put it through me, so I can direct it to people or things.
And I’ve had a lot of good experiences where I’ve helped people in many different ways. The Reiki and the meditation came after that, and that brought me inside to find who I truly am and to connect with everything that has a vibration that I can connect with.
That perspective connects with AI and where that's going. AI is hardware, but it produces software-type abilities, and so does the energy work that I do. They're similar, but they're very different.
And I believe that everything is a vibration. We vibrate and so forth. So that vibration should be able to come together at some point. We should be able to communicate with it at some level.
Ross: So if we look at the current state of scientific research into Reiki, there seem to be some low-level results in small populations. So it doesn't seem to be a big tick.
There does appear to be something there, but I think it's fair to say there's widespread skepticism in mainstream science about Reiki. So what's your justification, I suppose, for this as a useful perspectival tool?
Bruce: Well, I mean, I’ve had an intervention where I actually saved a life, which I won’t go into here. But my body moved, and I did that, and I said, I don’t know why I’m doing this, but I went with the body movement and ended up saving a life.
To me, that proved to me, beyond a shadow of a doubt, that there’s something there other than just what humans can see and feel. And that convinced me.
Now, it’s hard to convince anybody else. It’s experiential, so I really can’t defend it, other than saying that I have enough experiences where I know it’s real.
Ross: Yeah, and I think that’s reasonable. So let’s come back to that—the analogy or linkage you are painting between the energy, underlying energy and Reiki that you experience, and the AIs, I suppose, augmentation of humans and humanity.
Bruce: Well, everything has a vibration or frequency. So music has a frequency. People have a frequency. And AI has a frequency.
So when you put AI in somebody, there's the ability at some point for them to communicate with that AI beyond electrical-signal communication. And if that can be developed alongside the electrical signal from the AI chip, that person can grow leaps and bounds in all areas, not just intelligence, but they have to develop that first.
Now, AI is creating—or is potentially creating—another class of people. Elon Musk said in the first news conference: if you're healthy and you can afford it, you too can have a chip. So that's a form of commercialization.
You may not need to be a quadriplegic to get a chip. If you can afford it, then you can have a chip potentially too. So that puts commercialization at a very high level.
But when it gets normalized and the price becomes more affordable, I see that as being something that more mainstream people can get if they choose to.
Now, would there be barriers or parameters on that, where you can only go so far with it? Or if you get a chip, can you do whatever you want?
And those are some of the things that I look at as saying we’re moving forward, but we have to do it thoughtfully, because we have to look at all areas of implications, instead of just how fast can we go and how far can we go.
Ross: Yeah, well, I mean, for a long time I've said that if you look at the advancement of brain-computer interfaces, in the first phase, of course, they're used to assist those who are not fully abled.
And then there's a certain point when, through safety and potential advantages, people who are not disabled will choose to use them. That's a point we still haven't reached, and probably aren't even close to at this point.
But still, the massive constraint is the input-output bandwidth of today's brain-computer interfaces. We're still at the “1000 bits per second” level, which is very low bandwidth. There's potential to expand that, but it's still essentially bits.
It is low levels of information in and out. So that's a different thing to what you are pointing to, where there are things beyond simple information in and out. So, for example, the ability to control the computer mouse with your brain…
Bruce: Right. But that’s the first step. And the fact that we got to the first step and we can do that—it’s like we had the Model A, and all of a sudden, a couple decades later, we’ve got these fancy cars.
That’s a huge jump in a relatively short period of time. And with all the intelligence of the people and the creativity of the scientists that are putting this together, I do believe that we’re going to get advances in the short and medium-long term that are really going to surprise people.
On what we can do as humans with AI—either embedded or connected in some way or fashion—because you can also cut the carotid and put a capsule in, and you’ve got AI bots running throughout your body.
Now that’s been proven—that that works—and that’s something that hasn’t gotten a lot of press. But we’ve got other ways that we can access the body with AI, and it’s a matter of: we have to figure out which is best, what the risks are, what the parameters are, and how we best move forward with that.
Ross: Yeah, it sounds like you're referring to Synchron, which is able to insert a stent-like device into a blood vessel in the brain via the jugular vein. But that's not something running through the body; for Synchron, it's simply an access point to the brain.
Which is probably a stronger approach—well, can be—than the Neuralink approach of directly interfacing with the brain tissue.
So, if you think about a brain-computer interface as an input-output device, it's quite simple, in the sense that we can take information into our brain through whatever sense, though that direction is still used a bit less.
And we can also output it—as in, we can basically take thoughts or directions and use that as outputs to devices. So what specifically—can you point to specific use cases that you would see as the next steps for using BCIs, brain-computer interfaces, with AI?
Bruce: Yeah, I think that we’re just in the very beginning of that. And I think that there are ways to connect the human with the AI that can increase where we are right now.
I just don’t think we know the best way to do that yet. We’re experimenting in that. And I think there are many other ways that we can accomplish the same thing.
It’s in the development stages. We’re really on the upward curve of the bell curve for AI, and we’ve got a long way to go before we get to the top.
Ross: Yeah, I asked for specifics. So what specifically do you see as use cases for next steps?
Bruce: Well, for specifics, in science and medicine I see significant use cases, where people can process information faster and better with AI than we can right now. That's pure information.
And then they can take the intelligence they have as humans and analyze things quickly. In an ER situation, a certain number of errors get made from mistakes. AI can fine-tune that so you have fewer errors and can make better choices going forward.
There are many other cases like that. You could be on the floor trading, and everything is a matter of ratios and so forth. Or you could be in an office trading in real time on the machines. At that point, you’re looking at a lot of different screens and trying to make a decision.
If you had AI with you, that would be able to process—speed your processing time—and you could make better decisions faster, because time is of the essence in both of those examples. And AI could help in that.
Now, is that a competitive and comparative advantage? I would say so, but it’s in a good way—especially in the medical field.
Ross: Yes, so you’re talking about AI very generically, so in this idea of humans plus AI decision-making.
So, essentially, you can have human-only decisions, you can have AI decisions. In many cases, the trading—algorithmic trading—is fully delegated to AI because the humans can’t make the decisions fast enough.
So are there any particular structures? What specific ways do you see that AI can play a role in those kinds of decision-making?
I mean, you mentioned the things of being able to point to potential errors or flag those, and so on. What are other ways in which you can see decisions being made in medical or financial or other perspectives where there is an advantage to the human and AI collaboration—as opposed to having them both separate—and the ways in which that would happen?
Bruce: Well, in the collaboration, AI still has hallucinations right now, so you have to get around that in order to make this more reliable.
But once you train AI for a specific vertical, that AI is going to work better in that vertical than in an untrained vertical. So that’s really the magic in how you get that to work better.
And then AI, with agentic capabilities, has the ability to make decisions. And you have to gauge that against the human ability to make decisions, to make sure that it's in line.
You could always put a check and balance in place where, if the AI wanted to do something in a fast-moving environment and you weren’t comfortable with that, you could say no, or you could let it go.
That’s something that could be in an earpiece. It can be embedded. There are many different ways to do that. It could be on a speaker where they’re communicating—that’s an easy way to do it.
As far as other ways to do it: we are sensory creatures. We see, we hear, and we speak, and that's how we take in information. That's what it's going to be geared to.
And those devices are being developed right now so it all works together. But we're not there yet.
But this is where I see it going in both those environments, where you can have a defined benefit for AI working with humans.
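For readers curious how the check and balance Bruce describes might look in practice, here is a minimal sketch in Python: an AI proposes an action, and a human explicitly approves or vetoes it before anything runs. The propose_action and execute_action calls are hypothetical placeholders, not the API of any product mentioned in this conversation.

```python
# Hedged sketch of a human-in-the-loop approval gate: nothing executes
# without an explicit "yes" from the human.
def human_approves(proposed_action: str) -> bool:
    """Show the proposed action and return True only on explicit approval."""
    answer = input(f"AI proposes: {proposed_action!r}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def run_agent_step(agent) -> None:
    action = agent.propose_action()      # hypothetical agent API
    if human_approves(action):
        agent.execute_action(action)     # the "let it go" path
    else:
        print("Vetoed; the agent must re-plan.")  # the "say no" path
```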
Ross: So one of the things which is deeply discussed at the moment is AI’s impact on critical thinking.
Many people are concerned that because we are delegating complex thinking to AI, in many cases we become lazy or we become less capable of doing some of that critical thinking.
Whereas in other domains, some people are finding ways to use AI to be able to sharpen, or to find different perspectives, or find other ways to add to their own cognition.
So what are your perspectives or beliefs, specifically on how it is we can best use AI as a positive complement to our cognitive thinking and critical thinking and our ability to develop it?
Bruce: Well, we think at a very fast rate, and scientists don’t understand the brain yet in its full capacity, and we don’t understand AI to its full capacity.
So I would say with that, we need to work in both areas to better understand them, to find out how we can get to the common denominator where both are going to work together.
Because you’ve got—it’s like having two people—you’ve got, for example, the Agentic AI, which has got somewhat of a personality with data, and then you’ve got us with data and with emotions.
Those are hard to mix when you put the emotions in it, right? We also have a gut feel, which is pretty accurate. When you put all that together, you’ve got conflicts here, and you have to figure out how you’re going to overcome that to work in a better system.
Now, once you get trust with it, you can just rely on it and move forward. But as humans, we have a hard time giving trust to something when it’s important. We rely on our own abilities more than a piece of technology.
So that bridge has to be crossed, and we haven’t crossed that yet. And at the same time, humans have done a pretty good job in some very trying situations.
AI hasn’t been tested in those yet, because we’re very early in the stages of AI. When we get to that point, then we’re going to start working together and comparing—and really answer your question.
Because right now, you’ve got both sides. They both have valid points, but we don’t yet know who’s right.
Ross: Yeah, there's definitely a pathway through a few elements you raised there. One is trust.
How do we get justified trust in systems so they can be useful? And there are conflicts around decision-making: at what point do we trust the validation of our own decision-making or thinking, and when can we effectively become better decision-makers through that external perspective or addition?
So you have a deep practice of meditation, amongst other things. And we are living in a deluge of information, which is certainly continuing to increase.
So what would your advice be for how to stay present and sharp and connected and be able to deal with the very interesting times we live in?
Bruce: Well, that’s a big question, but I’ll give you a short answer for that.
My experience with meditation is I’ve gotten to know myself much better, and it’s fine-tuned who I am.
Now, you can listen to a tape and make incremental moves toward relaxing, but I suggest meditation as a great way to expand in all areas, because it's expanded all areas for me.
And it’s a preference. It’s an opinion based on experience. And everybody has different paths and would have different experiences in that. It’s an option.
But what I tell everybody is—because there are a lot of people that still aren’t into AI to the extent that they need to be—I say take 20 minutes to 30 minutes a day in the vertical that you’re in and understand AI and how it can enable you.
Because if you don’t do that, in two years, you’re going to be looking from behind at the people that have, and it’s going to be very hard to catch up.
Ross: So, slice of time for studying AI and slice of time for meditation, right?
Bruce: Yeah, I do. I do 30 minutes twice a day, and I've fit it in for 12 years in a busy schedule. So it's doable. It may not be easy, but it's doable.
Ross: Yes, yes. Well, I personally attest to the benefits of meditation, though I’m not as consistent as you are.
But I think, yeah, that's where there is some pretty solid evidence—very solid evidence—that meditation is extremely beneficial on a whole range of different fronts, including physical health, as well as mental well-being, ability to focus, and many other things that are extremely useful in the busy world we live in…
Bruce: And the scientific explanation backs that up.
Ross: Yeah. And it's very well validated, for those who have any doubts.
So to round out, I mean, we’ll just paint a big picture. So I’d like to let you go wild. Where—what is the potential? Where can we go? What should we be doing? What’s the future of humanity?
Bruce: Well. That’s a huge question. And AI is not there yet.
But humans—I see, because I’ve been able to do some very unusual things with my combination—I feel that the frequency I have, and the frequency AI has, we’re going to be able to communicate based on frequency.
But if we can understand what each is saying, that’s really where the magic happens.
And I see people—their consciousness increasing—just because humanity is increasing.
And I think in—I mean, they’re discussing sentience and AI. I don’t know. I mean, I understand it, but I don’t know where they’re going with this.
Because a piece of software wasn't born with a soul, so it doesn't have the sentience that a person has. I mean, it can be very intelligent, but it's not going to have that, in my opinion.
Now, will a hybrid come out with person and AI? Doubtful, but it’s possible.
There are a lot of possibilities without a lot of backup for them for the future. But I know that if you promote yourself with meditation and getting to know yourself better, everything else happens much easier than if you don’t.
And I think with AI—I mean, the sky’s the limit. What does the military have that we don’t have with AI, right?
I mean, there’s a lot of smart people working that aren’t in public with AI, and we don’t know where they are. But we know that they’re making progress, because every once in a while we hear something.
And I was watching a video on LinkedIn—they mapped the mouth area, and this person could go through seven different languages while he’s walking and talking, and his lips match the words.
At that point—and this was a month ago—I said, now I'm not sure if I'm watching somebody actually saying something, or if it's AI.
So we make advancements, and then we look at it and say, who can I believe now? Because it’s hard to tell.
Ross: Yes.
Bruce: So I hope that gives what I think is possible in the future. Where we go—who knows?
Ross: Yeah, the future is always unpredictable, but a little bit more now than it ever has been.
And one of the aspects of it is, indeed, the blurring of the boundaries of reality and knowing what is real and otherwise. And so I think this still comes back to—we do know that we exist.
There still is a little bit of the “I think, therefore I am,” as Descartes declared, where we still feel that’s valid. And beyond that, all the boundaries of who we are as people, individuals, who we are as humanity, are starting to become a lot less clear than they have been.
Bruce: And it will get less clear, I think, before it gets clearer.
Ross: So thanks, Bruce, for your time and your perspectives. I enjoyed the conversation.
Bruce: Thank you, Ross. I appreciate your time, and I enjoyed it also.

May 7, 2025 • 33min
Nisha Talagala on the four Cs of AI literacy, vibe coding, critical thinking about AI, and teaching AI fundamentals (AC Ep2)
“The floor is rising really fast. So if you’re not ready to raise the ceiling, you’re going to have a problem.”
– Nisha Talagala
About Nisha Talagala
Nisha Talagala is the CEO and Co-Founder of AIClub, which drives AI literacy for people of all ages. Previously, she co-founded ParallelM, where she shaped the field of MLOps, with other roles including Lead Architect at Fusion-io and CTO at Gear6. She is the co-author of Fundamentals of Artificial Intelligence – the first AI textbook for Middle School and High School students.
Website:
Nisha Talagala
Nisha Talagala
LinkedIn Profile:
Nisha Talagala
What you will learn
Understanding the four C’s of AI literacy
How AI moved from winter to wildfire
Teaching kids to build their own AI from scratch
Why professionals must raise their ceiling
The role of curiosity in using generative tools
Navigating context and motivation behind AI models
Embracing creativity as a key to future readiness
Episode Resources
People
Andrej Karpathy
Organizations & Companies
AIClub
AIClubPro
Technical Terms
AI
Artificial General Intelligence
ChatGPT
GPT-1
GPT-2
GPT
Neural network
Loss function
Foundation models
AI life cycle
Crowdsourced data
Training data
Iteration
Chatbot
Dark patterns
Transcript
Ross Dawson: Nisha, it’s a delight to have you on the show.
Nisha Talagala: Thank you. Happy to be here. Thanks for having me.
Ross: So you’ve been delving deep, deep, deep into AI for a very long time now, and I would love to hear, just to start, your reflections on where AI is today, and particularly in relation to humans.
Nisha: Okay, absolutely. So I think that AI has been around for a very long time. And there was a long period which was actually called the AI winter, when very few people were working on AI—only the true believers, really.
And then a few things kind of happened. One of them was that the power of computers became so much greater, which was really needed for AI. And then the data also, with the internet and our ability to store and track all of this stuff, the data also became really plentiful.
So when the compute met the data, and then people started developing software and sharing it, that created kind of like a perfect storm, if you will. That enabled people to really see that AI could do things. Previously, AI experiments were very small, and now suddenly companies like Google could run really big AI experiments.
And often what happened is that they saw that it worked before they truly knew why it worked. So this entire field of AI kind of evolved, which is, “Hey, it works. We don’t actually know why. Let’s try it again and see if it works some more,” kind of thing.
So that has been going on now for about a decade. And so, AI has been all around you for quite a long time.
And then came ChatGPT. And not everyone knows, but ChatGPT is actually not the first version of GPT. GPT-1 and GPT-2 were pretty good. They were just very hard to use for someone who wasn’t very technical.
And so, one thing is, it was a little bit like Jeopardy: you had to ask your question in the form of an incomplete sentence, which is kind of fun in the Jeopardy sort of way. But normally, we don't talk to people in incomplete sentences hoping that they'll finish the sentence and give us something we want to know.
So ChatGPT just made it so much easier to use, and then suddenly, I think it just kind of burst on the mainstream. And that, again, fed on itself: more data, more compute, more excitement—going to the point that the last few years have really seen a level of advancement that is truly unprecedented, even in the past history of AI, which is almost already pretty unprecedented.
So where is it going? People talk a lot about AGI and generalized intelligence and surpassing humans and things like that. I think that's a difficult question, and I'm not sure we'll ever know whether it's been reached, or that we'd agree on the definition, to then agree whether it's been reached or not.
There are other milestones, though. For example, standardized testing has already been taken over by AI. AIs outperform on just about every level of standardized test, whether it's a college test or a professional test like the US medical licensing exam, where AI is already outperforming most US doctors. And it's scoring well on tests of knowledge as well.
And AI is also making headway in areas that were traditionally considered challenging for it, like mathematics and reasoning.
So I think you’re dealing with a place where, what I can tell you is that the AIs that I see right now in the public sphere rival the ability of PhD students I’ve worked with.
So it’s serious. And I think it’s a really interesting question of—I think the future that I see is that we have to really be prepared for tools that are as capable, if not in some areas more capable than we are. And then figure out: What is the problem that we are trying to solve in that space? And how do we work collaboratively with the tools?
I think picking a fight with the tools is unwise.
Ross: Yeah, yeah. And I guess my broader view is that the intent of creating AI with humans as the reference point was always misguided. That is to say: all right, we want to create intelligence; well, the only intelligence we know is human, so let's try to mimic that and replicate what it does as much as possible.
But this goes to the point, as you mentioned, of augmentation, where on one level, we can say, all right, we can compare humans versus AI on particular tests or so on. But there are, of course, a multitude of ways in which AIs can augment humans in their capabilities—cognitive and intellectual and otherwise.
So where are you seeing the biggest potentials in augmenting intelligence or cognition or thinking or positive intent?
Nisha: Absolutely. So I think, honestly, the examples sort of—I feel like if you look for them, they’re kind of everywhere.
So, for example, just yesterday—or the day before yesterday—I wrote an article about vibe coding. Vibe coding is a term coined by Andrej Karpathy, which is essentially the way he codes now. And he’s a very famous person who, obviously, is a master coder. So he has alternatives—lots of ways that he could choose to write code.
And his basic point is that now he talks to the machine, and he basically tells it what he wants. Then it presents him with something. And then he says, “I like it. Change this, change that, keep going,” right?
And I definitely use that model in my own programming, and it works really well.
So really, it comes down to: you have something to offer. You know what to build. You know when you don’t like something, right? You have ideas. This is the machine that helps you express them, and so on and so forth.
So if you do that, that's a very good way of working augmented. You're creating something, and sometimes, when you see a lot of options presented to you, you're able to create something better just because you can see it. Like, “Oh, it didn't take me three weeks to create one. Suddenly I have fifteen, and now I have more cycles to think about which one I like and why.”
So that’s one example—just of creation collaboratively.
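For readers who want to try the loop Karpathy describes, here is a minimal sketch in Python, assuming the openai package with an API key in the environment. The model name is an illustrative choice, and this is a generic chat loop, not Karpathy's actual setup.

```python
# Hedged sketch of a vibe-coding loop: describe what you want, read the
# draft the model presents, then reply with "change this, keep going".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [{"role": "system", "content": "You are a coding assistant. Reply with code."}]

request = input("What do you want to build? ")
while request.strip():
    messages.append({"role": "user", "content": request})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    draft = reply.choices[0].message.content
    print(draft)  # the model presents you with something
    messages.append({"role": "assistant", "content": draft})
    request = input("Feedback (empty line to accept): ")  # e.g. "I like it. Change this..."
```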
Examples in medicine just abound. The ability to explore molecules, explore fits, find new candidates for drugs—it’s just unbelievable.
I think in the next decade, we will see advancements in medicine that we cannot even imagine right now, just because of that ability to really formulate a problem, give a machine a task, have it come back, and then you iterate on it.
And so I think if we can just tap humans into that cycle and make that transition—so that we can kind of see a bigger problem—then I think there’s a lot of opportunity.
Ross: So that leads us to the next thing. The core of your work is around AI literacy and learning. And it goes to the question: AI is extraordinarily competent in many domains, and it can augment us.
So what is—what are the foundational skills or knowledge that we require in this world? Do we need to understand the underlying architectures of AI? What do we need to understand—how to engage with generative AI tools?
What are the layers of AI literacy that really are going to be important in coming years?
Nisha: Very good question. So I can tell you that kind of early on in our work, we defined AI literacy as what we call the four C’s. We call them concepts, context, capability, and creativity.
Ross: Sorry, could you repeat this?
Nisha: Yes—concepts, context, capability, and creativity.
Ross: Awesome.
Nisha: So, concept is: you really should know something about the way these tools are created. Because as delightful as they are, they are not perfect. And a good user, one who's going to have a good experience with it, is going to be able to pick where and how to interact with it in ways that are positive and productive, and also be able to pick out issues, and so forth.
And so what I mean by concept is: the reliance of AI on data and being able to ask critical questions. “Okay, I’m dealing with an AI. Where did it get its data? Who built it? What was their motivation?”
Like these days, AIs are so complex that what I tell my students is: you don’t know what it’s trying to do. What is its goal? It’s sitting there talking to you. You didn’t pay for it—so what is it trying to accomplish?
And the easiest way to find out is: figure out who paid for it and figure out what it is they want. And that is what the AI is trying to accomplish. Sometimes it’s to engage you. Sometimes it’s to get information from you. Sometimes it’s to provide you with a service so that you will pay, in which case the quality of its service to you will matter, and such like that.
But it’s really important, when you’re dealing with a computer or any kind of service, that you understand the motivations for it. What is it being optimized for? What is it being measured on? And so forth.
So there’s kind of concepts like that—about how these tools are created. That does not mean everyone has to understand the nuances of how a neural network gets trained, or what it means to have a loss function, or all these things. That’s suitable for some people, but not necessarily for everyone.
But everyone should have some conceptual understanding.
Then context.
Ross: I was just going to say, that's interesting regarding dark patterns. A paper on dark patterns in AI came out last week, I think, and one of the domains was sycophancy, where, essentially, as you suggest, AI can say, “You're wonderful” in all sorts of guises, which, among other things, makes you like it or use it more.
Nisha: Oh yes, they definitely have. They definitely want you to keep coming back, right?
You suddenly see that. And it’s funny, because I was having some sort of an interaction with—I’m not gonna name which company wrote the model—and it said something like, “Yeah, we have to deal with this.” And I’m like, there’s no we here. It’s just me. When did we become we?
You’re just trying just a little too hard to get on my good side here. So I just kind of noticed that. I’m like, not so good.
But so concepts, to me, effectively means understanding the fundamental ways these programs are built: how they rely on data, what it means for an AI to have a brain. And then the depth depends entirely on the domain.
Context, for me, is really the fact that these things are all around us, and therefore you truly do want to know that they are behind some of the tooling that you use, and understand how your information is shared, and so forth.
Because there’s a lot of personal decisions to be made here, and there are no right answers. But you should feel like you have the knowledge and the agency to make your own choices about how to handle tools.
So that’s what I mean by context. It’s particularly important for young people to appreciate—context.
Ross: And I think for professionals as well, because their context is, you know, making decisions in complex situations. And if they don't really appreciate the context—and the context of the AI—then that's not a good thing.
Nisha: Absolutely.
And then capability—really, it varies very much on domain. But capability is really about: are you going to be able to function, right? Are you going to be able to do a project using these tools? Or do you need to build a tool? Do you need to merge the tools? Do you need to create your own tools?
So in our case, for young people, for example—because they don’t have a domain yet—we actually teach them how to build AI from scratch. So one of the very common things that we do is: almost in every class, starting from third grade, they build an AI in their first class completely from scratch. And they train it with their own data, and they see for themselves how its opinions change with the information they give it.
And that’s a very powerful exercise because—so what I typically ask students after that exercise is, I ask them two questions.
First question is: did it ever ask you if what you were teaching it was true? And the answer is always, no. You can teach it anything, and it will believe you. Because they keep teaching it information, and children being children, will find all sorts of hilarious things to teach a machine, right?
And then—but then—they realize, oh, truth is not actually a part of this.
And then the next question, which is really important, is: so what is your responsibility in this whole thing?
Your responsibility is to guide the machine to do the right thing, because you already figured out it will do anything you ask.
Ross: That’s really powerful. Can you tell me a little bit more about precisely how that works, and when you say, getting them to build their own AI?
Nisha: So we have built a tool. It’s called Navigator, and it’s effectively a web-based front end to industry standard tools like TensorFlow and scikit-learn. And it runs on the cloud.
Then we give each of our students accounts on it, and depending on how we do it, they can either—anonymized accounts, whatever we need to protect their privacy. At large-scale installations with schools, for example, it’s always anonymous.
Then what happens is they go in, and they’re taken through the steps of building an AI. We give them a few datasets that are kid-friendly. So one other thing to remember when you’re teaching young people is a lot of the data that’s out there is not friendly to young people, so we maintain a massive repository of kid-friendly datasets.
A very common case that they run is a dataset that we crowdsourced from children: sentences about happiness and sadness. So a child's view—chocolate might be happy, broccoli might be sad, things like that. Nothing too sad; things children can relate to.
So they start teaching about happy and sad. And one of the first things that they notice is—those of them that have written programs before—this is kind of hard to write a program for.
What word would you be looking for? There’s so many words. Like, I can’t use just the word happy. I might say, “I feel great.” I didn’t use the word happy, but I’m clearly happy. So they’re like, “Oh, so there’s something here—more than just looking for words. You have to find a pattern somehow.”
And if you give it enough examples, a pattern kind of emerges. So then they train the AI—it takes about five minutes. They actually load up the data, they train an AI, they deploy it in the cloud, and it presents itself as a little chatbot, if you will, that they can type in some sentences and ask it whether it thinks they’re happy or sad.
And when it’s wrong, they’re like, “Oh, it’s wrong now.” Then there’s a button they can press that says, “I don’t think you’re right.” And then it basically says, “Oh, interesting. I will learn some more.”
They can even teach it new emotions. So they teach it things like, “I’m hungry,” “I’m sleepy,” “I’m angry,” whatever it is. And it will basically pick up new categories and learn new stuff.
So after the first five minutes, when they interact with it—within about 15 minutes—every child has their own entire, unique AI that reflects whatever emotions they chose to teach and whatever perspective.
So if you want to teach the AI that your little brother is the source of all evil, then it will do that. And stuff like that.
And then after a while, they’re like, “Oh, I know how this was created. I can see its brain change.” And now you can ask it questions about what does this even mean when we have these programs.
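As a concrete companion to Nisha's description, here is a minimal sketch of that kind of happy/sad classifier using scikit-learn, one of the libraries she says Navigator wraps. The tiny dataset and model choice are illustrative assumptions, not AIClub's actual curriculum or data.

```python
# Hedged sketch: train a tiny happy/sad sentence classifier from
# labeled examples, then query it, mirroring the Navigator exercise.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "I love chocolate", "We played all day", "I feel great",
    "I have to eat broccoli", "I lost my toy", "I feel terrible",
]
labels = ["happy", "happy", "happy", "sad", "sad", "sad"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, labels)

# Shares words with the happy examples, so it should come out "happy".
print(model.predict(["I feel great about today"]))
# "Teaching" a correction, or a brand-new emotion such as "angry", is
# just another fit() call with the extended sentences and labels.
```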
Ross: That is so good.
Nisha: So that’s what I mean. And it has a wonderful reaction in that it takes away a lot of the—it makes it tangible. Takes away a lot of the fear that this is some strange thing. “I don’t know how it was made.”
“I made it. I converted it into what it is. Now I understand my agency and my responsibility in this situation.”
So that's capability, and there's also an element of creativity, because in every single one of our projects, even at third grade, we encourage a creative use of their own choosing.
So when the children are very young, they might teach an AI to learn all about an animal that they care about, like a rabbit. In middle school, they might be looking more at weather and pricing and stuff like that.
In high school, they’re doing essentially state-of-the-art research. At this point, we have a massive number of high school students who are professionally published. They go into conferences and they speak next to PhDs and professors and others, and their work is every bit as good and was peer-reviewed and got in entirely on merit.
And that, I think, tells me what is possible, right? Because part of it is that when the tools get more powerful, then the human brain can do more things. And the sooner you put—
And the beautiful thing about teaching K–12 is they are almost fearless. They have a tremendous amount of imagination. They start getting a little scared around ninth grade, when self-consciousness kicks in: “Oh, maybe I can't do this. Maybe this isn't cool. I'm going to be embarrassed in front of my friends.”
But before that, they’re almost entirely fearless. They have fierce imagination, and they don’t really think anything cannot be done. So you get a tool in front of them, and they do all sorts of nifty things.
So then I assume these kids, I’m hoping, will grow up to be adults who really can be looking at larger problems, because they know the tools can handle the simpler things.
Ross: That is, that is wonderful. So this is a good time just to pull back to the big picture of your initiatives and what you’re doing, and how all of these programs are being put into the world?
Nisha: Yeah, absolutely. So we do it in a number of different ways.
Of course, we offer a lot of programs on our own. We engage directly with families and students. We also provide curriculums and content for schools and organizations, including nonprofits. We provide teacher training for people who want to launch their own programs.
We have a professional training program, which is essentially—we work with both companies and individuals. In our companies, it’s basically like they run a series of programs of their choosing through us. We work both individually with the people in the company—sometimes in a more consultative manner—as well as providing training for various employees, whether they’re product managers, engineers, executives. We kind of do different things.
And then individuals—there are many individuals who are trying to chart a path from where they are to where—first of all, where should they be, and then, how can they get there? So we have those as well.
So we actually do it kind of in all forms, but we also have a massive content base that we provide to people who want to teach as well.
Ross: And so what’s your geographical scope, primarily?
Nisha: So we’re actually worldwide. The company—we started out in California. We went remote due to COVID, and we also then started up an office in Asia around that time. So now we’re entirely remote—everywhere in the world.
We have employees primarily in the US and India and in Sri Lanka, and we have a couple of scattered employees in Europe and elsewhere. And then most of our clients come from either the US or Asia. And then it’s a very small amount in Europe. So that’s kind of where our sweet spots are.
Ross: Well, I do hope your geographical scope continues to increase. These are wonderful initiatives.
Nisha: Thank you.
Ross: So just taking that a step further—I mean, this is obviously just this wonderful platform for understanding AI and its role in having development capabilities.
But now looking forward to the next five or ten years—what are the ways in which, for example, people who have not yet exposed themselves to that, what are the fundamental capability sets in relation to work?
So, I mean, part of this is, of course, people may be applying their capabilities directly in the AI space or technology. But now, across the broader domain of life, work—across everything—what are the fundamental capabilities we need?
I mean, building on this understanding of the layers of AI, as you’ve laid out?
Nisha: Yeah, so I think that, if we follow the four C's model, a general, high-level understanding of how AI works is helpful for everyone.
And I mean, you know, and I mean things like, for example, the relationship between AI and data, right? How do AI models get created?
One of the things I've learned in my career is that there's such a thing as an AI life cycle: how does an AI get built? And even though there are literally thousands of different kinds of AI, the life cycle isn't that different. There's this relationship between the data, the models, the testing, the iteration.
It's really helpful to know that, because that way, when new versions come out, you understand what happened, what you can expect, and how information and learning filter through.
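To make that life cycle concrete, here is a minimal, runnable sketch using scikit-learn: the same data, model, testing, iteration loop, here iterating over a single hyperparameter. The dataset and model are illustrative choices, not anything prescribed in the episode.

```python
# Hedged sketch of the generic AI life cycle: data -> model -> testing -> iteration.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                                        # data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

best_score = 0.0
for c in (0.01, 0.1, 1.0):                                                 # iteration
    model = LogisticRegression(C=c, max_iter=5000).fit(X_train, y_train)   # model
    best_score = max(best_score, accuracy_score(y_test, model.predict(X_test)))  # testing
print(f"best held-out accuracy: {best_score:.3f}")
```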
You know, context is very critical; it's about just being aware. And these days, context is honestly not that complicated. Just assume everything that you interact with has an AI in it. It doesn't matter how small it is, because that's mostly, unfortunately, true.
The capability one is interesting. What I would suggest for the most broad-based audience is—really, it is a good idea to start learning how to use these foundation models. So I’m talking about the—you know—these models that are technically supposed to be good at everything.
And one of the things—the one thing I’ve kind of noticed, dealing with particularly professionals, is—sometimes they don’t realize the tool can do something because it never occurred to them to ask, right?
It’s one of those, like—if somebody showed you how to use the tool to, you know, improve your emails, right? You know the tool can do that.
But then you come along and you’re looking for, I don’t know, a recipe to make cookies. Never occurs to you that maybe the tool has an opinion on recipes for cookies. Or it might be something more interesting like, “Well, I just burned a cookie. Now, what can I do? What are my options? I’ve got burnt cookies. Should I throw out the burnt cookies? Should I, you know, make a pie out of them?”
Whatever it is, you know. But you can always drop the thing and say, “Hey, I burnt a cookie. Burned cookies.” And then it will probably come back and say, “Okay, what kind of cookies did you burn? How bad did you burn them?” You know, and this and that. “And here are 10 things you can do with them.”
So I think the simplest thing is: just ask. The worst thing it’ll do is, you know, it will come back with a bad answer. And you will know it’s a bad answer because it will be dumb.
So some of it is just kind of getting used to this idea that it really might actually take a shot at doing anything. And it may have kind of a B grade in almost anything—any task you give it.
So that's a mental shift that I think people need to get used to making. And then after that, I think whatever they need to know will naturally evolve.
Then from a professional standpoint, I think—I kind of call it surfing the wave. So sometimes people would come to me and say, “Hey, you know, I’m so behind. I don’t even know where to begin.”
And what I tell them is: the good news is, whatever it is that you forgot to look up is already obsolete. Don’t worry about it. It’s totally gone. You know, it doesn’t matter. You know, whatever’s there today is the only thing that matters. You know, whatever you missed in the last year—nobody remembers it anymore anyway.
So just go out there. Like, one simple thing that I do is—if you use, like, social media and such—you can tailor your social media feed to give you AI inputs, like news alerts, right, or stuff that’s relevant to you.
And it’s a good idea to have a feel for: what are the tools that are appropriate in your domain? What are other people thinking about the tools?
Then just, you know, pick and choose your poison.
If you’re a professional working for a company—definitely understand the privacy concerns, the legal implications. Do not bring a tool into your domain without checking what your company’s opinions are.
If the company has no opinions—be extra careful, because they don't know what they don't know. So there's a concern about that.
But, you know, just be normal. Like, just think of the tool like a stranger. If you’re going to bring them into the house, then, you know, use your common sense.
Ross: Well, which goes to the point of attitude. Part of it is: how do we inculcate that attitude of curiosity and exploration and trying things, as opposed to having to go into a classroom and take a class before you know what to do?
You have to find your own path and learn by doing. But that takes us to that fourth C of creativity, where, obviously, you need to be creative in how you try to use the tools and see what you learn from that.
But also, it goes back to this idea of augmenting creativity. And so, we need to be creative in how we use the tools, but also there are ways where we can hopefully create this feedback loop, where the AI can help us augment or expand our creativity without us outsourcing to it.
Nisha: Absolutely.
And I think part of this is also recognizing that—here’s the problem. If you’re—particularly if you’re a professional—this is less an issue for students because their world is not defined yet. But if you’re a professional, there is a ceiling of some kind in your mind, like “this is what I’m supposed to do,” right?
And the floor is wherever you’re standing right now. And your value is in the middle. The floor is rising really fast. So if you’re not ready to raise the ceiling, you’re going to have a problem.
So it’s kind of one of those things that is not just about the AI. You have to really have a mental shift—that I have to be looking for bigger things to do. Because if you’re not looking for bigger things to do, unfortunately, AI will catch up to whatever you’re doing. It’s only a matter of time.
So if you don't look for bigger things—well, that's why fields like medicine are flourishing: because there are so many bigger problems out there.
And so, some of it is also looking at your job and saying, “Okay, is this an organization where I can grow? So if I learn how to use the AI, and I’m suddenly 10x more efficient at my job, and I have nothing left to do—will they give me more stuff to do?”
If they don’t, then I think you might have a problem.
And so forth. So it’s one of those—you have to find—there’s always a gap. Because, look, we’re a tiny little planet in the middle of a massive universe that we don’t know the first thing about. And as far as we know, we haven’t seen anyone else.
There are bigger problems. There are way, way bigger problems. It’s a question of whether we’ve mapped them.
Ross: Yeah, we always need perspective.
So looking forward—I mean, you’re already, of course, having a massive positive impact through what you are doing—but if you’re thinking about, let’s say, the next five years, since that’s already pretty much beyond what we can predict, what are the things that we need to be doing to shape a better future for humans in a world where AI exists, has extraordinary capabilities, and is progressing fast?
Nisha: I think really, this is why I focus so much on AI literacy.
I think AI literacy is critical for every single human on the planet, regardless of their age or their focus area in life. Because it's the beginning. It's moving away from the fear and getting to the point of understanding just enough.
And also understanding that this is not a case where everyone in the world is supposed to become a PhD in mathematics. That's not what I mean at all.
I mean being able to realize that the tool is here to stay. It’s going to get better really fast. And you need to find a way to adapt your life into it, or adapt it into you, or whichever way you want to do it.
And so if you don’t do that, then it really is not a good situation.
So I think that’s where I put a lot of my focus—on creating AI literacy programs across as many different dimensions as I can, and providing—
Ross: With an emphasis on school?
Nisha: So we have a lot of emphasis on schools and professionals. And recently, we are now expanding also to essentially college students who are right in the middle tier.
Because college students are in a very interesting situation: the job market is changing very, very rapidly because of AI, so they will probably be the first ones to see the bleeding edge. In some ways, professionals already have jobs, whereas students who are still a few years from graduating have time to digest the change.
It’s this year’s and next year’s college graduates who will really feel the onslaught of the change, because they will be going out in the job market for the first time with a set of skills that were planned for them before this happened.
So we do focus very much on helping that group figure out how to become useful to the corporate world.
Ross: So how can people find out more about your work and these programs and initiatives?
Nisha: Yeah, so we have two websites.
Our website for K–12 education is aiclub.world. Our website for professionals and college students—and very much all adults—is aiclubpro.world. So you can look there and you can see the different kinds of things we offer.
Ross: Sorry, could you repeat the second URL?
Nisha: It’s aiclubpro.world.
Ross: aiclubpro.world. Got it.
That's fantastic. So thank you so much for your time today, and also for the wonderful initiative. This is so important, and you're doing a marvelous job at it. So thank you.
Nisha: Really appreciate it. Thank you for having me.
The post Nisha Talagala on the four Cs of AI literacy, vibe coding, critical thinking about AI, and teaching AI fundamentals (AC Ep2) appeared first on Humans + AI.

Apr 30, 2025 • 13min
HAI Launch episode
“This is about how we need to grow and develop our individual cognition as a complement to AI.”
– Ross Dawson
About Ross Dawson
Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of five books, most recently Thriving on Overload.
Website:
Ross Dawson
Advanced Human Technologies
LinkedIn Profile:
Ross Dawson
Books
Thriving on Overload
Living Networks 20th Anniversary Edition
Living Networks
Implementing Enterprise 2.0
Developing Knowledge-Based Client Relationships: Leadership in Professional Services
Developing Knowledge-Based Client Relationships, The Future of Professional Services
Developing Knowledge-Based Client Relationships
What you will learn
Tracing the evolution of the podcast name and vision
How ChatGPT shifted the AI conversation overnight
Why humans plus AI is more than just a rebrand
The mission to amplify human cognition through AI
Exploring collective intelligence and team dynamics
Rethinking work, strategy, and value creation with AI
Envisioning a co-evolved future for humans and machines
Episode Resources
Books
Thriving on Overload
Technologies & Technical Terms
AI agents
Artificial intelligence
Intelligence amplification
Cognitive evolution
Collective intelligence
Strategic thinking
Strategic decision-making
Value creation
Organizational structures
Transhumanism
AI governance
Existential risk
Critical thinking
Attention
Awareness
Skill development
Transcript
Ross Dawson: This is the launch episode of the Humans Plus AI podcast, formerly the Amplifying Cognition podcast, and before that, the Thriving on Overload podcast.
So in this brief episode, I will cover a bit of the backstory and how we got to where we are today, calling this Humans Plus AI now—why I think it is so important, what we are going to cover, and framing a little this idea of Humans Plus AI.
So the backstory is that the podcast started off as Thriving on Overload. It was the interviews I did for my book Thriving on Overload. The book came out in September 2022.
After that, I continued with the Thriving on Overload podcast, continuing to explore this idea of how we can amplify our thinking in a world of unlimited information. Essentially, our brains are finite, but in a world of infinite information, we need to learn the skills and the capabilities to be as effective as possible.
And COVID—we’ll come back to that—but that is a fundamental issue today, which is the reason I wrote the book.
Just three months after the book came out came what I call the ChatGPT moment—a crystallization of progress in AI where I think just about every researcher and person who'd been in the AI space was surprised or even amazed by the leap in capabilities we saw with that model—and of course, so much more since then.
So I quickly wanted to consolidate my thinking, and immediately hit on this phrase Humans Plus AI, which reflects a lot of my work over the years.
I have been literally writing about AI, the role of AI agents, and particularly AI and work—for, well, in some ways, a couple of decades. But this was a moment where I felt I had to bring all of my work together.
So fairly soon, I decided I needed to rebrand the podcast to be not just Thriving on Overload. But I was still tied to that theme. So I decided, let's make this Amplifying Cognition, trying to find a middle ground that integrated the ideas of Humans Plus AI.
How could humans and AI together be as wonderful as possible, but also this idea of Thriving on Overload—this individual cognition—how do we amplify our possibilities?
There was a long list of different names that I was playing with, and one of the other front runners was, in fact, Amplifying Humanity.
And in a way, that’s really what my mission is all about. And what all of these podcasts—the podcast and its various names—is about: how do we amplify who we are, our capabilities, our potential?
Of course, the name Amplifying Humanity sounds a bit diffused. It’s not very clear. So it wasn’t the right name. Or not—there was certainly no right title at the time.
But now, when I take this and say, well, we’re going to call this Humans Plus AI, in a way, I think that the Thriving on Overload piece of that is still as relevant—or even more relevant. That is part of the picture as we bring humans and AI together.
This is about how we need to grow and develop our individual cognition as a complement to AI.
So in fact, when I talk about Humans Plus AI, Thriving on Overload and Amplifying Cognition are really baked into that idea.
So the broad frame of Humans Plus AI is simply: we have humans. We are inventors. We have created extraordinary technologies for many years, and the culmination of that at this point is something that is analogous to our own intelligence and cognitive capabilities.
So this could be seen as challenging, and I think there are, of course, many things that we have to navigate through this. But it is also very much about: what could we do together—the originator, the creator, which is us, and that which we have created? We need to find how these together can be integrated, can be complementary, and can create more possibilities than ever before.
There are many earlier thinkers—prominently Doug Engelbart—who talked about intelligence amplification. And again, that's really what AI should be about: amplifying our capabilities and possibilities.
There are, of course, many, many risks and challenges with AI, including in governance—conceivably existential risk—in terms of all sorts of ethical issues that we need to address. And I think it’s wonderful there are many people focusing on that.
My particular mission is to be as positive as possible: not to dwell on the negatives, whilst acknowledging and understanding those, but to look at what could be possible—who we could become in terms of our capabilities as well as our humanity—and to move forward, trying to provide some kind of beacon or light to look to in this positive vision for what is possible from humans and AI together.
So this starts with the individual, where we can use AI to develop our skills and our capabilities, and we need skills to be able to use it well. We want to cover some of the attitudes, what education is required, and what tools we can use, but also look at other ways to augment ourselves which aren't necessarily tied to technology.
That still comes back to issues such as awareness, attention, critical thinking—all the things that will keep us the best possible complements to the technologies.
In organizations, there’s many potentials for organizations to reshape, to reform, and bring together humans and AI. Looking at how teams form, looking at ideas of collective intelligence—which, of course, the podcast has looked at for a long time.
To look at the impact of AI, particularly in professional services, the impact of AI on business models and value creation and new organizational structures. And while many people talk about the one-person billion-dollar company, that’s interesting—what’s more interesting is how you get a group of people, small or large, complemented by AI, to create more value than ever before.
This also will look at strategic thinking. So I’ve been focusing very much on AI and strategic decision-making. AI for strategy. Also looking at AI and investment processes. How do we use AI to allocate capital better than ever before, making sure that we are making the right decisions?
So one of the core themes of the podcast will be using these—AI for strategy, strategic thinking, investment—sort of the bigger picture thinking, and being quite specific around that: the approaches, the tactics, the strategies, the techniques whereby everyone from individual entrepreneurs to boards to organizations can be more effective.
We will certainly be delving into work and the nature of how work evolves with both humans and AI involved—what are the structures for how that can happen effectively, what are the capabilities required, how we will see that evolution, and what are some of the structures for sharing value amongst people.
And looking at this bigger, broader level of society—this cognitive evolution. How will our cognition evolve? What is the co-evolution of humans and AI? How can we build effective collective intelligence at a species level? How can we indeed build collective wisdom?
How can AI support us in being wiser and being able to shape better pathways for ourselves, for communities, for nations, for societies, for humanity?
And also looking at the future—what is the future of intelligence? What is the future of humanity? What is the future of what comes beyond this?
And just the reality—of course, we are moving closer to a transhuman world, where we are going beyond what we have been as humans to who we will be, not least through being complemented by AI.
So that’s some of the many themes that we’ll be exploring. All of them fascinating, deeply important, where this is all the frontiers—where there are no guidelines, there are no established practices and books and things that we can look at. This is being created as we go.
So this is a forum where we will try as much as possible to uncover and to share the best of the thinking and the ideas that are happening in the world in creating the best positive potential from humans and AI together.
So if you want to keep on listening to some of these wonderful conversations I’m having, then please make sure to subscribe to the podcast. Love to hear any feedback you have.
One way is LinkedIn, where I spend most of my online time—my own personal profile. Or there is the LinkedIn page, which we're just renaming from Amplifying Cognition to Humans Plus AI.
If you really want to engage, then please join the community. There will always be free sections of the community. In fact, all of it is still free for now, and you’ll find like-minded people.
If you find any interest at all in these topics, you’ll find lots of other people who are delving deep with lots to share.
So thank you for listening. Thank you for being part of this journey. I think this is a very, very exciting time to be alive, and if we focus on the positive potential, we have a chance of creating a—
So thank you for being part of the journey. Catch you somewhere along the way.
The post HAI Launch episode appeared first on Humans + AI.

Apr 23, 2025 • 34min
Kunal Gupta on the impact of AI on everything and its potential for overcoming barriers, health, learning, and far more (AC Ep86)
“Maybe the goal isn’t to eliminate the task or the human—but to reduce the frustration, the cognitive load, the overhead. That’s where AI shines.”
– Kunal Gupta
About Kunal Gupta
Kunal Gupta is an entrepreneur, investor, and author. He founded and scaled global digital advertising AI company Nova as Chief Everything Officer for 15 years, with teams and clients across 30+ countries. He is the author of four books, most recently 2034: How AI Changed Humanity Forever.
Website:
Kunal Gupta
LinkedIn Profile:
Kunal Gupta
Book:
2034: How AI Changed Humanity Forever
What you will learn
Hosting secret AI dinners to spark human insight
Using personal data to take control of health
Why cognitive load is the real bottleneck
When AI becomes a verb, not just a tool
Reducing frustration through everyday AI
The widening gap between AI capabilities and adoption
Empowering curiosity in an AI-shaped world
Episode Resources
Books
2034: How AI Changed Humanity Forever
Technical Terms & Concepts
AI
AI literacy
Agentic AI
Cognitive load
LLMs (Large Language Models)
Reference ranges
Automation
Browser agents
Voice agents
Data normalization
Longevity-based testing
Health data
Cloud computing
Social media adoption
Generative AI
Transcript
Ross Dawson: Kunal, it is awesome to have you on the show.
Kunal Gupta: Thanks, Ross. Nice to see you.
Ross: So you came out with a book called 2034: How AI Changed Humanity Forever, and I'd love to hear the backstory. Yes, that's the book. So how did this book come about?
Kunal: Yeah, I’ve written a few books, but this is definitely the most fun to write and to read and reread, and at some points, to rewrite.
So back in November 2022, ChatGPT launches. There’s this view—okay, this is going to change our world, not sure how. So in the ensuing months, I had a number of conversations with friends and colleagues asking, “Hey, like, how does this change everything?” I asked people very open-ended questions, and the responses were all over the place.
To me, what I realized was we actually just don’t know, and that’s the best place to be—when we don’t know but are curious.
So I started to host dinners, six to ten people at a time in my apartment. I was in Portugal at the time, and London as well. Over the course of 2023, I hosted over 250 people over a couple dozen dinners.
The setup was really unique in that nobody knew who else was coming. Nobody was allowed to talk about work, nobody was allowed to share what they did, and no phones were allowed either. So that meant really everybody was present. They didn’t need to be anybody, they didn’t need to be anywhere, and they could really open up.
All of the conversations were recorded. All the questions were very open-ended along the lines of—really the subtitle of the book—like, how does AI change humanity? And we got into all sorts of different places.
So over the course of the dinners that year, we recorded everything, had it transcribed, and, working with an editor, manually went through the transcripts and identified about 100 individual ideas, each of which came out of a human—usually some idea or inspiration, or some fear or insecurity.
And we turned that into a book with 100 different ideas, ten years into the future, of how AI might shape how we live, how we work, how we date, how we eat, how we walk, how we learn, how we earn—and absolutely everything about humanity.
Ross: So, I mean, there’s obviously far more in the book than we can cover in a short podcast, but what are some of the high-level perspectives? It’s been a bit of time since it’s come out, and people have had a chance to read it and give feedback, and you’ve reflected further on it.
So what are some of the emergent thinking from you since the book has come out?
Kunal: Yeah, I probably hear from a reader or two daily now, sharing lots of feedback. But the most common feedback I hear is that the book has helped change the way they think about AI, and that it’s helped them just think more openly about it and more openly about the possibilities.
And that’s where introducing over 100 ideas across different aspects of society and humanity and industries and age groups and demographics is really meant to help open up the mind.
I think in the face of AI, a lot of parts of society were closed or resistant to its potential impacts, or even fearful. And the book is really designed to open up the mind and drop some of the fear and really to be curious about what might happen.
Ross: So taking this—taking sort of my perennial “humans plus AI” frame—what are some of the things that come to mind for you in terms of the potential of humans plus AI? What springs to mind first?
Kunal: Those that say yes and are open and curious about it—I really think it’s an accelerant in so many different parts of life.
I’ll give an example of AI being used in government. I gave the fictitious example of Tokyo electing the first AI mayor, and how that went and what the implications of that were. I gave examples in Europe of AI being used to reduce bureaucracy and streamline all the processes.
Government is an example of something that touches all of our lives in a very impactful way, and AI being used to help make better decisions—more objective decisions, decisions that aren’t tied to ego or a four-year cycle—I think could lead to better outcomes for the aggregate of any given society or country or city.
That’s one example.
Education is another clear example, in terms of how young people learn, but then also how old people learn. There are a couple of ideas around AI—this idea of AI literacy for not just young people, but also old people—and some interesting ways that comes to life.
So those are a few examples covering a spectrum of how AI and humans can come together.
Ross: So coming back to the present, the here and now. In what ways are you using AI to amplify what you're doing? Or where is your curiosity taking you?
Kunal: Absolutely everything. And my fiancée gets annoyed that I’m talking some days to ChatGPT more than I am to her. And we live together.
We call ChatGPT my friend, because it gets embarrassing to just say ChatGPT so much within a single day. So, “as I was talking to my friend,” “I was asking my friend,” etc.
There’s a few areas of my life that I’m very focused on these days. I’d say health is a big one, and optimizing my health, understanding my health, testing. So making sense of kind of my health data beyond the basic blood tests. I’ve done lots of longevity-based testing and take lots of supplements. So going deeper and geeking out on that has been a lot of fun.
Ross: So just digging into that. So do you collect data which you then analyze, or is this text-based, or is this using data to be able to feed into the systems?
Kunal: So my interest on health started probably four years ago. Had some minor health issues that triggered me to start to do a bunch of testing.
And then, being a tech guy, I got fascinated by the data I was starting to collect on my body. So now I have four years of very consistent blood work, gut health and sleep data, with all the fitness and sleep trackers, a smart scale, and lots, lots more.
So I'd say that's one part: I have years' worth of data. The second part I've found interesting, because I have so much data, is using my own data as the baseline rather than some population average, which reflects a different gene pool and a different geographic location.
So seeing just the changes in my data over time, and then using reference ranges as one comparison point has been helpful.
And then, I see lots of specialists for different health issues that I've dealt with over the years. And I have found AI, prompted the right way with the right data, to be as effective as, if not more effective than, the human specialists.
So I do walk into my specialist appointments now with a bunch of printouts, and I essentially fact-check what they tell me, oftentimes in real time, with ChatGPT and other AI tools. And that gives me a lot more confidence in the things I'm putting into my body and the things I'm doing to my body.
Ross: How do the doctors respond to that?
Kunal: I’m definitely unique in that sense—at least the specialists I see, they’re not used to it. I would say probably like three to five doctors lean in and ask me how did I collect it, and want copies of the printouts.
And two out of five are a little dismissive. And that’s not surprising, I guess.
Ross: There’s just this recent data showing—comparing the patient-perceived outcomes from doctors—where basically they perceive the quality of the advice from the AI to be a little bit better than the doctors, and the empathy way, way better than doctors.
Kunal: Yeah, yeah, I trust in my experience as well.
Ross: So, but now you’re uploading spreadsheets to the LLMs or other raw data?
Kunal: Spreadsheets and PDF reports. And that’s the annoying part, actually.
I’ve done a couple dozen different tests on different parts of my body and get reports in all these different formats. It’s all in PDFs from all these providers, and they give their own explanations using their own reference data. So it’s hard to make sense of it.
And I live between Australia and Portugal, so even a blood test in Europe versus blood tests in Australia—different metrics, different measurement systems, different reference ranges. So AI has helped me normalize the different formats of data.
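To make that normalization step concrete, here is a minimal sketch in Python of the kind of workflow Kunal describes: converting results reported in different units into one scheme, then comparing them against a personal baseline and a lab reference range. The marker values, baseline, and range are illustrative only, and while the glucose factor shown (mg/dL divided by roughly 18 gives mmol/L) is a standard conversion, any factor should be verified before being relied on.

```python
# Minimal sketch: normalize blood-test results reported in different units,
# then compare each against a personal baseline and a lab reference range.
# Markers, values, and ranges below are illustrative, not medical advice.

# Standard conversion: glucose in mg/dL divided by 18.0 gives mmol/L.
UNIT_CONVERSIONS = {
    ("glucose", "mg/dL", "mmol/L"): lambda v: v / 18.0,
    ("glucose", "mmol/L", "mmol/L"): lambda v: v,
}

def normalize(marker, value, unit, target_unit):
    """Convert one result into the target unit, or fail loudly."""
    key = (marker, unit, target_unit)
    if key not in UNIT_CONVERSIONS:
        raise ValueError(f"No conversion for {marker}: {unit} -> {target_unit}")
    return UNIT_CONVERSIONS[key](value)

# Results from two labs reporting in different measurement systems.
results = [
    {"marker": "glucose", "value": 5.1, "unit": "mmol/L"},   # Australian lab
    {"marker": "glucose", "value": 99.0, "unit": "mg/dL"},   # Portuguese lab
]

personal_baseline = {"glucose": 4.9}        # your own long-run average, mmol/L
reference_range = {"glucose": (3.9, 5.5)}   # one lab's range, in mmol/L

for r in results:
    v = normalize(r["marker"], r["value"], r["unit"], "mmol/L")
    low, high = reference_range[r["marker"]]
    status = "in range" if low <= v <= high else "out of range"
    delta = v - personal_baseline[r["marker"]]
    print(f"{r['marker']}: {v:.2f} mmol/L ({status}, {delta:+.2f} vs baseline)")
```

In practice an LLM can do the unit detection and conversion conversationally, as Kunal describes; a small script like this is simply a way to make the same comparison repeatable across years of reports.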
Ross: Yeah, but of course, you have to have that antenna: to put the data in, ask it to normalize, and then be able to get your baseline out of that.
Kunal: So I'd say the theme here—for the listeners or viewers—is feeling empowered. Health is a very sensitive topic, one where, when we have issues, we often feel helpless. And having that support has helped me feel more empowered and, frankly, more motivated to improve my health.
Ross: Yeah, well, just as a tiny, tiny example—my father went in for some tests a little while ago, and we got back the report. It was going to be interpreted by the specialist when he went to visit them a week or two later.
So I was actually able to get some sense of what this cryptic report meant, rather than waiting for the specialist to interpret it for us.
Kunal: Yeah, there’s so much anxiety that could exist in waiting, and the unknown. So even if the known is good or bad, just the known is helpful versus the unknown.
Ross: So in terms of cognition, or thinking, or creating, or ideation—or, I suppose, a lot of the essence of what you do as an entrepreneur and thinker and author—so what… So let’s get tactical here.
What are some of the lessons learned, and tools you use, or how you use them, or approaches which you’ve found particularly useful?
Kunal: I’ll give a very simple example that hopefully is relatable for many people. But it’s figured a much deeper reflection for me—realizing I need to think differently. And as an adult, it’s harder to change the way we think.
So for my partner’s father, who turned 70 earlier this year, we threw and hosted a big party on a boat in the Sydney Harbor.
And three days before the party, I went to my partner. I was like, “We should have a photo booth on the boat.” And she dismissed it, saying, like, “This is three days. We don’t have time. There’s already too much work to do for the party.” She was feeling stressed.
And the creative and entrepreneur in me—I heard it, but I didn’t listen to it.
So then I went to GPT and I said, “Is it actually allowed to have a photo booth on a boat?” And it’s like, “Yes.”
“Okay, can I get a photo booth vendor in three days, in Sydney?” And the answer was yes.
I’m like, “Okay, who are 10 photo booth vendors in Sydney?” And it gave me 10 vendors.
And then I was about to click into the first website, and then I just had this reaction. I was like, “This is too much work.”
So then I said, “How can I contact all of these vendors?” And it gave me their phone numbers and email addresses.
Then I was about to click the email address—and again, I was like, “Still too much work.” I was feeling quite impatient.
So then I paused for a minute, and then I said, “Give me the email addresses, separated by commas.”
And then I opened up Gmail, put the email addresses in BCC, and wrote up just a three-line email saying, “This is the date, this is the location, need a photo booth. Give me a proposal.”
Within three hours, I had four proposals back, showed them to my partner, she picked one that she liked, and it was done.
So the old way of doing that would have taken so many phone calls and missed calls and conversations and just a noise and headache. And this new way literally took probably less than seven minutes of my time, and we got to a solution.
So that’s an example. To abstract it out now—there’s so many perceived barriers to the old way of doing things. And I think in simple daily life tasks, I’m still learning and challenging myself to just think differently of how to approach it.
Ross: So what you describe is obviously what many people hold up as the image of agentic AI: you should have an agent where you can give it the brief, exactly as you did, and it will be able to go and do everything you described.
But at the same time, speaking in early April 2025, agents are still not quite there—as in, we don’t have any agent right now which could do precisely what you’ve said.
So where do you see that pathway in terms of agents being able to do these kinds of tasks? And how is it we use them? Where does that lead us?
Kunal: This is such an interesting moment because we don't know—and that's the fun part.
So we may end up with browser agents—agents that go, open up a browser, click in the browser, and use it on the user’s behalf. And that might be with like 70% accuracy, and then 80%, and then 90%, and then it gets to “good enough” to schedule and manage things.
We might end up with agents that make phone calls—and there’s lots of demos flying around the internet—that make bookings and coordinate details and appointments on our behalf.
Or it may be just a little simpler than that, which may be more realistic—kind of like the photo booth example I gave—which is an agent to just help us think through how to get the task done. And maybe it’s not eliminating the task, but reducing the task.
And I think we have a role to play there, as the human user, and the AI has a role to play. Understanding how to get the best of both versus the worst of both.
The worst of both is impatience on the human and then incompetence on the AI—and then throwing the whole thing out.
I do think there’s a world where it’s the best of both. And probably reframing the goal, which is not to eliminate the human, it’s not to eliminate the task for the human, but to reduce the frustration, reduce the cognitive load, reduce the overhead—the time it takes to get something done.
And software development—we can get into it, if you’d like—is, I think, an example where that’s starting to show itself. It’s not eliminating the human, but it’s reducing the cognitive load and the time and the headache involved.
Ross: So this goes to a very, very big and broad question—this idea of reducing cognitive load and freeing up time. One way to put it is that it allows us to move to higher-order, more complex tasks and thinking and creativity, or gives us time to do other things.
And I think there may be other frames around what that does, but if we are freeing up cognitive load, what do you see as the opportunities from that space?
Kunal: Yeah, I see cognitive load as the critical path right now.
I mean, there’s so many ideas to explore and technologies to try, but there’s a cognitive load to learn it. And I think we have a while to go where we won’t find interesting, creative, or productive uses for our excess cognitive load—probably at least another…
We won’t—there won’t be an excess because, even as AI frees us up, there’s going to be more. There’s still such a big backlog of things we’re interested in, curious in, that we want to apply our cognitive load to—whether it’s productive in an economic sense, or productive in a health sense, or productive in a friendship sense, or productive in a learning sense.
So maybe that’s the way to frame it—is that it’ll become multidimensional. It won’t be purely an economic motivation of work. And there may be other motivations that we have, but are often suppressed or not expressed, because the economic one takes place of this.
Ross: Yeah, no. I mean, that goes, I think, to one of the greatest fallacies in this—people predicting techno-unemployment—which is the idea that there's a fixed amount of work, and if machines take away work, then there's not going to be much left for humans to do.
Well, there's always more to do, and more to create and spend our time on. So there's no fixed amount of work or ideation or thinking or whatever.
But I like this idea that humans are curious. We are inventors, we are thinkers. If AI can help or guide or support us in being more curious—because, amongst other things, we can learn things quickly that would previously have required taking a degree—then that is a massive bonus for humanity.
Kunal: Yeah, yeah, completely.
I am curious—your take. Something I am worried about is if that curiosity becomes of a passive nature versus active.
Passive meaning Netflix and Instagram and TikTok, with the consumption on these more passive platforms growing. And we saw that in the pandemic. We had a bunch of people who were not working, maybe getting some small paychecks from the government, and the response on aggregate was to consume versus create.
And so I do worry—what if the curiosity just turns into more scrolling and browsing, versus something active?
Ross: This goes to my last chapter of Thriving on Overload, where I essentially talk about cognitive evolution—saying it's evolution or devolution, in the sense that the default path for our brain is to just continue to get easy stimulus.
And so, essentially, there are plenty of people who start spending all their day scrolling on TikTok, or whatever equivalent they have. Whereas, obviously, there are some who say, “Well, all of this information abundance means that I can do whatever I want, and I will go and explore and learn and be more than I ever could be before.”
And so you get this divergence.
I think there’s a very, very similar path here with AI, where there are people using AI as the… A lot of recent research is pointing to reduced cognitive functioning because we are offloading.
And I often say, the greatest risk with AI is overreliance—where we just sort of say, “Oh, that’s good enough. I don’t need to do anything anymore.” And I think that’s a very real thing.
And of course, many other people are using these as tools to augment themselves, achieve far more, be more productive, learn faster.
But I think one of the differences between the simple information space in which we’ve been living and the AI space we’re now living in is that AI is interactive. We can ask the questions back.
TikTok or a TV screen and so on are essentially one-way—sure, you can create your own TikToks, and that's great if you do—but AI is inherently interactive.
That doesn't mean we use it in a useful way. The recent Anthropic Economic Index identified "directive" use as one mode of what it called automation—where you say, "Do this," and it just does it—as opposed to a whole array of other modes, more around learning, iterating, and having conversations, which are more the augmenting style.
And there is still this balance, where quite a few are just getting AI to do things. But now we have far more opportunity than with the old tools to be participatory.
Kunal: Yeah. I, yesterday, was using an AI web app, and I got stuck, and I had my first AI voice agent customer support call.
So I just hit “Call,” was immediately connected—no wait time. And then I described my problem, and it guided me through a few steps.
And then I wasn’t able to resolve it—which I assumed was going to be the case—but at the end, it gave me the email address for the startup behind the product, where I couldn’t find the email address anywhere on the website. They probably do that on purpose.
But it was probably like a two-minute interaction, and it was a very pleasant, friendly, instant conversation. And I didn’t mind it.
After that, I noticed—okay, this is the future. My customer service requests and support requests are going to be with AI and voice agents, and they’ll be instant, and the barriers will come down.
Some will be less shy about asking for help. Where today the idea of calling customer support feels so daunting, this actually felt quite effortless. And it will only become more interactive.
Ross: Yeah. Well, it is effort to type, and whatever format people prefer—typing, speaking, or interacting with a video persona—these are all ways we can get through problems and reach resolution faster and faster.
And I think this idea of the personalized tutor—I mean, I’ve always, since way before generative AI, always believed that potentially the single biggest opportunity from AI was personalized education. Because we are all different. We all learn differently, we all have different interests, and we all get stuck.
In classrooms—those who go to school—it’s the same for everyone, with, if you’re lucky, a fraction of a teacher’s time for personalized interaction.
So that’s this—again, that takes the willingness and the desire to learn. But now we have access to what will be, very soon, some of the best, nicest, most interactive tutoring—well, not human.
And I think that is critically different. But that requires simply, then, just the desire.
Kunal: Yeah, I mean, on the desire—I’m curious for your take on this.
I’ve noticed the capabilities of AI are growing at a very fast rate, and it feels like it’s at a faster rate than the adoption of AI. So, like, the capabilities are growing at a faster rate than the adoption of the capabilities. And the gap is getting bigger.
I was part of the smartphone revolution—2007, 2008—and built my business at that moment. And that was an example where the capabilities were higher than the adoption, but we quickly caught up.
And then social media—same thing. Capabilities were ahead of the consumer, but the consumer caught up. Cloud computing—same again. Capabilities grew, and then enterprises caught up pretty fast.
So in previous tech waves, in my lifetime at least, there’s been an initial gap between capabilities and adoption, but it’s narrowed.
And here, it feels like the opposite—the reverse—where the gap between the capabilities and the adoption is getting bigger.
And I’m curious if you agree with that. And, I guess more importantly, what are the implications of that? And, I guess, opportunities.
Ross: Well, I think there's always been this spectrum of uptake—from the internet through to every other technology—from the early adopters through to the laggards.
And now that is becoming far more accentuated, in that there are plenty of people who have never tried an AI tool at all, and there’s plenty of people that spend their days, like you, interacting with the systems and learning how to use it better.
And this is an amplifier, in that those who are on the edge are more able to learn more and keep closer to the edge, and those who are not involved are gradually falling further behind.
And this is one of the very concerning potentials for augmenting divides that we have in society—between wealth and income and access to opportunity.
So I think it is real. And I think it's in the nature of it that the gap increases over time.
Kunal: Yeah, yeah. In the book, I talk about AI—this moment when AI goes from being a noun to a verb.
And, like, we’ve learned to speak, to walk, to write, to read, and then to AI—introducing this idea of AI literacy.
And it boggles my mind that in a lot of parts of the world, schools are banning AI for kids. And that horrifies me, knowing that this is going to be as important as reading and writing.
Ross: Yeah, no, I think that’s absolutely true.
So in our recent episode with Nisha Talagala—she runs AI literacy programs across schools around the world, and she's doing some extraordinary work there.
And it’s really inspiring—and doing obviously a very good job at bringing those principles.
But yeah, I think that’s really true, and I think that’s a great sort of conclusion, and bringing that journey from the book and what we’ve looked at—and, I suppose, these next steps of how it is we use these tools, as you say, as a verb, not a noun.
So where can people go to find out more about your work?
Kunal: Yeah. So it’s my book 2034, and my other books—find them all on Amazon, Audible, free on Spotify, like the AI-narrated version of my voice reading them to you.
And then my website, kunalgupta.live, and I have an AI newsletter called pivot5.ai—the number five—and that’s a daily newsletter that goes to a few hundred thousand people and kind of top-line summarized for a business leadership audience.
Ross: Awesome. Thanks so much. Really appreciate your time, your insights.
Kunal: Thank you.
The post Kunal Gupta on the impact of AI on everything and its potential for overcoming barriers, health, learning, and far more (AC Ep86) appeared first on Humans + AI.

Apr 16, 2025 • 40min
Lee Rainie on being human in 2035, expert predictions, the impact of AI on cognition and social skills, and insights from generalists (AC Ep85)
In this engaging discussion, Lee Rainie, Director of the Imagining the Digital Future Center, dives into the implications of AI on work and identity. He raises critical points about human traits at risk of obsolescence and the potential for overreliance on machines. Rainie emphasizes the importance of creativity and emotional intelligence amidst technological advancements, urging listeners to reflect on future societal norms. He shares insights on developing a more comprehensive understanding of expert predictions while reminding us that humans inherently seek value and connection.

Apr 9, 2025
Kieran Gilmurray on agentic AI, software labor, restructuring roles, and AI native intelligence businesses (AC Ep84)
“Let technology do the bits that technology is really good at. Offload to it. Then over-index and over-amplify the human skills we should have developed over the last 10, 15, or 20 years.”
– Kieran Gilmurray
About Kieran Gilmurray
Kieran Gilmurray is CEO of Kieran Gilmurray and Company and Chief AI Innovator of Technology Transformation Group. He works as a keynote speaker and fractional CTO, delivering transformation programs for global businesses. He is the author of three books, most recently Agentic AI, and has been named a top thought leader on generative AI, agentic AI, and many other domains.
Website:
Kieran Gilmurray
X Profile:
Kieran Gilmurray
LinkedIn Profile:
Kieran Gilmurray
BOOK: Free chapters from Agentic AI by Kieran Gilmurray
Chapter 1: The Rise of Self-Driving AI
Chapter 2: The Third Wave of AI
Chapter 3: Agentic AI – Mapping the Road to Autonomy
Chapter 4: Effective AI Agents
What you will learn
Understanding the leap from generative to agentic AI
Redefining work with autonomous digital labor
The disappearing need for traditional junior roles
Augmenting human cognition, not replacing it
Building emotionally intelligent, tech-savvy teams
Rethinking leadership in AI-powered organizations
Designing adaptive, intelligent businesses for the future
Episode Resources
People
John Hagel
Peter Senge
Ethan Mollick
Technical & Industry Terms
Agentic AI
Generative AI
Artificial intelligence
Digital labor
Robotic process automation (RPA)
Large language models (LLMs)
Autonomous systems
Cognitive offload
Human-in-the-loop
Cognitive augmentation
Digital transformation
Emotional intelligence
Recommendation engine
AI-native
Exponential technology
Intelligent workflows
Transcript
Ross Dawson: Hey, it’s fantastic to have you on the show.
Kieran Gilmurray: Absolutely delighted, Ross. Brilliant to be here. And thank you so much for the invitation, by the way.
Ross: So agentic AI is hot, hot, hot — these new levels of autonomous or semi-autonomous aspects of AI. So I want to really dig in. You've got a new book out on agentic AI, particularly looking at the future of work, and I particularly want to look at work and amplifying cognition.
So I want to start off just by thinking about, first of all, what is different about agentic AI from generative AI, which we’ve had for the last two or three years, in terms of our ability to think better, to perform our work better, to make better decisions? So what is distinctive about this layer of agentic AI?
Kieran: I was going to say, Ross, comically: nothing, if we don't actually use it. Because it's like all the technologies that have come along over the last 10–15 years. We've had every technology we have ever needed to make work more efficient, more creative, more innovative, and to get teams working together a lot more effectively.
But let's be honest, technology's dirty little secret is that we as humans very often resist it. So I'm hoping that we don't resist this technology like the others we have slowly resisted in the past, though they've all eventually come around to change how we work.
But this one is subtly different. You could say agentic AI is just another artificial intelligence system. The difference shows if you take some of the recent digital workforce, or digital labor, as I describe it, and go back eight years to robotic process automation — which was very much about helping people perform what were meant to be end-to-end tasks.
So in other words, the robots took the bulky work, the horrible work, the repetitive work, the mundane work and so on — all vital stuff to do, but not where you really want to put your teams, not where you really want to spend your time. And usually, all of that mundaneness sucked creativity out of the room.
You ended up doing it most of the day, got bored, and then never did the innovative, interesting stuff.
Agentic is still digital labor, sitting on top of large language models. And the difference here, as described, is that this is meant to be able to act autonomously. In other words, you give it a goal and off it goes, with minimal or no human intervention — you can design it either way, or both.
And the systems are meant to be more proactive than reactive. They plan, they adapt, they operate in more dynamic environments. They don’t really need human input. You give them a goal, they try and make some of the decisions.
And the interesting bit is, there is — or should be — human in the loop in this. A little bit of intervention.
But the piece here, unlike RPA — that was RPA 1, I should say, not the later versions because it’s changed — is its ability to adapt and to reshape itself and to relearn with every interaction.
Or take it at the most basic level — look at a robot under the sea trying to navigate and build pipelines. In the past, it would get stuck, and a human would need to intervene to fix it.
Now it's starting to work things out and determine what to do itself. If you take that into business, for example, you can now get a group of agentic agents to go out and do an analysis of your competitors.
You can get another agentic agent to do deep research — the kind of research a McKinsey or BCG might do. You can get another agent to bring that information back, distill it, and assemble it; get an agent to turn that into an article; get another agent to proofread it; get another agent to pop it up onto your social media channels and distribute it.
And get another agent to SEO-optimize it, then check and reply to any comments that anyone makes. You're sort of going, "But that feels quite human." Well, that's the idea of this.
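To make the chain concrete, here is a minimal sketch of that kind of pipeline, assuming a hypothetical run_agent wrapper standing in for whatever model or agent framework is used; the roles mirror the steps Kieran lists, and the sign-off prompt stands in for the human-in-the-loop he mentions, not any specific product's API.

```python
# Minimal sketch of a chained "digital labor" pipeline. Each step consumes
# the previous step's output, and a human signs off before anything goes out.

def run_agent(role: str, task: str, context: str = "") -> str:
    """Stand-in for a real agent call; returns a placeholder string here."""
    return f"[{role} output for: {task}]"

research = run_agent("researcher", "Analyze our top three competitors")
deep_dive = run_agent("deep-researcher", "Do deep research on market trends",
                      context=research)
draft = run_agent("writer", "Turn this research into an article",
                  context=deep_dive)
article = run_agent("proofreader", "Proofread and tighten the draft",
                    context=draft)

# Human in the loop: nothing is published without explicit sign-off.
print(article)
if input("Publish? [y/N] ").lower() == "y":
    run_agent("publisher", "Post to our social media channels", context=article)
    run_agent("seo-optimizer", "SEO-optimize and monitor replies",
              context=article)
```

Frameworks differ in how they express this (goals, tools, memory), but the underlying pattern of handing one agent's output to the next, with a human checkpoint before the irreversible step, is the same.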
Now we’ve got generative AI, which creates. The problem with generative AI is that it didn’t do. In other words, after you created something, the next step was, well, what am I going to do with my creation?
Agentic AI is that layer on top where you’re now starting to go, “Okay, not only can I create — I can decide, I can do and act.” And I can now make up for some of the fragility that exists in existing processes where RPA would have broken.
Now I can sort of go from A to B to D to F to C, and if suddenly G appears, I’ll work out what G is. If I can’t work it out, I’ll come and ask a person. Now I understand G, and I’ll keep going forever and a day.
Why is this exciting — or interesting, I should say? Well-used, this can now make up for all the fragility of past automation systems where they always got stuck, and we needed lots of people and lots of teams to build them.
Whereas now we can let them get on with things.
Where it’s scary is that now we’re talking about potential human-level cognition. So therefore, what are teams going to look like in the future? Will I need as many people? Will I be managing — as a leader — managing agentic agents plus people?
Agentic agents can work 24/7. So am I, as a manager, now going to be expected to do that?
Its impact on what types of skills — not just leadership, but digital and data and technical and everything else — there's a whole host of questions. There are as many questions as there is new technology here, Ross.
Ross Dawson: Yeah, yeah, absolutely. And those are some of the questions I want to ask you — for the best possible answers we have today.
And in your book, you do emphasize this is about augmenting humans. It is around how it is we can work with the machines and how they can support us, and human creativity and oversight being at the center.
But the way you’ve just laid out, there’s a lot of what is human work, which is overlap from what you’ve described.
So just at a first step, thinking about individuals, right? Professionals, knowledge workers — and so they have had, there’s a few layers. You’ve had your tools, your Excels. You’ve had your assistants which can go and do tasks when you ask them. And now you have agents which can go through sequences and flows of work in knowledge processes.
So what does that mean today for a knowledge worker who is starting to have, where the enterprise starts to bring them in? Or they say, “Well, this is going to support it.” So what are the sorts of things which are manifest now for an individual professional in bringing these agentic workforce play? What are the examples? What are ways to see how this is changing work?
Kieran Gilmurray: Yeah, well, let’s dig into that a little bit, because there’s a couple of layers to this.
If you look at what AI potentially can do through generative AI, all of a sudden, the question becomes: why would I actually hire new trainees, new labor?
On the basis that, if you look at any of the studies produced recently, there are two setups. So let me take the first, which is: actually, we don't need junior labor, because junior labor takes a long time to learn something.
Whereas now we’ve got generative AI and other technologies, and I can ask it any question that I want, and it’s going to give me a pretty darned good answer.
And therefore, rather than taking three or four or five years to train someone to a level of competency, why don't I just put in agentic labor instead? It can do all that low-ish level work, and I don't need to spend five years on learning — I immediately have an answer.
Now, that's still up for debate, because the technology isn't good enough yet. It's like the first versions of the scientific calculator — they didn't quite work. Now we don't even think about it.
So there is a risk that all of a sudden, agentic AI can get me an answer, or generative AI can get me an answer, that previously would have taken six or eight weeks.
Let me give you an example.
So I was talking to a professor from Chicago Business School the other day, and he went to one of his global clients. Normally, the global client would ask about a strategy item. He would go away, and he and a team of juniors and peers would research the topic over six or twelve weeks. Then they would come back with a detailed answer, where the juniors would have gone around and done all the grunt work, all the searching and everything else, and the seniors would have distilled it.
This time — he'd actually written a version of a GPT — he fed it past strategy documents, and he fed in the client details.
Now he did this in a private GPT, so it was clean and clear, and in two and a half hours, he had an answer.
Literally — his words, not mine — he went back to the client and said, "There you go. What do you think? By the way, I did that with generative AI and agentics."
And they went, “No, you didn’t. That work’s too good. You must have had a team on this.”
And he said, “Literally not.” And he’s being genuine, because I know the guy — he’d put his reputation on it.
So all of a sudden, now all of those roles that might have existed could be impacted.
But where do we get then the next generation of labor to come through in five and six and ten years’ time?
So there’s going to be a lot of decisions need made. As to: look, we’ve got Gen AI, we’ve potentially got agentic AI. We normally bring in juniors over a period of time, they gain knowledge, and as a result of gaining knowledge, they gain expertise. And as a result of gaining expertise, we get better answers, and they get more and more money.
But now all of Gen AI is resulting in knowledge costing nothing.
So where you and I would have went to university — let’s say we did a finance degree — that would have lasted us 30 years. Career done. Tick.
Now, actually, Gen AI can pretty much understand, or will understand, everything that we can learn on a finance degree, plus a politics degree, plus an economics degree, plus, plus, plus — all out of the box for $20 a month.
And that’s kind of scary.
So when it comes to who we hire, that opens up the question now: do we have Gen AI and agentic labor, and do we actually need as many juniors?
Now, someone’s going to have to press the buttons for the next couple of years, and any foresighted firm is going to go, “This is great, but people plus technology actually makes a better answer.” I just might not need as many.
So now, when it comes to the actual hiring and decision-making — as to how am I going to construct my labor force inside of an organization — that’s quite a tricky question, if and when this technology, Gen AI and agentics, really ramps through the roof.
Ross Dawson: I mean, I think these are fundamentally strategic choices to be made. Crudely, it's automate or augment.
And you could say, well, all right, first of all, just say, “Okay, well, how do we automate as many of the current roles which we have?” Or you can say, “Oh, I want to augment all of the current roles we have, junior through to senior.”
And there’s a lot more subtleties around those strategic decisions. In reality, some organizations will be somewhere between those two extremes — and a lot in between.
Kieran Gilmurray: 100%. And that's the question. Or potentially, at the moment, it's actually, "Why don't we just augment for now?"
Because the technology isn’t good enough to replace. And it isn’t — it still isn’t.
And no, I’m a fan of people, by the way — don’t get me wrong. So anyone listening to this should hear that. I believe great people plus great technology equals an even greater result.
The technology, as it exists at the moment — and you look at some research out from Harvard, Ethan Mollick, HBR, Microsoft, you name it, it's all coming out at the moment — says that if you give people Gen AI technology, of which agentic AI is one component:
“I’m more creative. More productive. And, oddly enough, I’m actually happier.”
It’s breaking down silos. It’s allowing me to produce more output — between 10 to 40% — but more quality output, and, and, and.
So at the moment, it’s an augmentation tool. But we’re training, to a degree, our own replacements.
Every time we click a thumbs up, a thumbs down. Every time we redirect the agentics or the Gen AI to teach it to do better things — or the machine learning, or whatever else it is — then technically, we’re making it smarter.
And every time we make it smarter, we have to decide, “Oh my goodness, what are we now going to do?” Because previously, we did all of that work.
Now, that for me has never been a problem. Because for all of the technologies over the decades, everybody’s panicked that technology is going to replace us.
We’ve grown the number of jobs. We’ve changed jobs.
Now, this one — will it be any different?
Actually — and this is why I say potentially — you and I never worried, and our audience never worried too much, when an EA was automated out of a job. When the taxi driver was augmented and automated out of a job. When the factory worker was augmented out of a job.
Now we’ve got a decision, particularly when it comes to so-called knowledge work. Because remember, that’s the expensive bit inside of a business — the $200,000 salaries, the $1 million salaries.
Now, as an organization, I’m looking at my cost base, going, “Well, I might actually bring in juniors and make them really efficient, because I can get a junior to be as productive as a two-year qualified person within six months, and I don’t need to pay them that amount of money.”
And/or, actually, “Why don’t I get rid of my seniors over a period of time? Because I just don’t need any.”
Ross Dawson: Things that some leaders will do. But it comes back to the theme of amplifying cognition. The real nub of the question is: yes, you can say, "All right, now we are training the machine, and the machine gets better because it's interacting. We're giving it more work."
But it’s really finding the ways in which the nature of the way we interact also increases the skills of the humans.
And so John Hagel talks about scalable learning. In fact, Peter Senge used to talk about organizational learning — and that’s no different today. We have to be learning.
And so, as we engage with the AI — and, as you rightly point out, we are teaching and helping the AI to learn — we need to build the processes and systems and structures and workflows where the humans in them are not static and stagnant as they use AI more, but become more competent and more capable.
Kieran Gilmurray: Well, that’s the thing we need to do, Ross.
Otherwise, what we end up with is something called cognitive offloading — where, all of a sudden, I get lazy, I let AI make all of the decisions, and over time I forget how to do the work and stop being valuable.
For me, this is a question of great potential with technology. But the real question comes down to: okay, how do we employ that technology?
And to your point a second ago — what do we do as human beings to learn the skills we need to stay highly employable? To create — to be more innovative and more creative — using technology?
Ross Dawson: You’ve just asked the question I was going to ask you.
Kieran Gilmurray: 100%, and this is — this is literally the piece here, so—
Ross: That’s the question. So do you have any answers to that?
Kieran: No, of course. Of course. Well, my answer is this.
So, for me, AI is massive — absolutely. And let me explain that, because everybody thinks it’s only been around for the couple of years we’ve had generative AI — but AI has been around for 80-plus years. It’s what I call an 80-year-old overnight success story.
Everybody’s getting excited about it. Remember, the excitement is down to the fact that I — or you — can now interact with technology in a very natural way and get answers that we previously couldn’t.
So now, all of a sudden, we’re experts in everything across the world. And if you use it on a daily basis, all of a sudden, our writing is better, our output’s better, our social media is better.
So the first bit is: just learn how to use and how to interact with the technology.
Now, as we mentioned a moment ago — but hold on a second here — what happens if everybody uses it all the time, the AI has been trained, and it has a whole host of new skills?
Well, what will I do?
Well, this for me has always been the case. Technology has always come. There are a lot fewer saddlers than there are software engineers. There might be a lot fewer software engineers in the future.
So therefore, what do we do?
Well, my answer is this — and it has been the same regardless of the technology: let technology do the bits that technology is really good at. Offload to it.
You still need to develop your digital, AI, automation, and data literacy skills — without a doubt. And you might do a little bit of offloading, because we don’t actually think about scientific calculators anymore. We just get on with it.
We don’t go into Amazon and manually work out all of our product choices, because it’s got a recommendation engine. So let it keep doing all its stuff.
Whereas, as humans, I want to develop greater curiosity. I want to develop what I would describe as greater cognitive flexibility. Now that I’ve got this technology, how can I produce even greater outputs and outcomes — better-quality work, more innovative work?
And part of that is now going, “Okay, let the technology do all of its stuff. Free up tons of hours,” because what used to take me weeks takes me days.
Now I can do other stuff, like wider reading. I can partner with more organizations. I can attempt to do more things in the day — whereas in the past, I was just too busy trying to get the day job done.
The other bit I would say is: companies need to develop emotional intelligence in their people.
Because if I can get the technology to do the routine stuff, I still need to engage with the tech. But more importantly, I’m now freed up to work across silos, to work across businesses, to bring in different partner organizations.
And statistically, only 36% of us are actually emotionally intelligent.
Now, AI is an answer for that as well — but emotional intelligence should be something I would be developing inside of an organization. A continuous innovation mindset. And I’d be teaching people how to communicate even better.
Notice I’m letting the tech do all the stuff that tech should do regardless. Now I’m just over-indexing and over-amplifying the human skills that we should have developed over the last 10, 15, or 20 years.
Ross Dawson: Yeah. And your point — this comes back to people working together. I think that was certainly one of the interesting parts of your book — the section on team dynamics.
So there’s a sense of, yes, we have agentic systems. This starts to change the nature of workflows. Workflows involve multiple people. They involve AI agents as well.
So as we are thinking about teams — as in multiple humans assisted by technology — what are the things which we need to put in place for effective team dynamics and teamwork?
Kieran Gilmurray: Yeah, so — so look, what you will see potentially moving forward is that mixture of agentic labor working with human labor.
And therefore, from a leadership perspective, we need people — we need to teach people — to lead in new ways. Like, how do I apply agentic labor and human labor? And what proportion? What bits do I get agentic labor to do? What bits do I get human labor to do?
Again, we can’t hand everything over to technology. When is it that I step in? Where do I apply humans in the loop?
When you look at agentic labor, it’s going to be able to do things 24/7, but as people, we physically can’t. So when am I going to work? What is the task that I’m going to perform?
As a leadership team or as a business — what are the KPIs that I’m going to measure myself on, and my team on? Because now, all of a sudden, my outputs could be greater, or I’m asking people to do different roles than they’ve done in the past, because we can get agentic labor to do the rest.
So there’s a whole host of what I would describe as current management considerations. Because, let’s be honest — just as when we introduced ERP, CRM, factory automation, or anything else — it simply changed the nature of the tasks that we perform.
So this is thinking through: where is the technology going to be used? Where should we not use it? Where should we put people? How am I going to manage it? How am I going to lead it? How am I going to measure it?
These are just the latest questions that we need to answer inside of work.
And again, from a skillset perspective — both for leadership and for getting my human team to do particular work — how do I onboard people? How do I develop them? What are the skills that I’m now looking for when I’m recruiting?
What are the career paths that I’m going to put in place, now that we’ve got human plus agentic labor working together?
Those are all conversations that managers, leaders, and team leaders need to have — and strategists need to have — inside of businesses.
But it shouldn’t worry businesses, because again, we’ve had this same conversation for the last five decades. It’s just been different technology at different times, where we had to suddenly reinvent what we do, how we do it, how we measure it, and how we manage it.
Ross Dawson: So what are the specifics of how team dynamics might work using agentic AI in a particular industry or a particular situation? Any examples? Let’s ground this.
Kieran Gilmurray: Yeah, so let’s — let me ground it in physical robots before I come into software robots, because this is what this is: software labor, not anything else.
When you look at how factories have evolved over the years — so take Cadbury’s factory in the UK. At one stage, Cadbury’s had thousands and thousands of workers, and everybody ended up engaging on a very human level — managing people, conversations every day, orchestration, organization. All of the division of labor stuff happened.
Now, when you go into Cadbury’s factory, it’s hugely automated — like other factories around the world. So now we’re having to teach people almost to mind the robots.
Now we have far fewer people inside of our organizations. And hopefully — to God — this won’t happen in what I’d describe as the knowledge worker space. But we’re going to have to teach people how to build logical, organized, sequential things. Because breaking something down into a process to build a machine — it’s the same thing when it comes to software labor.
How am I going to deconstruct a process and break it down into something else? The mindset needed to actually put software labor into place differs from anything else that we’ve done.
Humans were messy. Robots can’t be. They have to work in very logical steps.
In the past, we were used to dealing with each other. Now I’m going to have to communicate with a robot. That’s a very different conversation. It’s non-human. It’s silicon — not carbon.
So how do I engage with a robot? Am I going to be very polite? And I see a lot of people saying, “Please, would you mind doing the following?” No — it’s a damn robot. Just tell it what to do. My mindset needs to change.
In the past, when I was asking someone to do something, I might say, “Give me three things” or “Can you give me three ideas?” Now I’ve got an exponential technology, so my expectations and requests of agentic labor are going to differ.
But I need to remember — I’m asking a human one thing and a bot another.
Let me give you an example. I might say to you, “Ross, give me three examples of…” Well, that’s not the mindset we need to adopt when it comes to generative AI. I should be going, “Give me 15, 50, 5,000,” because it’s a limitless vat of knowledge that we’re asking for.
And then I need to practice and build human judgment — to say, “Actually, I’m not going to cognitively offload, let it think for me, and just accept all the answers.” I’m now going to have to work with this technology and other people to develop that curiosity and that challenging mindset — to teach people how to do deeper research, to fact-check everything they’re being told.
To understand when I should use a particular piece of information that’s been given to me — and hope to God it’s not biased, hallucinated, or anything else — but that it’s actually a valuable knowledge item I should be putting into a workflow, a project, a particular document, or something else.
So again, it’s just working through: what is technology? What’s the technology in front of me? What’s it really good at? Where can I apply it?
And understanding that — where should I put my people, and how should I manage both?
What are the skills that I need to teach my people — and myself — to allow me to deal with all of this potentially fantastic, infinite amount of knowledge and activity that will hopefully autonomously deliver all the outcomes that I’ve ever wanted?
But not unfettered. And not left to its own devices — ever.
Otherwise, we have handed over human agency and team agency — and that’s not something or somewhere we should ever go. The day we hand everything to the robots, we might as well just go to the care home and give up.
Ross Dawson: We’ll be doing that soon. So now, let’s think about leadership.
I mean, you’ve alluded to this quite a few times — a lot of what you’ve said has really been about the questions, issues, and challenges that leaders at all levels need to engage with. But this changes, in a way, the nature of leadership.
As you say, you’ve got digital labor as well as human labor. The organization has a different structure. It impacts the boundaries of organizations and the flows of information and processes across organizational boundaries.
So what is the shift for leaders? And in particular, what are the things that leaders can do to develop their capabilities for a somewhat different world?
Kieran Gilmurray: Yeah, it’s interesting.
So I think there’ll be a couple of different worlds here. Number one is, we will do what we’ve always done, which is: we’ll put in a bit of agentic labor, and we’ll put in a bit of generative AI, and we’ll basically tweak how we actually operate. We’ll just make ourselves marginally more efficient.
Because anything else could involve the redesign and the restructure of the organization, which could involve the restructure and the redesign of our roles. And as humans, we are very often very change-resistant.
Therefore, I don’t mind technology that I understand, and I don’t mind technology that makes me more productive and more creative. But I do mind technology that could actually disrupt how I lead and where I fit inside of the organization.
So for those leaders, there’s going to be a minimal amount of change — and there’s nothing wrong with that. That’s what I call the “taker philosophy,” because you go: taker, maker, shaper — and I’ll walk through those in a second — which is, I’ll just take another great technology and I’ll be more productive, more creative, more innovative.
And I recommend every business does that at this moment in time. Who wouldn’t want to be happier with technology doing greater things for you?
So go — box number one.
And therefore, the skills I’m going to have to learn — not a lot of difference. Just new skills around AI. In other words, understanding bias, hallucinations, understanding cognitive offloading, understanding where to apply the technology and not.
And by “not,” I mean: very often people throw technology at something that has no economic value. They waste time, money, and energy, and get staff frustrated. So those are just skills people have to learn — and as I’ve said, it could be any technology.
The other method of doing this is almost what I describe as the COVID method. I need to explain that statement.
When COVID came about, we all worked seamlessly. It didn’t matter. There were no boundaries inside of organizations. Our mission was to keep our customers happy. And therefore, it didn’t matter about the usual politics, the usual silos, or something else. We made things work, and we made things work fast.
What I would love to see organizations doing — and very few do it — is redesign and re-disrupt how they actually work.
And I’m sitting there going: it’s not that I’m doing what I’m doing and I’ve now got a technology, so “Where do I add it on?” — as in two plus one equals three.
What I’m sitting going and saying is: How can I fundamentally reshape how I deliver value as an organization?
Working back from the customer — who will pay a premium for this — how do I reconstruct my entire business in terms of leadership, people, agentic and human labor, open ecosystems and partnerships, and everything else, to deliver in a way that excites and delights?
Take the difference between a bookstore and Amazon — I never, or rarely, go into a bookstore anymore. I now buy on Amazon almost every time, without even thinking about it.
If I look at AI-native labor — they’re what I describe as Uber’s children. Their experiences of the world and how they consume are very different from the world you and I constructed.
Therefore, how do I create what you might call AI-native intelligent businesses that deliver in a way that is frictionless and intelligent?
And that means: intelligent processes, intelligent people, using intelligent technology, intelligent leadership — forgetting about silos and breakdowns and everything else that exists politically inside of organizations — but applying the best technology. Be it agentics, be it automation, be it digital, be it CRM, ERP — it doesn’t really matter what it is.
Having worked back from the customer, design an organization to deliver on its promise to customers — to gain a competitive advantage.
And those competitive advantages will last less and less time. Technology can be copied more and more quickly. Therefore, my business strategy won’t be a 10-year strategy. It possibly won’t be five. It might be three — or even less.
But my winning as a business will be my ability to construct great teams. And those great teams will be great people plus great technology — to allow me to deliver something digitally and intelligently to consumers who want to pay a premium for as long as that advantage lasts.
And it might be six months. It might be twelve months. It might be eighteen months.
So now we’re getting to a phase of almost fast technology — just like we have fast fashion.
But the one thing we don’t want to do is play fast and loose with our teams. Because ultimately, I still come back to the core of the argument — that it’s great people, who are emotionally intelligent, who’ve been trained to question everything they’re given, who are curious, who enjoy working as part of a team in a culture — and that piece needs to be taken care of as well.
Because if you just throw robots at everything and leave very few people, then what culture are you actually trying to deliver for your staff and for your customers?
How do I get all of this work to deliver in a way that is effective, is affordable, is operationally efficient, profitable — but with great people at the core, who want to continue being curious, creating new and better ways of delivering in a better organization?
Not just in the short term — because we’re very short-termist — but how do I create a great organization that endures over the next five or ten years?
By creating flexible labor and flexible mindsets, with flexible leaders organizing and orchestrating all this — to allow me to be a successful business.
Change is happening too quickly these days. Change is going to get quicker.
Therefore, how do I develop an adaptive mindset, adaptive labor force, and adaptive organization that’s going to survive six months, twelve months — and maybe, hopefully to God, sixteen months plus?
Ross Dawson: Fantastic. That’s a great way to round out. So where can people find out more about your work?
Kieran Gilmurray: Yeah, look, I’m on LinkedIn all the time — probably too much. I should get an agentic labor force to sort that out for me, but I’d much rather have authentic relationships than anything else.
Find me on LinkedIn — Kieran Gilmurray. I think there are only two of me: one’s in Scotland, who is related some way back, and the Irish one.
Or www.kierangilmurray.com is where I publish far too much stuff and give far too much away for free. But I have a philosophy that a rising tide lifts all boats. So the more we share, the more we give away, the more we benefit each other.
So that’s going to continue for quite some time.
I have a book out on agentic AI. Again, it’s being given away for free. Ross, if you want to share it, please go for it, sir, as well.
As I said, let’s continue this conversation — but let’s continue this conversation in a way that isn’t about replacing people. It’s about great leadership, great people, and great businesses that have people at their core, with technology serving us — not us serving the technology.
Ross: Fabulous. Thanks so much, Kieran.
Kieran: My pleasure. Thanks for the invite.
The post Kieran Gilmurray on agentic AI, software labor, restructuring roles, and AI native intelligence businesses (AC Ep84) appeared first on Humans + AI.