Humans + AI

Ross Dawson
Jul 23, 2025 • 41min

Tim O’Reilly on AI native organizations, architectures of participation, creating value for users, and learning by exploring (AC Ep11)

“We’re in this process where we should be discovering what’s possible… That’s what I mean by AI-native — just go figure out what the AI can do that makes something so much easier or so much better.” – Tim O’Reilly

About Tim O’Reilly
Tim O’Reilly is the founder, CEO, and Chairman of leading technical publisher O’Reilly Media, and a partner at early-stage venture firm O’Reilly AlphaTech Ventures. He has played a central role in shaping the technology landscape, including in open source software, Web 2.0, and the Maker movement. He is the author of numerous books, including WTF? What’s the Future and Why It’s Up to Us.
Website: www.oreilly.com
LinkedIn Profile: Tim O’Reilly
X Profile: Tim O’Reilly

What you will learn
Redefining AI-native beyond automation
Tracing the arc of human-computer communication
Resisting the enshittification of tech platforms
Designing for participation, not control
Embracing group dynamics in AI architecture
Unlocking new learning through experimentation
Prioritizing value creation over financial hype

Episode Resources

Transcript

Ross Dawson: Tim, it is fantastic to have you on the show. You were my very first guest on the show three years ago, and it’s wonderful to have you back.

Tim O’Reilly: Well, thanks for having me again.

Ross: So you have seen technology waves over decades and been right in there forming some of those. And so I’d love to get your perspectives on AI today.

Tim: Well, I think, first off, it’s the real deal. It’s a major transformation, but I like to put it in context. The history of computing is the history of making it easier and easier for people to communicate with machines. I mean literally in the beginning, they had to actually wire physical circuits into a particular calculation, and then they came up with the stored program computer. And then you could actually input a program one bit at a time, first with switches on the front of the computer. And then, wow, punch cards. And we got slightly higher level languages. First it was assembly programming, and then higher-level languages like Fortran, and that whole generation. Then we had GUIs. I mean, first we had command lines. Literally the CRT was this huge thing. You could literally type and have a screen. And I guess the point is, each time that we had an advance in the ease of communication, more people used computers. They did more things with them, and the market grew. And I have a lot of disdain for this idea that AI is just going to take away jobs. Yes, it will be disruptive. There has been a lot of disruption in the past of computing. I mean, hey, if you were a programmer, you used to have to know how to use an oscilloscope to debug your program. And a lot of that old analog hardware for looking at the waveforms and so on — not needed anymore, right? I remember stepping through programs one instruction at a time. There are all kinds of skills that went away. And so maybe programming in a language like Python or Java goes away, although I don’t think we’re there yet, because of course it is simply the intermediate code that the AIs themselves are generating, and we have to look at it and inspect it. So we have a long way before we’re at the point that some people are talking about — evanescent programs that are generated on demand because the AI is so good at it, and then just disappear. You ask it to do something, and yeah, it generates code, just like maybe a compiler generates code.
But I think that’s a bit of a wish list, because these machines are not deterministic in the way that previous computers were. And I love this framework that there’s really — we now have two different kinds of computers. Wonderful post — trying to think who, name’s escaping me at the moment — but it was called “LLMs Are Weird Computers.” And it made the point that you have, effectively, one machine that we’re working with that can write a sonnet but really struggles to do math repeatedly. And you have another type of machine that can come up with the same answer every single time but couldn’t write a sonnet to save its life. So we have to get the best of both of these things. And I really love that as a framework. It’s a big expansion of capability. But returning back to this idea of more — the greater ease of use expanding the market — just think back to literacy. There was a time when there was a priesthood. They were the only people who could read and write. And they actually even read and wrote in a dead language — Latin — that nobody else even spoke. So it was this real secret, and it was a source of great power. And it was subversive when they first, for example, printed the Bible in English. And literally, when they printed the printed book — the printed book was the equivalent of our current, “Oh my God, social media turbocharged with AI, social disruption.” There was 100 years of war after the dissemination of movable type, because suddenly the Bible and other books were available in English. And it was all this mass communication, and people fought for 100 years. Now, hopefully we won’t fight for 100 years. But disruption does happen, and it’s not pretty. But it’s not — there’s a way that the millennialist kind of version of where this is somehow terminal is just wrong. I mean, we will evolve. We will figure out how to coexist with the machines. We’ll figure out new things to do with them. And I think we need to get on with it. But I guess, back to this post I wrote called “AI First Puts Humans First,” there’s a lot of pressure from various companies. They’re saying you must use AI. And they’ve been talking about AI first as a way of, like, “If you try to do it with AI first because we want to get rid of the people.” And I think of AI first — or what I prefer, the term AI native — as a way of noticing: no, we want to figure out what the capabilities of this machine are. So try it first, and then build with it. And in particular, I think of the right way to think about it as a lot like the term “mobile first.” It didn’t mean that you didn’t have other applications anymore. It just meant, when companies started talking about mobile first, it meant we don’t want it to be an afterthought. And I think we need to think that way about AI. How can we reinvent the things that we’re doing using AI? And anybody who thinks it’s just about replacing people is missing the point. Ross: Yeah, well, that’s going back to the main point around the ease of communication. So the layers of which we are getting our intent to be able to flow through into what the computers do. So what struck me with the beginning of LLMs is that what is distinctive about humans is our intention and our intention to achieve something. So now, as you’re saying, the gap between what we intend and what we can achieve is becoming smaller and smaller, or it’s getting narrower and faster. 
Also, we can democratize it in the sense of — yeah, there is more available to more people in various guises, to different degrees, where you can then manifest in software and technology your intention. Yeah, so that democratizes — as you say, this is — there are ways in which this is akin to the printing press, because it democratizes that ability to not just understand, but also to achieve and to do and to connect. Tim: Yeah, there is an issue that I do think we need to confront as an industry and as a society, and that is what Cory Doctorow calls “enshittification.” This idea — actually, I had a different version of it, but let’s talk about Cory’s version first. The platforms first are really good to their users. They create these wonderful experiences. Then they use the mass of users that they’ve collected to attract businesses, such as advertisers, and they’re really good to the advertisers but they’re increasingly bad to the users. Then, as the market reaches a certain saturation point, they go, “Well, we have to be bad to everybody, because we need the money first. We need to keep growing.” I did a version of this. I wrote a paper called Rising Tide Rents and Robber Baron Rents, where I used the language of economic rents. We have this notion of Schumpeterian rents — or Schumpeterian profits — where a company has innovated, they get ahead of the competition, and they have outsized profits because they are ahead. But in the theory, those rents are supposed to be competed away as knowledge diffuses. What we’ve seen in practice is companies put up all kinds of moats and try to keep the knowledge from diffusing. They try to lock in their users and so on. Eventually, the market stagnates, and they start preying on their users. We’re in that stage in many ways as an industry. So, coming to AI, this is what typically happens. Companies stagnate. They become less innovative. They become protective of their profits. They try to keep growing with, effectively, the robber baron rents as opposed to the innovation rents. New competition comes along, but here we have a problem — the amount of capital that’s had to go into AI means that none of these companies are profitable. So they’re actually enshittified from the beginning, or the enshittification cycle will go much, much more quickly, because the investors need their money. I worry about that. This has really been happening since the financial crisis made capital really cheap. We saw this with companies like Lyft and Uber and WeWork — that whole generation of technology companies — where the market didn’t choose the winner. Capital chose the winner. The guy who actually invented all of that technology for on-demand cars was Sunil Paul with Sidecar. Believe it or not, he raised the same amount of money that Google raised — which was $35 million. Uber and Lyft copied his innovations. Their venture was doing something completely different. Uber was black cars summoned by SMS. Lyft was a web app for inner-city people trying to find other people to share rides between cities. They pivoted to do what Sunil Paul had invented, and they threw billions at it, and they bought the market. Sure enough, the companies go public, unprofitable. Eventually, after the investors have taken out their money — it’s all great — then they have to start raising prices. They have to make the service worse. Suddenly, you’re not getting a car in a minute. You’re getting a car in 10 minutes. They’re telling you it’s coming in five, and it’s actually coming in 15. 
So it’s — and I think that we have some of that with AI. We’re basically having these subsidized services that are really great. At some point, that’s going to shake out. I think there’s also a way that the current model of AI is fundamentally — it’s kind of colonialism in a certain way. It’s like, we’re going to take all this value because we need it to make our business possible. So we’re going to take all the content that we need. We’re not going to compensate people. We’re going to make these marvelous new services, and therefore we deserve it. I think they’re not thinking holistically. Because this capital has bought so much market share, we’re not having that kind of process of discovery that we had in previous generations. I mean, there’s still a lot of competition and a lot of innovation, and it may work out. Ross: I’m just very interested in that point. There’s been a massive amount of capital. There’s this thesis that there is a winner-takes-most economy — so if you’re in, you have a chance of getting it all. But overlaid on that — and I think there’s almost nobody better to ask — is open source, where of course you’ve got commercial source, you’ve commercially got open source, and quite a bit in between. I’d love to hear your views on the degree to which open source will be competitive against the closed models in how it plays out coming up. Tim: I think that people have always misunderstood open source, because I don’t think that it is necessarily the availability of source code or the license. It’s what I call an architecture of participation. This is something where I kind of had a falling out with all of the license weenies back in the late ’90s and early 2000s, because — see, my first exposure to what we now call open source was with Berkeley Unix, which grew up in the shadow of the AT&T System V license. That was a proprietary license, and yet all this stuff was happening — this community, this worldwide community of people sharing code. It was because of the architecture of Unix, which allowed you to add. It was small. It was a small kernel. It was a set of utilities that all spoke the same protocol — i.e., you read and wrote ASCII into a stream, which could go into a file. There were all these really powerful concepts for network-based computing. Then, of course, the internet came along, and it also had an architecture of participation. I still remember the old battle — Netscape was the OpenAI of its day. They were going to wrest control from Microsoft, in just the same way that OpenAI now wants to wrest control from Google and be the big kahuna. The internet’s architecture of participation — it was really Apache that broke it open more than Linux, in some ways. Apache was just like, “Hey, you just download this thing, you build your own website.” But it wasn’t just that anybody could build a website. It was also that Apache itself didn’t try to Borg everything. I remember there was this point in time when everybody was saying Apache is not keeping up — Internet Information Server and Netscape Server are adding all these new features — and Apache was like, “Yeah, we’re a web server, but we have this extension layer, and all these people can add things on top of it.” It had an architecture of participation. 
The same thing happened with things like OpenOffice and the GIMP, which were like, “Okay, we’re going to do Microsoft Office, we’re going to do Photoshop.” They didn’t work, despite having the license, despite making the source code available — because they started with a big hairball of code. It didn’t have an architecture of participation. You couldn’t actually build a community around it. So I think — my question here with AI is: Where is the architecture of participation?

Ross: I would argue that it’s arXiv — as in, the degree of sharing now, where you get your Stability and your Google and your DeepSeek and everyone else just putting it out on arXiv, in real detail.

Tim: Yeah, I think that’s absolutely right. There is totally an architecture of participation in arXiv. But I think there’s also a question of models. I guess the thing I would say is yes — the fact that there are many, many models and we can build services — but we have to think about specialized models and how they cooperate. That’s why I’m pretty excited about MCP and other protocols. Because the initial idea — the winner-takes-all model — is: here we are, we’re OpenAI, you call our APIs, we’re the platform. Just like Windows was. That was literally how Microsoft became so dominant. You called the Windows API. It abstracted — it hid all the complexity of the underlying hardware. They took on a bunch of hard problems, and developers went, “Oh, it’s much easier to write my applications to the Windows API than to support 30 different devices, or 100 different devices.” It was perfect. Then Java tried to do a network version of that — remember, “Write once, run anywhere” was their slogan. And in some sense, we’re replaying that with MCP. But I want to go back to this idea I’ve been playing with — it’s an early Unix idea — and I’ve actually got a piece that I’m writing right now, and it’s about groups. Because part of an architecture of participation is: what’s the unit of participation? I’ve been thinking a lot about one of the key ideas of the Unix file system, which was that every file had, by default, a set of permissions. And I think we really need to come up with that for AI. I don’t know why people haven’t picked up on it. If you compare that to things like robots.txt and so on, there’s a pretty simple way. Let me explain for people who might not remember this. Most people who are developers or whatever will know something about this. You had a variable called umask, which you set, and it set the default permissions for every file you created. There was also a little command called chmod that would let you change the permissions. Basically, it was read, write, or execute — and it was for three levels of permission: the user, the group, and the world (everyone), right? So here we are with AI, saying, “We, OpenAI,” or “We, Grok,” or whoever, “are going to be world,” right? “We’re going to Borg everything, and you’re going to be in our world. Then you’ll depend on us.” Then some people — like Apple maybe — are saying, or even other companies are saying, “Well, we’ll give you permission to have your own little corner of the world.” That’s user. “We’ll let you own your data.” But people have forgotten the middle — which is group. If you look at the history of the last 20 years, it’s people rediscovering — and then forgetting — group. Think about what was the original promise of Twitter, or the Facebook feed. It was: I can curate a group of people that I want to follow, that I want to be part of.
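[Editorial aside: for readers who do not remember the Unix mechanics Tim describes, here is a minimal Python sketch of umask and chmod. The file name and permission values are illustrative only; the point of the analogy is that every file gets separate read/write/execute permissions at three levels: user, group, and world.]

```python
import os
import stat

# umask: set the default permission mask for newly created files.
# A mask of 0o022 removes write permission for group and world,
# so new files default to rw-r--r-- (owner can write, everyone else can read).
os.umask(0o022)

with open("notes.txt", "w") as f:  # hypothetical file
    f.write("hello\n")

# chmod: change permissions after the fact.
# Here: read/write for the user, read-only for the group, nothing for the world.
os.chmod("notes.txt", stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

# The three levels Tim mentions map onto the permission bits:
#   user  -> S_IRUSR / S_IWUSR / S_IXUSR
#   group -> S_IRGRP / S_IWGRP / S_IXGRP
#   world -> S_IROTH / S_IWOTH / S_IXOTH
```

His point is that AI data sharing today has a “world” setting and sometimes a “user” setting, but no widely adopted equivalent of “group.”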
Then they basically went, “No, no, actually that doesn’t really work for us. We’re going to actually override your group with our algorithmic suggestions.” The algorithmically generated group was a really fabulous idea. Google tried to do a manual version of that when they did — originally Buzz — and then, was it called Circles? Which was from Andy Hertzfeld, and was a great thing. But what happens? Facebook shuts it off. Twitter shuts it off. And guess what? Where is it all happening now? WhatsApp groups, Signal groups, Discord groups. People are reinventing group again and again and again. So my question for the AI community is: Where is group in your thinking? How do we define it? A group can be a company. It can be a set of people with similar beliefs. There’s a little bit of this, in the sense that — if you think Grok, the group is — even though it aspires to be the world-level — you could say Anthropic is the, let’s call it, the “woke group,” and Grok is the “right group.” But where’s the French group? The French have always been famously protective. So I guess Mistral is the French group. But how do people assert that groupness? A company is a group. So the question I have is, for example: how do we have an architecture of participation that says, “My company has valuable data that it can build services on, and your company has valuable data. How do we cooperate?” That’s again where I’m excited — at least the MCP is the beginning of that. Saying: you can make a set of MCP endpoints anywhere. It’s a lot like HTTP that way. “Oh, I call you to get the information that I want. Oh, I call you over here for this other information.” That’s a much more participatory, dynamic world than one where one big company licenses all the valuable data — or just takes all the valuable data and says, “We will have it all.” Ross: That’s one of the advantages of the agentic world — that if you have the right foundations, the governance, the security, and all of the other layers like team, payments, etc., then you can get entirely an economy of participation of agents. But I want to look back from what you were saying around groups, coming back to the company’s point around the “AI first” or “AI native,” or whatever it may be. And I think we both believe in augmenting humans. So what do you see as possible now if we look at an organization that has some great humans in it, and we’ve got AI that changes the nature of the organization? It’s not just tacking on AI to make each person more productive. I think we become creative humans-plus-AI organizations. So what does that look like at its best? What should we be aspiring to? Tim: Well, the first thing — and again, I’m just thinking out loud from my own process — the first thing is, there’s all kinds of things that we always wished we could do at O’Reilly, but we just didn’t have the resources for, right? And so that’s the first layer. The example I always use is, there are people who would like to consume our products in many parts of the world where they don’t speak English. And we always translated a subset of our content into a subset of languages. Now, with AI, we can make versions that may not be as good, but they’re good enough for many, many more people. So — vast expansion of the market there, just by going, “Okay, here’s this thing we always wished we could do, but could not afford to do.” Second is: okay, is there a new, AI-native way to do things? 
O’Reilly is a learning platform, and I’m looking a lot at — yeah, we have a bunch of corporate customers who are saying, “How do you do assessments? We need to see verified skills assessment.” In other words, test people: do they actually know this thing? And I go — wow — in an AI-native world, testing is a pretty boneheaded idea, right? Because you could just have the AI watch people. I was getting a demo from one startup who was showing me something in this territory. They had this great example where the AI was just watching someone do a set of tasks. And it said, “I noticed that you spent a lot more time and you asked a lot more questions in the section that required use of regular expressions. You should spend some time improving your skills there.” The AI can see things like that.

Then I did kind of a demo for my team. I said, “Okay, let me just show you what I think AI-native assessment looks like.” I basically found some person on GitHub with an open repository. I said, “Based on this repository, can you give me an assessment of this developer’s skills — not just the technical skills, but also how organized they are, how good they are at documentation, their communication skills?” It did a great write-up on this person just by observing the code. Then I pointed to a posted job description for an engineer working on Sora at OpenAI and said, “How good of a match is this person for that job?” And it kind of went through: “Here are all the skills that they have. Here are all the skills that they need.” And I go — this is AI-native. It’s something that we do, and we’re doing it in probably a 19th-century way — not even a 20th-century way — and you have completely new ways to do it. Now, obviously that needs to be worked on. It needs to be made reliable. But it’s what I mean by AI-native — just go figure out what the AI can do that makes something so much easier or so much better. That’s the point.

And that’s why it drives me nuts when I hear people talk about the “efficiencies” to be gained from AI. The efficiencies are there. Like, yeah — it was a heck of a lot more efficient to use a steam engine to bring the coal out of the mine than to have a bunch of people do it. Or to drive a train. I mean, yeah, there’s efficiency there. But it’s more that the capability lets you do more. So we’re in this process where we should be discovering what’s possible. In this way, I’m very influenced by a book by a guy named James Bessen. It’s called Learning by Doing, and he studied the Industrial Revolution in Lowell, Massachusetts, when they were bringing cotton mills and textile mills to New England. He basically found that the narrative — that unskilled labor had simply replaced skilled labor — wasn’t quite right. They had these skilled weavers, and then these unskilled factory workers. And he looked at pay records and found it took just as long for the new workers to become fully paid as the old workers. So they were just differently skilled. And I think “differently skilled” is a really powerful idea. And he asked: okay, why did it take so long for this to show up in productivity statistics — 20, 30 years? And he said, because you need a community. Again — this is an architectural part. You need people to fix the machines. You need people to figure out how to make them work better. So there’s this whole community of practice that’s discovering, thinking, sharing. And we’re in that ferment right now. That’s what we need to be doing — and what we are doing.
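[Editorial aside: Tim’s repository-assessment demo is straightforward to approximate with any modern LLM API. The sketch below assumes the OpenAI Python SDK’s chat-completions interface; the model name is a placeholder, and the repository summary and job description are inputs you would gather yourself.]

```python
# Minimal sketch of "AI-native" skills assessment in the spirit of Tim's demo.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment; "gpt-4o" is a placeholder model name.

from openai import OpenAI

client = OpenAI()

def complete(prompt: str) -> str:
    # Single-turn chat completion; swap in whichever model or provider you prefer.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def assess_developer(repo_summary: str) -> str:
    # Step 1 of the demo: assess skills from an open repository.
    return complete(
        "Based on this open GitHub repository, assess the developer's skills: "
        "not just technical skills, but how organized they are, how good their "
        "documentation is, and their communication skills.\n\n"
        f"Repository summary:\n{repo_summary}"
    )

def match_to_job(assessment: str, job_description: str) -> str:
    # Step 2 of the demo: compare the assessment against a posted job description.
    return complete(
        "Given this skills assessment and this job description, list the skills "
        "the candidate has, the skills they would still need, and how good a "
        "match they are overall.\n\n"
        f"Assessment:\n{assessment}\n\nJob description:\n{job_description}"
    )

# Usage: report = match_to_job(assess_developer(repo_summary), job_description)
```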
There’s this huge ferment where people are in fact discovering and sharing. And back to your question about open source — it’s really less about source code than it is about the open sharing of knowledge. Where people do that. That goes back to O’Reilly. What we do — we describe our mission as being “changing the world by spreading the knowledge of innovators.” We used to do it almost entirely through books. Then we did it through books and conferences. Now we have this online learning platform, which still includes books but has a big live training component. We’re always looking for people who know something and want to teach it to other people. Then the question is, what do people need to know now that will give them leverage, advantage, and make them — and their company — better? Ross: So just to round out, I mean, you’ve already — well, more than touched on this idea of learning. So part of it is, as you say, there are some new skills which you need to learn. There’s new capabilities. We want to go away from the old job description because we want people to evolve into how they can add value in various ways. And so, what are the ways? What are the architectures of learning? I suppose, as you say, that is a community. It’s not just about delivering content or interacting. There’s a community aspect. So what are the architectures of learning that will allow organizations to grow into what they can be as AI-native organizations? Tim: I think the architecture of learning that’s probably most important is for companies to give people freedom to explore. There’s so many ideas and so much opportunity to try things in a new way. And I worry too much that companies are looking for — they’re trying to guide the innovation top-down. I have another story that sort of goes back to — it’s kind of a fun story about open source. So, yeah, one of the top guys at Microsoft is a guy named Scott Guthrie. So Scott and one of his coworkers, Mark Anders, were engineers at Microsoft, and they had basically this idea back in the early — this is 20-plus years ago — and they basically were trying to figure out how to make Windows better fitted for the web. And they did a project by themselves over Christmas, just for the hell of it. And it spread within Microsoft. It was eventually what became ASP.NET, which was a very big Microsoft technology — I guess it was in the early 2000s. It kind of spread like an open source project, just within Microsoft — which, of course, had tens of thousands of employees. Eventually, Bill Gates heard about it and called them into his office. And they’re like, “Oh shit, we’re gonna get fired.” And he’s like, “This is great.” He elevated them, and they became a Microsoft product. But it literally grew like an open source project. And that’s what you really want to have happen. You want to have people scratching their own itch. It reminds me of another really great developer story. I was once doing a little bit of — I’d been called into a group at SAP where they wanted to get my advice on things. And they had also reached out to the Head of Developer Relations at Google. And he asked — and we were kind of trying to — I forget what the name of their technology was. And this guy asked a really perfect question. He said, “Do any of your engineers play with this after hours?” And they said, “No.” And he said, “You’re fucked. It’s not going to work.” So that — that play,  Ross: Yeah. Right? Tim: Encourage and allow that play. Let people be curious. Let them find out. Let them invent. 
And let them reinvent your business. Ross: That’s fantastic. Tim: Because that’s — that will, that will — their learning will be your learning, and their reinvention of themselves will be your reinvention. Ross: So, any final messages to everyone out there who is thick in the AI revolution? Tim: I think it’s to try to forget the overheated financing environment. You know, we talked at the very beginning about these various revolutions that I’ve seen. And the most interesting ones have always been when money was off the table. It was like — everybody had kind of given up on search when Google came along, for example. It was just like, “This is a dead end.” And it wasn’t. And open source — it was sort of like Microsoft was ruling the world and there was nothing left for developers to do. So they just went and worked on their own fun projects. Right now, everybody’s going after the main chance. And — I mean, obviously not everybody — there are people who are going out and trying to really create value. But there are too many companies — too many investors in particular — who are really trying to create financial instruments. Their model is just, “Value go up.” Versus a company that’s saying, “Yeah, we want value for our users to go up. We’re not even worried about that [financial outcome] right now.” It’s so interesting — there was a story in The Information recently about Surge AI, which didn’t raise any money from investors, actually growing faster than Scale (scale.ai), which Meta just put all this money through — because they were just focused on getting the job done. So I guess my point is: try to create value for others, and it will come to you if you do that. Ross: Absolutely agree. That’s a wonderful message to end on. So thank you so much for all of your work over the years and your leadership in helping us frame this AI as a positive boon for all of us. Tim: Right. Well, thank you very much. And it’s an amazing, fun time to be in the industry. We should all rejoice — challenging but fun. The post Tim O’Reilly on AI native organizations, architectures of participation, creating value for users, and learning by exploring (AC Ep11) appeared first on Humans + AI.
Jul 16, 2025

Jacob Taylor on collective intelligence for SDGs, interspecies money, vibe-teaming, and AI ecosystems for people and planet (AC Ep10)

“If we’re faced with problems that are moving fast and require collective solutions, then collective intelligence becomes the toolkit we need to tackle them.” – Jacob Taylor

About Jacob Taylor
Jacob Taylor is a fellow in the Center for Sustainable Development at the Brookings Institution, and a leader of its 17 Rooms initiative, which catalyzes global action for the Sustainable Development Goals. He was previously a research fellow at the Asian Bureau of Economic Research and a consulting scientist on a DARPA research program on team performance. He was a Rhodes scholar and represented Australia in Rugby 7s for a number of years.
Websites: www.brookings.edu, loyalagents.org
LinkedIn Profile: Jacob Taylor
X Profile: Jacob Taylor

What you will learn
Reimagining Team Performance Through Collective Intelligence
Using 17 Rooms to Break Down the SDGs Into Action
Building Rituals That Elevate Learning and Challenge Norms
Designing Digital Twins to Represent Communities and Ecosystems
Creating Interspecies Money for Elephants, Trees, and Gorillas
Exploring Vibe Teaming for AI-Augmented Collaboration
Envisioning a Bottom-Up AI Ecosystem for People and Planet

Episode Resources

Transcript

Ross Dawson: Jacob, it is awesome to have you on the show.

Jacob Taylor: Ross, thanks for having me.

Ross: So we met at Human Tech Week in San Francisco, where you were sharing all sorts of interesting thoughts that we’ll come back to. What are your top-of-mind reflections on the event?

Jacob: Look, I had a great week, largely because of all the great people I met, to be honest. I think what I picked up there was people really driving towards the same set of shared outcomes. People genuinely building things, talking about ways of working together that were driving at outcomes for, ultimately, human flourishing, for people and planet. And I think that’s such an important conversation to have at the moment, as things are moving so fast in AI and technology, and sometimes it’s hard to figure out where all of this is leading, basically. And so to have humans at the center is a great principle.

Ross: Yeah, well, where it’s leading is where we take it. So I think having the humans at the center is probably a pretty good starting point. One of the central themes of this podcast for ages has been collective intelligence. And so you are diving deep into applying collective intelligence to achieve the Sustainable Development Goals, and I would love to hear more about what you’re doing and how you’re going about it.

Jacob: Yeah, so I mean, very quickly, I’m an anthropologist by training. I have a background in elite team performance as a professional rugby player, and then studying professional team sport for a number of years. So my original collective is the team, and that’s kind of my intuitive starting point for some of this. Teams are very well built to solve problems that no individual can achieve alone, and really a lot of the SDG problems that we have — issues that communities at every scale have trouble solving on their own — need a whole community to tackle a problem, rather than just one individual or set of individuals within a community. So the SDGs are these types of problems — whether it’s climate action or ending extreme poverty or sustainability at the city level — all of these issues require collective solutions.
And so if we’re faced with problems that are moving fast and require collective solutions, then collective intelligence becomes the toolkit or the approach that we need to use to tackle those problems. I’ve been thinking a lot about this idea that in the second half of the 20th century, economics as a discipline went from pretty much on the margins of policymaking and influence to right at the center. By the end of the 20th century, economists were at the heart of informing how decisions were made at the country level, at firms, and so on. That was because an economic framework really helped make those decisions. I think my sense is that the problems we face now really need the toolkit of the science of collective intelligence. So that’s kind of one of the ideas I’ve been exploring—is it time for collective intelligence as a science to really inform the way we make decisions at scale, particularly for our hardest problems like the SDG. Ross: One of your initiatives—so at Brookings Institution, one of the initiatives is 17 Rooms. I’m so intrigued by the name and what that is and how that works. Jacob: Yeah. So, 17 Rooms. We have 17 Sustainable Development Goals, and so on. Five or so years ago now—or more, I think it’s been running for seven or eight years now—17 Rooms thought: what if we found a method to break down that complexity of the SDGs? A lot of people talk about the SDGs as everything connected to everything, which sometimes is true. There are a lot of interlinkages between these issues, of course. But what would it look like to actually break it down and say, let’s get into a room and tackle a slice of one SDG? So Room 1: SDG 1 for ending extreme poverty. Let’s take on a challenge that we can handle as a team. And so 17 Rooms gathers groups of experts into working groups—or short-term SWAT teams of cooperation, basically—and really gets them to think through big ideas and practical next steps for how to bend the curve on that specific SDG issue. Then there’s an opportunity for these rooms or teams to interact across issues as well. So it provides a kind of “Team of Teams” platform for multi-stakeholder collaboration within SDG issues, but also connecting across the full surface of these problems as well. Ross: So what from the science of collective intelligence—or anything else—what specific mechanisms or structures have you found useful? Are you trying to enable the collective intelligence within and across these rooms or teams? Jacob: Yeah, so I think—I mean, they’re all quite basic principles. We do a lot on trying to curate teams and also trying to run them through a process that really facilitates collaboration. But the principles are quite basic, really. I mean, one of the most fundamental principles is taking an action stance. One of the biggest principles of collective intelligence is that intelligence comes from action. This is a principle we get from biology. In biology, biology acts first and then learns on the run. So you don’t kind of sit there and go, what kind of action could we take together as a multicellular organism—rather, it just unfolds, and then learning comes off the back of that action. So in that spirit, we really try to gear our teams and rooms into an action stance, and say, rather than just kind of pointing fingers at all the different aspects of the problem, let’s say: what would it look like for us in this room to act together? And then, what could we learn from that? Trying to get into that stance is really foundational to the 17 Rooms initiative. 
And then I think the other part is really bonding or community—so knowing that action and community are two sides of the same coin. When you act together, you connect and you share ideas and information. But likewise, communities of teams that are connected are probably more motivated to act together and to be creative and think beyond just incentives. But like, what can we really achieve together? And so we try to pair those two principles together in everything that we do. Ross: So this comes back to this point—there’s many classic frameworks and realities around acting and then learning from that. So your OODA Loop, your observe, orient, decide, act, or your Lean Startup loop, or Kolb’s learning cycle, or whatever it might be, where we act, but we only learn because we have data or insight. So that’s a really interesting point—where we act, but then, particularly in a collective intelligence perspective, we have all sorts of data we need to filter and make sense of that not just individually, but collectively—in order to be able to understand how it is we change our actions to move more towards our outcomes. Do you have any structures for being able to facilitate that flow of feedback or data into those action loops? Jacob: Yeah, I think—and again, I’m very biased as an anthropologist here—so the third principle that we think about a lot, and that answers your question, is this idea of ritual. We’re acting, we’re connecting around that action, and that’s a back-and-forth process. But then rituals actually are a space where we can elevate the best ideas that are coming out of that process and also challenge the ideas that aren’t serving us. Famously across time for humans, ritual has been an opportunity both to proliferate the best behaviors of a society, but also to contest the behaviors that aren’t serving performance. Ultimately—you don’t always think about this in performance terms—but ultimately, when you look at it big picture, that’s what’s happening. So I think rituals of differentiation between the data that are serving us versus not, I think is really important for any team, organization, or community. Ross: That’s really interesting. Could you give an example of a ritual? Jacob: Well, so there are rituals that can really—like walking on hot coals. Again, let’s start anthropological, and then maybe we can get back to collective intelligence or AI. Walking on hot coals promotes behaviors of courageousness and devotion. Whereas in other settings, you have a lot of rituals that invert power structures—so men dressing up as women, women dressing up as men, or the less powerful in society being able to take on the behaviors of the powerful and vice versa. That actually calls out some of the unhelpful power asymmetries in a society and challenges those. So in that spirit, I think when we’re thinking about high-performing teams or communities tackling the SDGs, I think there needs to be more than just… I’m trying to think—how could we form a ritual de novo here? But really, there needs to be, I guess, those behaviors of honesty and vulnerability as much as celebration of what’s working. That maybe is easier to imagine in an organization, for example, and how a leader or leaders may try to really be frank about the full set of behaviors and activities that a team is doing, and how that’s working for the group. 
Ross: So you’ve written a very interesting article referring to Team Human and the design principles that support—including the use of AI—and being able to build better team performance. So what are some of the design principles? Jacob: Well, I think this work came a little bit out of some DARPA work I did on a DARPA program before coming to Brookings around building mechanisms for collective intelligence. And when you boil it down to that fundamental level, it really comes down to having a way to communicate between agents or between individuals, which in psychology is referred to—the jargon in psychology is theory of mind. So, do I have a theory of Ross—what you want—and do you have a theory of what I want? That’s basically social intelligence. It’s the basic key here. But it really comes down to some way of communicating across differences. And then with that, the other key ingredient that we surfaced when we built a computational model of this, in a basic way, was an ability to align on shared goals. So it feels like there’s some combination of social intelligence and shared goals that is foundational to any collective intelligence that emerges in teams or organizations or networks. And so trying to find ways to build those—whether that’s at the community level… For example, if a city wants to develop its waste recycling program—but if you break that down, it really is a whole bunch of neighborhoods trying to develop recycling purposes. So the question for me is: do all those neighborhoods have a way of communicating to each other about what they’re doing in service of a shared goal of, let’s say, a completely circular recycling economy at the city level? And if not, then what kind of interaction and conversations need to happen at the city level so that you can share best practices, challenge practices that are hurting everyone, and then find a way to drive collective action towards a shared outcome. But I’d also think about that, like, at the team level, where there are ways to really encourage theory of mind and perspective sharing. Ross: So, in some of that work, you refer to digital twins—essentially being able to model how people might think or behave. If you are using digital twins, how is that put into practice in being able to build better team performance? Jacob: Yeah, great. Yeah, that’s probably really where the AI piece comes in. Because that recycling-at-the-city-level example that I shared—this kind of collective intelligence happens without AI. But the promise of AI is to say, well, if you could actually store a lot of information in the form of digital twins that represented the interests and activities of, let’s say, neighborhoods in a city trying to do recycling— Well, then beyond our human cognition, you could be trying to look for patterns and opportunities for collaboration by leveraging the power of AI to recognize patterns and opportunities across diverse data sets. The idea is you could kind of try to supercharge the potential collective intelligence about problem-solving by positioning AI as a team support—or a digital twin that could say, hey, actually, if we tweak our dials here and use this approach, that could align with our neighbor’s approach, and maybe we should have a chat about it. So there’s an opportunity to surface patterns, but then also potentially perform time-relevant interventions for human decision-makers to help encourage better outcomes. 
Ross: I think you probably should try a different phrase, because “digital twin” sounds like you’ve got a person, then you’ve got a copy of that person. Whereas you’re describing it here as representing—could be a neighborhood, or it could be a stakeholder group. So it’s essentially a representation, or some kind of representation, of the ways of thinking or values of a group, potentially, or community, as opposed to an individual. Jacob: Indeed, yeah. I think this is where it all gets a bit technical, but yeah, I agree that “twin”—”digital twin”—evokes this idea of an individual body. But if you extend that out, when you really take seriously some of the collective intelligence work, it’s like intelligence collectives become intelligent when they become a full thing, like a body—when they really individuate as a collective. Teams really click and perform when they become one—so that it’s no longer just these individual bodies. It’s like the team is a body. So I think in that spirit, when I think about this, I actually think about neighborhoods having a collective identity. That could be reflected in their twin, or like, of the community. But I agree there’s maybe some better way to imagine what that kind of community AI companion looks like at higher scales. Ross: So at Human Tech Week, you shared this wonderful story about how AI could represent not just human groups, but also animal species. Love to—I think that sort of really brings it to—it gives it a very real context, because you’re understanding that from another frame. Jacob: Yeah. And I think it’s true, Ross. I’ve been struck by how much this example of interspecies money—that I’ll explain a little bit—is not only exciting because it has potential benefit for nature and the beautiful natural environment that we live in, but I think it actually helps humans understand what it could look like to do it for us too. And so, interspecies money, basically, is this idea developed by a colleague of ours at Brookings, Jonathan Ledger. We had a room devoted to this last year in 17 Rooms to try and understand how to scale it up. But what would it look like to give non-human species—like gorillas, or elephants, or trees—a digital ID and a bank account, and then use AI to reverse engineer or infer the preferences of those animals based on the way they behave? And then give them the agency to use the money in their bank account to pay for services. So if gorillas, for example, most rely on protection of their habitat, then they could pay local community actors to protect that habitat, to extend it, and to protect them from poachers, for example. That could all be inferred through behavioral trace data and AI, but then also mediated by a trustee of gorillas—a human trustee. It’s quite a futuristic idea, but it’s actually really hit the ground running. At the moment, there are pilots with gorillas in Rwanda, elephants in India, and ancient trees in Romania. So it’s kind of—the future is now, a little bit, on this stuff. I think what it really does is help you understand: if we really tried to position AI in a way that helps support our preferences and gives agency to those from the bottom up, then what? What world would that look like? And I think we could imagine the same world for ourselves. A lot of our AI systems at the moment are kind of built top-down, and we’re the users of those systems. 
What if we were able to build them bottom-up, so that at every step we were representing individual, collective, community interests — and kind of trading on those interests bottom-up?

Ross: Yeah, well, there’s a lot of talk about AI alignment, but this is, like, a pretty deep level of alignment that we’re talking about, right?

Jacob: Right. And yeah, I think Sandy Pentland, who I shared the panel with — he has this idea of, okay, so there are large language models. What would it look like to have local language models — small language models that were bounded at the individual? So Ross, you had a local language model, which was the contents of your universe of interactions, and you could perform inferences using that. And then you and I could create a one-plus-one-plus-one-equals-three kind of local language model, which was for some use case around collective intelligence. This kind of bottom-up thinking, I think, is actually technically very feasible now. We have the algorithms, the understanding of how to train these models. And we also have the compute — in devices like our mobile phones — to perform the inference. It’s really just a question of imagination, and also getting the right incentives to start building these things bottom-up.

Ross: So one of the things you’ve written about is vibe teaming. We’ve got vibe coding, we’ve got vibe-everything. You and your colleagues created vibe teaming. So what is it? What does it mean? And how do we do it?

Jacob: Good question. Yeah, so this is some work that a colleague of mine, Kirsch and Krishna, and I at Brookings did this year. We got to a point where, with our teamwork — you know, Brookings is a knowledge work organization, and we do a lot of that work in teams. A lot of the work we do is to try and build better knowledge products and strategies for the SDGs and these types of big global challenges. The irony was, when we were thinking about how to build AI tools into our workflow, we were using a very old-school way of teaming to do that work. We were using this kind of old industrial model of sequential back-and-forth workflows to think about AI — when AI was probably one of the most, potentially the most, disruptive technologies of the 21st century. It just felt very ironic. To do a PowerPoint deck, Ross, you would give me the instructions. I would go away and draft it. I would take it back to you and say, “Is this right?” And you would say, “Yes, but not quite.” So instead, we said, “Wait a minute. The internet is blowing up around vibe coding,” which is basically breaking down that sequential cycle. Instead of individuals talking to a model with line-by-line syntax, they’re giving the model the vibe of what they want. We’re using AI as this partner in surfacing what it is we’re actually trying to do in the first place. So Kirsch and I said, “Why don’t we vibe team this?” Why don’t we get together with some of these challenges and experts that we’re working with and actually get them to tell us the vibe of what they’ve been learning? Homi Kharas is a world expert — a 40-year expert — on ending extreme poverty. We sat down with him, and in 30 minutes, we really pushed him to give us, like: “Tell us what you really think about this issue. What’s the really hard stuff that not enough people know about? Why isn’t it working already?” These kinds of questions. We used that 30-minute transcript as a first draft input to the model.
And in 90 minutes, through interaction with AI—and some human at the end to make sure it all looked right and was accurate—we created a global strategy to end extreme poverty. That was probably on par with anything that you see—and probably better, in fact, than many global actors whose main business is to end extreme poverty. So it’s an interesting example of how AI can be a really powerful support to team-based knowledge work. Ross: Yeah, so just—I mean, obviously, this is you. You are—the whole nature of the vibe is that there’s no explicit, well, no specific, replicable structure. We’re going with the vibes. But where can you see this going in terms of getting a group of complementary experts together, and what might that look like as the AI-augmented vibe teaming? Jacob: Well, I mean, you’re right. There was a lot of vibe involved, and I think that’s part of the excitement for a lot of people using these new tools. However, we did see a few steps that kept re-emerging. I’ve mentioned a few of them kind of implicitly here, but the big one—step one—was to really start with rich human-to-human input as a first step. So giving the model a 30-minute transcript of human conversation versus sparse prompts was a real game changer for us working with these models. It’s almost like, if you really set the bar high and rich, then the model will meet you there—if that makes sense. Step two was quickly turning around a first draft product with the model. Step three was then actually being patient and open to a conversation back and forth with the model. So not thinking that this is just a one-button-done thing, but instead, this is a kind of conversation—interaction with the model. “Okay, so that’s good there, but we need to change this.” “Your voice is becoming a little bit too sycophantic. Can you be a bit more critical?” Or whatever you need to do to engage with the model there. And then, I think the final piece was really the need to go back and meet again together as a team to sense-check the outputs, and really run a rigorous human filter back over the outputs to make sure that this was not only accurate but analytically on point. This idea that sometimes AI looks good but smells bad—and with these outputs, sometimes we’d find that it’s like, “Oh, that kind of looks good,” but then when you dig into it, it’s like, “Wait a minute. This wasn’t quite right here and there.” So just making sure that it not only looks good but smells good too at the end. Yeah. And so I think these basic principles—we’re seeing them work quite well in a knowledge work context. And I guess for us now, we’re really interested in a two-barrel investigation with approaches like vibe teaming. On the one hand, it’s really about the process and the how—like, how are we positioning these tools to support collaboration, creativity, flow in teamwork, and is that possible? So it’s really a “how” question. And then the other question for us is a full “what.” So what are we pointing these approaches at? For example, we’re wondering—if it’s ending extreme poverty, how could we use vibe teaming to actually… And Scott Page uses this term—how can we use it to expand the physics of collective intelligence? How can we run multiple vibe teaming sessions all at once to be much more inclusive of the types of people who participate in policy strategy formation? So that when you think about ending extreme poverty, it’s ending it for whom? What do they want? What does it look like in local communities, for example? 
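[Editorial aside: the four steps Jacob describes can be sketched as a simple loop. This is a minimal illustration, not the Brookings team’s actual tooling; the complete function stands in for whichever model you call, and the transcript and critiques come from the human team.]

```python
from typing import Callable

def vibe_team_draft(
    complete: Callable[[str], str],  # any single-turn LLM call you supply
    transcript: str,                 # rich human-to-human input, e.g. a 30-minute expert conversation
    critiques: list[str],            # the team's feedback across rounds of refinement
) -> str:
    # Step 1: seed the model with rich human conversation, not a sparse prompt.
    draft = complete(
        "Using this expert conversation as raw material, draft a first-cut "
        "strategy document that captures what the participants really think:\n\n"
        + transcript
    )
    # Steps 2-3: take the quick first draft, then iterate patiently with feedback
    # ("that's good there, but change this", "be less sycophantic, more critical").
    for critique in critiques:
        draft = complete(
            "Revise the draft below in light of this feedback from the team: "
            + critique + "\n\n" + draft
        )
    # Step 4 happens outside the code: the team meets again to sense-check the
    # output, making sure it not only looks good but "smells good" analytically.
    return draft
```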
That idea of expanding the physics of collective intelligence through AI and approaches like vibe teaming is very much on our minds at the moment, as we think about next steps and scale-up. Ross: Obviously, the name of the podcast is Humans Plus AI, and I think what you’re describing there is very much the best of humans—and using AI as a complement to draw out the best of that. Nice segue—you just sort of referred to “where next steps.” You’ve described a lot of the wonderful things you’re doing—some fantastic approaches to very, very critically important issues. So where to from here? What’s the potential? What are the things we need to be doing? What’s the next phase of what you think could be possible and what we should be doing? Jacob: Yeah, I think I’m really excited about this idea of growing an alternate AI ecosystem that works for people and planet, rather than the other way around. Part of the work at Brookings is really setting up that agenda—that research agenda—for what that ecosystem could look like. We discussed it a little bit together at Human Tech Week. I think of that in three parts. There’s the technical foundation—so down to the algorithms and the architectures of AI models—and thinking about how to design and build those in a way that works for people. That includes, for example, social intelligence built into the code. Another example there is around, in a world of AI agents—are agents working for humans, or are they working for companies? Sandy Pentland’s work on loyal agents, for example—which maybe we could link to afterward—I think is a great example of how to design agents that are fiduciaries for humans, and actors for humans first, and then others later. Then, approaches like vibe teaming—ways of bringing communities together using AI as an amplifier. And then I think the key piece, for me, is how to stitch the community of actors together around these efforts. So the tech builders, the entrepreneurs, the investors, the policymakers—how to bring them together around a common format. That’s where I’m thinking about a few ideas. One way to try to get people excited about it might be this idea of not just talking about it in policy terms or going around to conferences. But what would it look like to actually bring together a lab or some kind of frontier research and experimentation effort—where people could come together and build the shared assets, protocols, and infrastructures that we need to scale up great things like interspecies money, or vibe teaming, or other approaches? Where, if we had collective intelligence as a kind of scientific backbone to these efforts, we could build an evidence base and let the evidence base inform new approaches—trying to get that flywheel going in a rigorous way. Trying to be as inclusive as possible—working on everything from mental health and human flourishing through to population-level collective intelligence and everything in between. Ross: So can you paint that vision just a little bit more precisely? What would that look like, or what might it look like? What’s one possible manifestation of it? What’s the— Jacob: Yeah, I mean, it’s a good question. So this idea of a frontier experimental lab—I think maybe I’m a little bit informed by my work at DARPA. I worked on a DARPA program called ASSIST—AI, I mean, Artificial Social Intelligence for Successful Teams—and that really used this kind of team science approach, where you had 12 different scientific labs all coming together for a moonshot-type effort. 
There was that kind of idea of, we don’t really know how to work together exactly, but we’re going to figure it out. And in the process of shooting for the moon, we’re hopefully going to build all these shared assets and knowledge around how to do this type of work better. So I guess, in my mind, it’s kind of like: could we create a moonshot for collective intelligence, where collective intelligence is really the engine—and the goal was trying to, for example, end extreme poverty, or reach some scale of ecosystem conservation globally through interspecies money? Or—pick your SDG issue. Could we do a collective intelligence moonshot for that issue? And in that process, what could we build together in terms of shared assets and infrastructure that would last beyond that one moonshot, and equip us with the ingredients we need to do other moonshots? Ross: Yeah, well, again, going back to the feedback loops—of what you learn from the action in order to be able to inform and improve your actions beyond that. Jacob: Exactly, yeah. And I think the key ingredients here are really taking seriously what we’ve built now in terms of collective intelligence. It is a really powerful, transdisciplinary scientific infrastructure. And I think taking that really seriously, and drawing on the collective intelligence of that community to inform, to create evidence and theories that can inform applications. And then running that around. I think what I discovered at Human Tech Week with you, Ross, is this idea that there’s a lot of entrepreneurial energy—and also capital as well. I think a lot of investors really want to put their money where their mouths are on these issues. So it feels like it’s not just kind of an academic project anymore. It’s really something that could go beyond that. So that’s kind of time for collective intelligence. We need to get these communities and constituencies working together and build a federation of folks who are all interested in a similar outcome. Ross: Yeah, yeah. The potential is extraordinary. And so, you know, there’s a lot going on—not all of it good—these days, but there’s a lot of potential for us to work together. And again, there’s amplifying positive intent, which is part of what I was sharing at Human Tech Week. I was saying, what is our intention? How can we amplify that positive intention, which is obviously what you are doing in spades. So how can people find out more about your work and everything which you’ve been talking about? Jacob: Well, most of my work is on my expert page on Brookings. I’m here at the Center for Sustainable Development at Brookings, and I hope I’ll be putting out more ideas on these topics in the coming months. I’ll be mainly on LinkedIn, sharing those around too. Ross: Fantastic. Love what you’re doing. Yeah—and yeah, it’s fun. It’s fantastic. So really, really glad you’re doing that. Thank you for sharing, and hopefully there’s some inspiration in there for some of our listeners to follow similar paths. Jacob: Thanks, Ross. I appreciate your time. This has been fun. The post Jacob Taylor on collective intelligence for SDGs, interspecies money, vibe-teaming, and AI ecosystems for people and planet (AC Ep10) appeared first on Humans + AI.
Jul 9, 2025 • 12min

AI & The Future of Strategy (AC Ep9)

“Strategy really must focus on those purely human capabilities of synthesis, and judgment, and sense-making.” – Ross Dawson About Ross Dawson Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload. Website: Ross Dawson Advanced Human Technologies LinkedIn Profile: Ross Dawson Books Thriving on Overload Living Networks 20th Anniversary Edition Implementing Enterprise 2.0 Developing Knowledge-Based Client Relationships What you will learn How AI is reshaping strategic decision-making The accelerating need for flexible leadership Why trust is the new competitive advantage The balance between human insight and machine analysis Storytelling as the heart of effective strategy Building learning-driven, adaptive organizations The evolving role of leaders in an AI-first world Episode Resources Transcript Ross Dawson: This is a little bit of a different episode. Instead of an interview, I will be sharing a few thoughts in the context of now doubling down on the Humans Plus AI theme. Our community is kicking off the next level. As you may have noticed, the podcast has been rebranded Humans Plus AI, and it is now fully focused on this theme of how AI can augment humans—individuals, organizations, and society. So what I want to share today is some of the thoughts which came out of Human Tech Week. I was fortunate to be at Human Tech Week in San Francisco a few weeks ago. I did the opening keynote on Infinite Potential: Humans Plus AI, and I’ll share some more thoughts on that another time. But what I also did was run a lunch event, a panel with myself, John Hagel, and Charlene Li, talking about AI and the future of strategy. It was an amazing conversation, and I can’t do it justice now, but what I want to do is share some of the high-level themes that came out of that conversation—bringing, of course, my own particular slant to them. So we started off by thinking about how change generally, including AI, is impacting strategy and the strategy process. Fairly obviously, we have accelerating change. That means that decision cycles are getting shorter, and strategy needs to move faster. It also means that creation of all kinds can be democratized within, across, and beyond organizations, allowing people to innovate and act without necessarily being centralized. And this idea of an abundance of knowledge, coupled with a scarcity of insight, means that strategy really must focus on those purely human capabilities of synthesis, and judgment, and sense-making. There’s also a theme that institutional trust is eroding. This means that more and more, strategy shifts to relationship-based models, ecosystem-based models. And there is an overarching theme, which John Hagel in particular brought out, that there is greater fear amongst leaders. There’s greater emotional pressure, and these basically shrink the timeline of our thinking. They force us into shorter-term thinking. We act based on fear—of a whole variety of pressures from shareholders, stakeholders, politicians, and more. 
We need to allow ourselves to move beyond the fear, as John’s latest book The Journey Beyond Fear lays out—highly recommended—which then enables us to unlock our strategic imagination and new ways of thinking. So one of the core themes of the conversation was around the relative roles of AI and humans in the strategy process. Humans are strategic thinkers by their very nature, and now we have AI which can support us and complement us in various ways. Of course, AI is strong at working with data. It can do a lot of analysis. It is very capable at pattern recognition. It can move faster. It can simulate scenarios and futures, identify signals, and so it can scale what can be done in strategy analysis. It can go deeper into the analysis. But this elevates the human role to the higher levels: the creativity, the imagination, the judgment, the ethical framing, the purpose, the vision, the values. One of the key things which came out of it was around storytelling, where strategy is a story. It’s not this whole array of KPIs and routes to get to them—that’s a little part of it. It is telling a story that engages people, that makes them passionate about what they want to do and how they are going to do it—that’s their heroes’ and heroines’ journey. So this insight, this sense-making, is still human. There’s a wonderful quote from the session, saying, “AI without data is extremely stupid,” but even with the data, it can’t deliver the insight or the wisdom on its own. That is something where the human function resides. And so we are still responsible for the oversight and for the ethical nature of the decisions. Especially as we have more and more autonomous agents, we have very opaque systems. And accountability is fundamental to all leadership and to the nature of strategy. So a leader’s role is to bring together the ways in which we bring in AI—deciding when to trust it, deciding when to override it, and how to frame its contribution. That’s an intrinsic part of strategy: the role of AI in the what and the how—how the organization functions, and how it establishes and communicates direction. There was also a lot of discussion around the tensions. And again, John shared this wonderful frame he’s been using for a while about “zoom out and zoom in.” Essentially, he says that real leaders—the most successful organizations—have a compelling 10- or 20-year vision, and they also have plans for the next six to twelve months, and they don’t have much in between. So you can zoom out to see the massive scale of: Why do we exist? What are we trying to create? But you also zoom in to say: all right, what is it we’re doing right now to create momentum and move towards that? And so this dual framing is emotionally resonant. It shifts people from fear to hope by letting them see the aspiration and also see progress today. And so there are these polarities that we manage in strategy. We’re balancing focus with flexibility. We need to be clearly guided in where we are going, so we need coherence. We need to know what we are doing, and we need to focus our resources. And so this balance—being flexible enough to adapt to situations while maintaining continuity in moving forward—is fundamental. 
One of the fundamental themes that came out of the conversation, which comes back to some of my core themes from a very long time ago, is this idea of knowledge and trust. AI is widely accessible. Everyone’s got it in various guises. So where does competitive advantage reside? Fundamentally, it is from trust. And it is trust in the AI: trust in how the AI is used, trust in the intentions, and ultimately trust in the people who have shaped the systems and used the systems well. This means that as you create long-term, trust-based relationships, you get more and more advantages. And this comes back to my first book on Knowledge-Based Client Relationships, which I’ve extended and applied in quite a variety of domains, including in my recent work on AI-driven business model innovation. We’re essentially saying that in an AI-driven world, trust in the systems means you can have access to more data and more insight from people and organizations, which you can apply in building this virtuous circle of differentiation. You add value, you gain trust, you get insight from that, flowing through into more value. So ultimately, this is about passion—what John calls the passion of the explorer, where we are committed to learning and questioning and creating value. I suppose that, in a way, the key theme that ran through the entire conversation was around learning, where learning is not about how we run workshops, or how we take these bodies of knowledge and get everybody to know them. It is about continuous exploration of the new. Every successful organization needs to harness and enable people inside the organization to be passionate about what they are learning, to explore, to learn from their exploration, and to share that—building sustainable, scalable learning, which is what a fast-moving world demands, where we can have a consistent strategy based around that learning, which enables us to continue to both have direction and be flexible and adaptable in an accelerating world. So that just touches on some of the themes which we discussed in the session, and I will continue to share and write some more—what I call mini reports—just to frame some of these ideas. But the reality is that the nature of strategy is changing. This means the nature of leadership is changing, and we need to understand and dig into the changing nature of strategy—where AI plays a role, how that shifts human roles, how leadership changes. These are fundamental to our success, not just as individual organizations, but also as industries and society at large. Because our strategies, of course, must support not just individual entities or organizations, but the entire ecosystems and communities and societies in which they are embedded. So we’ll come back. We’ve got some amazing guests coming up, so make sure to tune in for the next episodes. Please continue to engage. Get onto Humans Plus AI, sign up for our newsletter, and we’ll see you on the journey. The post AI & The Future of Strategy (AC Ep9) appeared first on Humans + AI.
Jun 25, 2025 • 34min

Matt Lewis on augmenting brain capital, AI for mental health, neurotechnology, and dealing in hope (AC Ep8)

“The big picture is that every human on Earth deserves to live a life worth living… free of mental strife, physical strife, and the strife of war.” – Matt Lewis About Matt Lewis Matt Lewis is CEO, Founder, and Chief Augmented Intelligence Officer of LLMental, a public benefit limited liability corporation venture studio focused on augmenting brain capital. He was previously Chief AI Officer at Inizio Health, and contributes in many roles, including as a member of OpenAI’s Executive Forum, Gartner’s Peer Select AI Community, and faculty at the World Economic Forum’s New Champions initiative. Website: Matt Lewis LinkedIn Profile: Matt Lewis What you will learn Using AI to support brain health and mental well-being Redefining mental health with lived experience leadership The promise and danger of generative AI in loneliness Bridging neuroscience and precision medicine Citizen data science and the future of care Unlocking human potential through brain capital Shifting from scarcity mindset to abundance thinking Episode Resources Transcript Ross Dawson: Matt, it’s awesome to have you on the show. Matt Lewis: Thank you so much for having me. Ross, it’s a real pleasure and honor. And thank you to everyone that’s watching, listening, learning. I’m so happy to be here with all of you. Ross: So you are focusing on using AI, amongst other technologies, to increase brain capital. So what does that mean? Matt: Yeah. I mean, it’s a great question, and it’s, I think, the challenge of our time, perhaps our generation, if you will. I’ve been in artificial intelligence for 18 years, which is like an eon in the current environment. I built my first machine learning model about 18 years ago for Parkinson’s disease, a degenerative condition where people lose the ability to control their body as they wish they could. I was working at Boehringer Ingelheim at the time, and we had a drug, a dopamine agonist, to help people regain function. But some small number of people developed this weird side effect, this adverse event that didn’t appear in clinical trials, where they became addicted to all sorts of compulsive behaviors that made their actual lives miserable. They became shopping addicts, or they became compulsive gamblers. They developed proclivities to sexual behaviors that they didn’t have before they were on our drug, and no one could quite figure out why they had these weird things happening to them. And even though they were seeing the top academic neurologists in this country, the United States, or other countries, no one could say why Ross would get this adverse event and Matt wouldn’t. It didn’t appear in the studies, and there was no way to figure it out. The only thing that really sussed out what was an adverse event versus what wasn’t was advanced statistical regression and, later, machine learning. But back in the day, almost 20 years ago, you needed massive compute, massive servers—like on trucks—to be able to ship these types of considerations to actually improve clinical outcomes. Now, thankfully, the ability to provide practical innovation in the form of AI to help improve people’s actual lives through brain health is much more accessible and democratizable, in a way that wasn’t available then. 
And if it first appeared for motor symptoms, for neurodegenerative disease, some time ago, now we can use AI to help not just the neurodegenerative side of the spectrum but also neuropsychiatric illness, mental illness—to help identify people that are at risk for cognition challenges. Here in Manhattan, it’s like 97 degrees today. People don’t think the way they normally do when it’s 75. They make decisions that they perhaps wish they hadn’t, and a lot of the globe is facing similar challenges. So if we can partner with AI to make better decisions, everyone’s better off. That construct—where we think differently, we make better decisions, we are mentally well, and we use our brains the way they were intended—all those things together are brain capital. And by doing that broadly and consistently, we’re better off as a society. Ross: Fantastic. So in that case, you’re looking at machine learning—essentially being able to pull out patterns: patterns between environmental factors, drugs used, background, other genetic data, and so on. So is this, then, alluding, I suppose, to precision medicine, and being able to identify for individuals what the right pharmaceutical regimes are, and so on? Matt: Yeah. I mean, I think the idea of precision medicine, personalized medicine, is very appealing. I think it’s a very early, maybe even embryonic, kind of consideration in the neuroscience space. I worked for a long time for companies like Roche and Genentech, and others in that ecosystem, doing personalized medicine with biomarkers for oncology, for cancer care—where you knew a specific target, an enzyme, a protein that was mutated, and there was a degradation, and you identified which enzyme was a bit remiss. Then you tried to build a companion diagnostic to find the signal, if you will, and then help people that were suffering. It’s a little bit more—almost at risk of saying—straightforward in that regard, because if the patient had the biomarker, you knew that the drug would work. Unfortunately, I think there’s a common kind of misconception—I know you know this exceptionally well, but there are people out there, perhaps listening, that don’t know it as well—that the state of cognitive neuroscience, that is, what we know of the brain, how the brain works, and how it works in the actual world in which we live, on planet Earth and terra firma, is probably about as far advanced as the state of the heart was when Jesus Christ walked the Earth about 2,000 years ago. That is, we probably have about 100 years of knowledge about how the brain truly works in the world, and we’re making decisions about how to engineer personalized medicine for a very, very young, nascent science called the brain—with almost no real, practical, contextual understanding of how it really works in the world. So I think personalized medicine has tremendous possible promise. The reality of it doesn’t really pan out so well. There are a couple of recent examples of this from companies like Neumora, Alto Neuroscience, and the rest, where they try to build these kinds of ex post facto precision medicine databases of people that have benefited from certain psychiatric medicines. But they end up not being as beneficial as you’d like them to be, because we just don’t really know a lot about how the brain actually works in the real world. 
There is even still the brain-and-mind debate—but even if you move past that debate, I think it’s hard to find many people building in the space who recognize contextual variables beyond the brain and mind. That includes things like the biopsychosocial continuum, the understanding of spirituality and nature, and all the rest. All these things are moving and changing and dynamic, at a constant equilibrium. And to try to find a point solution that says Matt or Ross is going to benefit at this one juncture, and it’s going to change things right now—it’s just exceptionally difficult. Important, but exceptionally difficult. So I think the focus is more about how we show up in the real world today, using AI to actually help our actual life be meaningful and beneficial, rather than trying to find this holy grail solution that’s going to be personalized to each person in 2026. I’m not very optimistic about that, but maybe by 2036 we’ll get a little closer. Ross: Yeah. So, I mean, I guess, as you say, a lot of what people talk about with precision medicine is specific biomarkers and so on, that you can use to understand when particular drugs would be relevant. But back to the point where you’re starting with this idea of using machine learning to pick up patterns—does this mean you can perhaps be far more comprehensive in seeing the whole person in their context, environment, background, and behaviors, and so on, to be able to understand what interventions will make sense for that individual, and all of the whole array of patterns that the person manifests? Matt: Yeah, I think it’s a great question. I think the data science and the kind of health science of understanding what might be called the enactive psychiatry of the person—how they make meaning in the world—is just now starting to catch up with reality. When I did my master’s thesis 21 years ago in health services research, there were people trying to figure out: if you were working in the world, how do we understand, when you’re suffering with a particular illness, what it means to you? It might mean to the policy wonks that your productivity loss is X, or your quality-adjusted life years are minus Y. Or to your employer, that you can’t function as much as you used to function. But to you—does it really matter to you that your symptom burden is A or Z? Or does it really matter to you that you can’t sleep at night? If you can’t sleep at night, for most people, that’s really annoying. And if you can’t sleep at night six, seven, ten nights in a row, it’s catastrophic, because you almost can’t function. Whereas on the quality score, it doesn’t even register—it’s like a rounding error. So between the patient-reported outcomes—what matters to real people—and what matters to the decision-makers, there’s a lot of daylight, and there has been for a long time. In the neuropsychiatric, mental health, brain health space, it’s starting to catch up, for I think a couple of reasons. One is the lived experience movement. I chair the One Mind Community Advisory Network here in the States, which is a group of about 40 lived experience experts with deep subject matter expertise, all of whom suffer from neuropsychiatric illness, neurodivergence, and the rest. These are people that suffer daily but have turned their pain into purpose. 
The industry at large has seen that in order to build solutions for people suffering from different conditions, you need to co-create with those people. I mean, this seems intuitive to me, but for many years—for almost all the years, 100 years—most solutions were designed by engineers, designed by scientists, designed by clinicians, without patients at the table. When you build something for someone without the person there, you get really pretty apps and software and drugs that often don’t work. Now, having the people actually represented at the table, you get much better solutions that hopefully actually have both efficacy in the lab and effectiveness in the real world. The other big thing I think that’s changing a lot is that people have more of a “citizen data scientist” kind of approach. Because we’re used to things like our Apple Watch, and our iPads, and our iPhones, and we’re just in the world with data being in front of us all the time, there’s more sensitivity, specificity, and demand for visibility around data in our life. This didn’t exist 20 years ago. So just to be in an environment where your mental health, your brain health, is being handed to you on a delivery, if you will—and not to get some kind of feedback on how well it’s working—20 years ago, people were like, “Okay, yeah, that makes sense. I’m taking an Excedrin for my migraine. If it doesn’t work, I’m clearing to take a different medicine.” But now, if you get something and you don’t get feedback on how well it’s working, the person or organization supporting it isn’t doing their job. There’s more of an imprimatur, if you will, of expectation on juxtaposing that data analytics discipline, so that people understand whether they’re making progress, what good looks like, are they benchmarking against some kind of expectation—and then, what the leaderboard looks like. How is Ross doing, versus how Matt’s doing, versus what the gold standard looks like, and all the rest. This didn’t exist a generation ago, but now there’s more to it. Ross: That’s really interesting. This rise of citizen science is not just giving us data, but it’s also the attitude of people—that this is a normal thing to do: to participate, to get data about themselves, to share that back, to have context. That’s actually a really strong positive feedback loop to be able to develop better things. So I think, as well as this idea of simply just getting the patients at the table—so we’ve talked quite a bit, I suppose, from this context of machine learning—of course, generative AI has come along. So, first of all, just a big picture: what are the opportunities from generative AI for assisting mental well-being? Matt: Yeah. I mean, first of all, I am definitely a technophile. But that notwithstanding, I will say that no technology is either all good or all bad. I think it’s in the eyes of the beholder—or the wielder, if you will. I’ve seen some horrific use cases of generative AI that really put a fear into my heart. But I’ve also seen some amazing implementations that people have used that give me a tremendous amount of hope about the near and far future in brain health broadly, and in mental health specifically. Just one practical example: in the United States and a lot of the English-speaking countries—the UK, New Zealand, and Australia—there is a loneliness epidemic. When I say loneliness, I don’t mean people that are alone, that either choose to be alone or live lives that are alone. 
I actually mean people that have a lower quality of life and are lonely, and as a result, they die earlier and they have more comorbid illness. It’s a problem that needs to be solved. In these cases, there are a number of either point solutions that are designed specifically using generative AI or just purpose-built generative AI applications that can act both as a companion and as a thought partner for people who are challenged in their contextual environment. They act in ways where they don’t have other access or resources, and in those times of need, AI can get them to a place where they either catalyze consideration to get back into an environment that they recall being useful at an earlier point. For example, they find an interest in something that they found utility in earlier—like playing chess, or playing a card game, a strategy game, or getting back to dancing or some other “silly” thing that to them isn’t silly, but might be silly to a listener. And because they rekindle this interest, they go and find an in-person way of reigniting with a community in the environment. The generative AI platform or application catalyzes that connection. There are a number of examples like that, and the AI utility case is nearly free. The use of it is zero cost for the person, but it prevents them from slipping down the slippery slope of an actual DSM-5 psychiatric illness—like depression or anxiety—and becoming much, much worse. They’re kind of rescued by AI, if you will, and they become closer to healthy and well because they either find a temporary pro-social kind of companion or they actually socialize and interact with other humans. I have seen some kind of scary use cases recently where people who are also isolated—I won’t use the word lonely—don’t have proper access to clinicians. In many places around the world, there is a significant shortage of licensed professionals trained in mental health and mental illness. In many of these cases, when people don’t have a diagnosed illness or they have a latent personality disorder, they have other challenges coming to the fore and they rely on generative AI for directional implementation. They do something as opposed to think something, and it can rapidly spiral out of control—especially when people are using GPTs or purpose-built models that reinforce vicious cycles or feedback loops that are negatively reinforcing. I’ve seen some examples, due to some of the work I do in the lived experience community, where people have these built-in cognitive biases around certain tendencies, and they’ll build a GPT that reinforces those tendencies. What starts out as a harmless comment from someone in their network—like a boyfriend, employee, or neighbor—suddenly becomes the millionth example of something that’s terrible. The GPT reinforces that belief. All of a sudden, this person is isolated from the world because they’ve cut off relationships with everyone in their entire circle—not because they really believe those things, but because their GPT has counseled them that they should do these things. They don’t have anyone else to talk to, and they believe they should do them, and they actually carry those things out. I’ve seen a couple of examples like this that are truly terrifying. We do some work in the not-for-profit space trying to provide safe harbors and appropriate places for care—where people have considerations of self-harm, where a platform might indicate that someone is at risk of suicide or other considerations. 
We try to provide a place where people can go to say, “Is this really what you’re thinking?” If so, there’s a number to call—988—or a clinician you can reach out to. But I think it’s like all technologies: you can use a car to drive to the grocery store. You could also use the same car to run someone over. We have to really think about what in the technology is innate, what comes from the user, and what it was really meant to do. Ross: Yeah. Well, it’s a fraught topic now, as in there are, as you say, some really negative cases. The commercial models, with their tendency toward sycophancy and encouraging people to continue using them, start to get into all these negative spirals. We do have, of course, some clinically designed tools—generative AI tools to assist—but not everybody uses those. One of the other factors, of course, is that not everybody even has the finances, or the funding isn’t available to provide clinicians for everybody. So it’s a bit fraught. I go back to 15 years ago, I guess—Paro, the robot seal in Japan—which was a very cute, cuddly robot given to people with neurodegenerative diseases. They came out of their shell, often. They started to interact more with other people just through this little robot. But as you say, there is the potential for these not to be just substitutes. Many people rail against it—“Oh, we can’t substitute real human connection with AI”—and real human connection is obviously what we want. But AI can actually help re-engage people with human connection—in the best circumstances. Matt: Yeah. I mean, listen, if I was having this discussion with almost any other human on planet Earth, Ross, I would probably take that bait and we could progress it. But I’m not going to pick that up with you, because no one knows this topic—of what humans can, should, and will potentially do in the future—better than you, than any other human. So I’m not going to take that. But let me comment on one little thing on the mental health side. The other thing that I think people often overlook is that, in addition to being a tool, generative AI is also a transformative force. The best analogy I have comes from a friend of mine, Conor Grennan, who’s one of the top AI experts globally. He’s the Chief AI Architect at NYU here in New York City. He says that AI is like electricity in this regard: you can electrify things, you can build an electrical grid, but it’s also a catalyst for major advances in the economy, and it helps power forward industry at large. I think generative AI is exactly like that. There are point solutions built off generative AI, but also—especially in scientific research and in the fields of neurotechnology, neuroscience, cognition, and psychology—the advances in the field have progressed more in the last three years post–generative AI, post–ChatGPT, than in the previous 30 years. And what’s coming—and I’ve seen this in National Academy of Medicine presentations, NIH, UK ARIA, and other forums—what’s coming in the next couple of years will leapfrog even that. It’s for a couple of reasons. I’m sure you’re familiar with this saying: back in the early 2000s, there was a saying in the data science community, “The best type of machine learning is no machine learning.” That phrase referred to the fact that it was so expensive to build a machine learning model, and it worked so infrequently, that it was almost never recommended. It was a fool’s errand to build the thing, because it was so expensive and worked so rarely. 
When I used to present at conferences on the models we would build, people always asked the same questions: What was the drift? How resilient was the model? How did we productionize it? How was it actually going to work? And it was—frankly—kind of annoying, because I didn’t know if it was going to work myself. We were just kind of hoping that it would. Now, over the last couple of years, no one asks those questions. Now people ask questions like: “Are robots going to take my job?” “How am I going to pay my mortgage?” “Are we going to be in the bread lines in three years?” “Are there going to be mass riots?” That’s what people ask about now. The conversation has shifted over the last five years from “Will it work?” to “It works too well. What does it mean for me—for my human self?” “How am I going to be relevant in the future?” I think the reason for that is that it went from being a kind of tactical tool to being a transformative force. In the scientific research community, what’s really accelerating is our ability to make sense of a number of data points that, up until very recently, people saw as unrelated—but that are actually integrated, part of the same pattern. This is leading to major advances that, up until recently, could not have been achieved. One of those is in neuroelectronics. I’m very excited by some of the advances in neurotechnology, for example—and we have an equity interest in a firm in this space. Implantable brain technology is one major area where the treatment of mental illness can advance. AI is both helping to decipher the language of communication from a neuroplasticity standpoint, and making it possible for researchers and clinicians to communicate with the implant in your brain when you’re not in the clinic. So you can go about your regular life—you go to work, you play baseball, you do anything during your day—and because of AI, monitoring the implant in your brain is no different from having a continuous glucose monitor or taking a pill. The advances in AI are tremendous—not just for using ChatGPT to write a job description—but for allowing things like bioelectronic medicine to exist and be in the clinic four or five years from now. Whereas, 40 years ago, it would have been considered magic to do things like that. Ross: So, pulling this back, I’d like to come back to where we started. Before we started recording, we were chatting about the big picture of brain capital. So I just want to think about this idea of brain capital. What are the dimensions to it? What are the ways in which we can increase it? What are the potential positive impacts? What is the big picture around this idea of brain capital? Matt: Yeah. I mean, the big picture is that every human on Earth deserves to live a life worth living. It’s really that simple. Every person on planet Earth deserves to have a life that they enjoy, that they find to be meaningful and happy, and in which they can live their purpose—every person, regardless of who they’re born to, their religion, their race, their creed, their region. And they should be free of strife—mental strife, physical strife, and the strife of war. For some reason, we can’t seem to get out of these cycles over the last 100,000 years. The thesis of brain capital is that the major reason that’s been the case is that a sixth of the world’s population currently has mental illness—diagnosed or undiagnosed. 
About a quarter of the world’s population is living under what the World Health Organization calls a “brain haze” or “brain fog.” We have a kind of collective sense of cognitive impairment, where we know what we should do, but we don’t do it—either because we don’t think it’s right, or there are cultural norms that limit our ability to actually progress forward. And then the balance of people are still living with a kind of caveman mindset. We came out of the caves 40,000–60,000 years ago, and now we have iPhones and generative AI, but our emotions are still shaped by this feeling of scarcity—this deficit mindset, where it feels like we’re never going to have the next meal, we’re never going to have enough resources. It’s like less is more all the time. But actually, right around the corner is a mindset of abundance. And if you operate with an abundance mindset, and believe—as Einstein said—that everything is a miracle, the world starts responding appropriately. But if you act like nothing is a miracle, and that it’s never going to be enough, that’s the world through your eyes. So the brain capital thesis is: everyone is mentally well, everyone is doing what’s in the best collective interest of society, and everyone is able to see the world as a world of abundance—and therefore, a life worth living. Ross: That is awesome. No, that’s really, really well put. So, how do we do it? What are the steps we need to take to move towards that? Matt: Yeah. I mean, I think we’re already walking the path. I think there are communities—like the ones that we’ve been together on, Ross—and others that are coming together to try to identify the ways of working, and putting resources and energy and attention to some of these challenges. Some of these things are kind of old ideas in new titles, if you will. And there are a number of trajectories and considerations that are progressing under new forms as well. I think one of the biggest things is that we really need both courage to try new ways of working, and also—to use a Napoleon expression—Napoleon said that a leader’s job is to be a dealer in hope. We really need to give people the courage to see that the future is brighter than the past, and that nothing is impossible. So our considerations in the brain capital standpoint are that we need to set these moonshot goals that are realistic—achievable if we put resources in the right place. I’ve heard folks from the World Economic Forum, World Health Organization, and others say things like: by this time next decade—by the mid-2030s—we need to cure global mental illness completely. No mental illness for anyone. By 2037–2038, we need to prevent brain health disorders like Alzheimer’s, Parkinson’s, dystonia, essential tremor, epilepsy, etc. And people say things like, “That’s not possible,” but when you think about other major chronic illnesses—like Hepatitis C or breast cancer—when I was a kid, either of those things were death sentences. Now, they’re chronic illnesses or they don’t exist at all. So we can do them. But we have to choose to do them, and start putting resources against solving these problems, instead of just saying, “It can’t be done.” Ross: Yeah, absolutely. So, you’ve got a venture in this space. I’d love to round out by hearing about what you are doing—with you and your colleagues. Matt: So, we’re not building anything—we’re helping others build. And that’s kind of a lesson learned from experience. To use another quote that I love—it’s a Gandhi quote—which is, “I never lose. 
I only win or I learn.” So we tried our hand at digital mental health for a time, and found that we were better advisors and consultants and mentors and coaches than we were direct builders ourselves. But we have a firm. It’s the first AI-native venture studio for brain capital, and we work with visionary entrepreneurs, CEOs, startups—really those that are building brain capital firms. So think: mental illness, mental health, brain health, executive function, mindset, corporate learning, corporate training—that type of thing. Where they have breakthrough ideas, they have funding, but they need consideration to kind of help scale to the ecosystem. We wrap around them like a halo and help support their consideration in the broader marketplace. We’re really focused on these three things: mental health, mindset, and mental skills. There are 12 of us in the firm. We also do a fair amount of public speaking—workshops, customer conferences, hackathons. The conference we were just at last week in San Francisco was part of our work. And then we advise some other groups, like not-for-profits and the government. Ross: Fantastic. So, what do you hope to see happen in the next five to ten years in this space? Matt: Yeah, I’m really optimistic, honestly. I know it’s a very tumultuous time externally, and a lot of people are suffering. I try to give back as much as possible. We, as an organization, we’re a public benefit corporation, so we give 10% of all our revenue to charity. And I volunteer at least a day a month directly in the community. I do know that a lot of people are having a very difficult time at present. I do feel very optimistic about our mid- and long-term future. I think we’re in a very difficult transition period right now because of AI, the global economic environment, and the rest. But I’m hopeful that come the early 2030s, human potential broadly will be optimized, and many fewer people on this planet will be suffering than are suffering at present. And hopefully by this time next decade, we’ll be multi-planetary, and we’ll be starting to focus our resources on things that matter. I remember there was a quote I read maybe six or seven years ago—something like: “The best minds of our generation are trying to get people to click on ads on Facebook.” When you think about what people were doing 60 years ago—we were building the space shuttle to the moon. The same types of people that would get people to click on ads on Meta are now trying to get people to like things on LinkedIn. It’s just not a good use of resources. I’ve seen similar commentary from the Israeli Defense Forces. They talk about all the useless lives wasted on wars and terrorism. You could think about not fighting these battles and start thinking about other ways of helping humanity. There’s so much progress and potential and promise when we start solving problems and start looking outward, if you will. Ross: Yeah. You’re existing in the world that is pushing things further down that course. So where can people find out more about your work? Matt: Right now, LinkedIn is probably the best way. We’re in the midst of a merger of equals between my original firm, Elemental, and my business partner John Nelson’s firm, John Nelson Advisors. By Labor Day (U.S.), we’ll be back out in the world as iLIVD—i, L, I, V, D—with a new website and clout room and all the rest. But it’s the same focus: AI-native venture studio for brain health—just twice the people, twice the energy, and all the consideration. 
So we’re looking forward to continuing to serve the community and progressing forward. Ross: No, it’s fantastic. Matt, you are a force for positive change, and it’s fantastic to see not just, obviously, the underlying attitude, but what you’re doing. So, fantastic. Thank you so much for your time and everything you’re doing. Thank you again. Matt: Thank you again Ross, I really appreciate you having me on, and always a pleasure speaking with you. The post Matt Lewis on augmenting brain capital, AI for mental health, neurotechnology, and dealing in hope (AC Ep8) appeared first on Humans + AI.
Jun 18, 2025 • 34min

Amir Barsoum on AI transforming services, pricing innovation, improving healthcare workflows, and accelerating prosperity (AC Ep7)

“Successful AI ventures are those that truly understand the technology but also place real human impact at the center — it’s about creating solutions that improve lives and drive meaningful change.” – Amir Barsoum About Amir Barsoum Amir Barsoum is Founder & CEO of InVitro Capital, a venture studio that builds and funds companies at the intersection of AI and human-intensive industries, with four companies and over 150 professionals. He was previously founder of leading digital health platform Vezeeta and held senior roles at McKinsey and AstraZeneca. Website: InVitro Capital LinkedIn Profile: Amir Barsoum X profile: Amir Barsoum What you will learn Understanding the future of AI investment Exploring the human impact of technology Insights from a leading AI venture capitalist Balancing risk and opportunity in startups The evolving relationship between humans and machines Strategies for successful AI entrepreneurship Unlocking innovation through visionary thinking Episode Resources Transcript Ross Dawson: Amir, it’s wonderful to have you on the show. Amir Barsoum: Same here, Ross. Thank you for the invite. Ross: So you are an investor in fast-moving and growing companies. And AI has come along and changed the landscape. So, from a very big picture, what do you see? And how is this changing the opportunity landscape? Amir: So, actually, we’re InVitro Capital. We started because we saw the opportunity of AI—we started with this sort of move in mind. And a big part of the reason we started is that we think the service industry—think about healthcare and home repair, even some service providers today—is going to be hugely disrupted by AI. Whether that is automation and replacement as one bucket, or augmentation as another bucket, or at least facilitation. And we’ve seen a huge opportunity that we can build on. We can build AI technology that could do the service. Instead of being a software-as-a-service provider, we basically build the service provider itself. So that’s what excites us about what we’re trying to do and what we’re building. Ross: So what’s the origin of the name InVitro Capital? Does this mean test tubes? Amir: So, I think it originates from there. I think the idea is we’re building companies under controlled conditions. It’s kind of the in vitro—in vitro fertilization, like IVF. We keep on building more companies under these controlled conditions. That’s the idea, and because we come from a healthcare background, it kind of resonated. Ross: All right, that makes sense. So, there’s a lot of talk going around—SaaS is dead. So this kind of idea—you talk about services and the way services are changing. And so that’s—yeah, absolutely—service delivery, whether that’s service by humans or service by computers, whatever the nature of that, is changing. So does this mean that we are fundamentally restructuring the nature of what a service is and how it is delivered? Amir: I think, yes. I think between the service industry and the software industry, both of them are seeing a categorical change in how they’re going to be provided to users. And, I mean, the change is massive. I’m not sure about the word “dead,” but we’re definitely seeing a huge, huge change. Think about it from a service perspective, from a software perspective. In software, I used to sell software to a company. The company needs people to be smart enough, educated enough, trained enough to use the software and get value out of it. 
They used to be called systems of record with some tasks, but really it’s a system of record that holds a lot of records, and then somebody—some employee—sits there and does the job. In services, it’s kind of: you think this is going to be very difficult, so you get somebody as an outsourcer to do the service for you. Think about it: I’m going to go and hire someone who’s going to help us do marketing content, or someone who would even do legal—and I’m going to the extreme. And I think both are seeing categorical change. The software and the employee, both together, could become one, or at least 80% of the job could be done now by AI technologies. And the service—the same thing. So we’re definitely seeing a massive change in these aspects. And that’s true whether we’re talking legal or content marketing—all of them. Ross: I’d like to dig into that a little bit more. But actually, just one question is around pricing. Are you looking at or exploring ways in which fee structures or pricing of services change? I mean, classically, where services involved humans, there was some kind of correlation to the cost of the human plus a margin. Now there is AI, which often takes on an increasing proportion of how the service is delivered. And there are different perceptions, where clients and customers think, “Oh, you must be able to do it really cheap now, so give it to me cheaper.” Or maybe there’s more value created. So are you thinking about how fees and pricing models change for services? Amir: I have a strong concept when it comes to pricing and innovation. Think about ride-hailing when it was introduced in the market. It was introduced at a price-competitive advantage compared to the yellow cab, right? Yes, you can come in with many other benefits, like security and safety, but the reality is that when you want to hit the mass market, you need to play on pricing. And I think that’s the beauty of innovation. And I think AI as a technology, with its very, very wide use cases, is going to make every single thing around us significantly cheaper. Let’s take the same ride-hailing example. If you introduce self-driving ride-hailing, you are literally taking almost 70% of today’s cost off the table. So if you’re not going to introduce a significantly cheaper price, I don’t think it’s going to find the mass market. So that’s the absolute level of pricing—how to think of pricing. Then I would split this into two categories. We tried going in and basically saying, “You know what, if you hire a person, it will cost you X. Hire an AI person, and it will cost you Y.” I find this not working very well. Whereas we’re seeing the pay-as-you-go model as the easier way, the more comprehensible way. So, if you think about SaaS pricing and you think about service pricing—and that’s a continuum—I think we’re somewhere in the middle. And I think the best is a bit closer to the SaaS pricing, but more of a “you use more, you pay more” kind of game, rather than feature-based pricing. So consumption-based pricing, but less related to the FTE. Because, for example, when you say, “We have a recruiter, an AI recruiter,” and you say it’s $800 a month instead of paying for a full-time human recruiter who is $7,000 a month—then what is the capacity of this AI recruiter? Is it equal to the human recruiter? Or is it unlimited capacity? So we find this is not working really, really well. What works is really usage-based pricing, not feature-based. Ross: Right. 
So moving away from necessarily time-based subscription to some kind of consumption— Amir: Consumption-based, yeah. Yeah, you could time it. You can time it a little bit as to timing, but really, it’s a consumption-based. Ross: Yeah, and there is also the new models like outcome-based, such as Sierra, where they price—if you get an outcome, where you get a customer resolution, you pay. If you don’t, then you don’t. Amir: Which is—this one is actually—so we have a company that we’re going to put in the market that is related. That is related to AI GTM—so AI go-to-market solution. We’re going to go with the model of, “You know what? Pay when you get.” Which I think is a very, very interesting model. It’s a super good and easy way to acquire customers. But you also need them to be a little bit engaged in some input so you can do a great job for them. But if they haven’t paid, then you’re going to find it—the engagement component—I think the funnel drops a little bit there. We haven’t fixed that yet, but I think somehow it mixes between the consumption-based, but very, very small, and then more of the pay-per-outcome. I think this would be the fascinating solution. Ross: Yeah, yeah. Well, it’s certainly evolving fast, and we’ll experiment, see what works, and work on that. Amir: So I’ll tell you about it. I’ll tell you what’s going to happen.  Ross: So you have your healthcare background, you’re InVitro Capital, you are investing in the healthcare sector. So I’d like to sort of just pull back to the big picture. So there’s a human-plus-AI perspective. And thinking more—there are humans involved, there’s AI, which is now complementing us. Of course, there’s been many things in healthcare where AI can do things better and more effectively than people—just doing things which are not inspiring. And there’s also a lot of abilities to complement AI in how we deliver services, the quality of those services, and so on. So best you can, just sort of take a big picture and say, what are that—where in this flow of healthcare do you see opportunities for humans and AI to do things better, faster, more effectively than they’ve done before? Amir: So, healthcare—because the technicalities of the technical component of healthcare—is a very sensitive topic. When you start getting into the clinical decisions of it, it’s a very sensitive topic. But in reality, healthcare is written in books, right? Especially the non-intervention healthcare. You think about the primary care—most of the non-intervention is written in books. And the LLM models know them. And even with many other data models you have—and even the big healthcare systems—they have tons of this data. So you can actually today go straight away with some of the clinical solutions. You know, if you take a picture now as a consumer of something on your skin and put it on, it can kind of give you a very, very good answer. But is this something that we think is ready to be commercialized and go to market? The answer is: no, not today. But we’re seeing, on the other side, every single thing until the clinical decisions is seeing massive, massive augmentation. And we think about it from a patient journey perspective. And in the patient journey, there are anchors as well. You can see it with the provider side, but see the accessibility component—where can patients access healthcare? And I mean having the conversation and the scheduling and the difficulty of the scheduling and getting third parties involved. 
And this is not a typical administrative task—there are some medical people who used to be involved in this. So, for example, the patient can’t see the diagnostic center unless you get an approval. But when you try to get an approval from the insurance firm, the insurance firm declines. So you need to get one more comment here, one more writing here, to get the insurance firm to approve. Can you do these kinds of things—which is not the billing part, it’s still accessibility? And we are—we’re seeing AI technology playing a significant role in this. Take it to the next step: billing, for example, which is really getting the provider to be paid for this visit, and maybe start diverting what is the copay and what is not. A lot of people are involved in this, and we’re seeing massive, massive implementation in that space, and workflow automation in that space as well. Ross: So just coming back to that. So this idea of workflow, I think, is really critical. Where you mentioned this idea of approvals. And so, yes, part of it is just flow—okay, this thing is done, it then goes on to the next person, machine, whatever system or organization. But this comes back to these decisions where essentially AI can provide recommendations, AI can basically be automated with some kind of oversight. There may be humans involved with providing input. So in healthcare—particularly pointed—I mean, I suppose even in that patient experience process. So how are you thinking about the ways in which AI is playing a role at decision points, in terms of how we shift from what might have been humans making decisions to now what are either AI-delegated or AI-with-humans-in-the-loop or any other configuration? So how are we configuring and setting up the governance and the structures so these can be both efficient and effective? Amir: In very simple terms, there are the workflows and there are the AI workflows, which are very different—very different from how technology is designed and built, and very, very different in their outcome. I think every single thing that we tried to do before in healthcare using workflows was, at best, not working. It even could look nice, but it just didn’t work. That’s the fact. Because you start building algorithms and start putting rules in your code—if this happens, do that; if this happens, do that—you never cover enough rules that would make it really solid. And if you do, then the system collapses. I think now we are at the stage where there are data models that you keep on indicating—this data model on whether this worked or not, the satisfaction level of the patient, whether this ended up in crazy billing and payment or not, whether this ended up in actually losing money for the provider or not losing money for the provider, the amount of time that has been lost, whether we have utilized the provider’s time or not—which is the most expensive component until today. We talk AI, but still, we need healthcare providers. So there, you build these data models that make the AI take decisions on: shall I book Amir tomorrow at 2 p.m., or I’d rather book Amir the day after tomorrow? There are many, many data points that need to be considered in this intervention—Amir’s timeline, the doctor’s timeline, availability. These are the easy parts. 
But the not-easy part is what the data models tell us—what makes the AI think like a human, on its feet, and say, “You know what, I would book Ross tomorrow, but Amir the day after tomorrow,” because of the tons and tons of things: utilization, expectation, how much time you’re going to take—leveraging a history of data about what could work. And the more you move into the billing component—and by the way, I know most people in healthcare think more about the clinical decisions—but in reality, healthcare is decided by payment and billing. These are the two biggest points, right? Ross: So one of the interesting things here—I guess, pointing to the fact—we’ve got far more data than ever before, and hopefully we’re able to do things like measure emotional responses and so on, which is important if we’re going to build experience. I mean, just massive things that can feed into models. But one of the points is that healthcare is multiparty. There’s a whole wealth of different players involved. And there are some analogies to supply chain, except supply chains are far easier than healthcare. You have multiple players, and you have data from many different participants. And there is value in optimizing across the entire system, but you’ve got segregated data and then segregated optimization algorithms. And in fact, if you optimize within different segments of the entire piece, then the whole may end up worse off than it was before. So do you have a vision for how we can get to, I suppose, healthcare-wide optimization based on data and effective AI? Amir: That’s a very, very, very good question, honestly—and quite deep. So, in healthcare there are the payers, the insurance—the guys that pay the money. There are the healthcare providers—think about healthcare providers as entities, the organizations, and healthcare providers as individuals—the doctors, the nurses, the pharmacists, right? And then there’s the patient. And there’s the employer. So there are all of these components together. And we have seen attempts at creating vertical integration in healthcare in the past—a payer buying hospitals, buying clinics—thinking that this is going to be cheaper for them, and it is. But it has been slow, because it’s very difficult to run a complete insurance firm and a healthcare provider—hiring doctors and managing workflows and payroll and the quality of patient care, and making sure that the patient liked your hospital or your clinic so they come again, and are not leaving or walking away. And then what we are seeing is—there’s a very well-known concept in AI, which is the flattening of the org structure. I think we’re going to see this in healthcare. It becomes easier to do this vertical integration—the clinics, the scan centers, the pharmacists, the hospitals. It’s becoming way easier with time, by basically automating, using AI, and augmenting what we do today—and shrinking, kind of running it together. I think we’re going to see this more and more in the future. Ross: The existing players start to do roll-ups or consolidate. That becomes quite capital intensive in terms of being able to build this vertical integration. So either you build it out, or you buy it. Amir: Or you build it out without being super capital intensive, because you’re using tech—again, you don’t need to be as capital intensive as you used to be before.
You know, for example, the working capital of people involved is going to be significantly less than what you used to see before. I’m talking less about the hospitals at this stage, but I think the outpatient setup will definitely see this. I’ll give you an example. In the pharmacy business, we have automated—not fully automated, but augmented. In the pharmacy business, think about it: it’s a small factory. You get the prescription, somebody needs to type the prescription—we call them typists. Then somebody needs to fill it, and then a pharmacist needs to check it. So we’ve automated many, many of those steps, even using some machines up to the filling component. Then the pharmacist shows up and does the checking. So the working capital is shrinking, the process is becoming leaner and leaner, and hence it’s way more efficient. Ross: So let’s pull back to the big picture of how AI impacts work, society, investments—everything. What macro thoughts do you have around some of the big shifts—and how we can make these as positive as possible? Amir: So I will give you my answer from what I’m seeing at the intersection of AI and services, because we are doing a lot of work in that space. I think many, many of the jobs are going to vanish and cease to exist. But also, very interestingly, we’re seeing a massive uptake in EdTech, where people are jumping in to elevate their skill sets. And I think the time is there. It’s not a crazy, gloomy picture. I think there’s some time for them to actually get that and fill up the space. The level of shortage we’re seeing in healthcare is unheard of, and we are aging as a population. And the reality of the matter is—we need those people. So I need fewer people working in reception and the billing department, and I want more people who can provide some level of the healthcare component. And we’re seeing this happening. Think about home repair. I need fewer people who do administrative work, and I need more electricians, and I need more plumbers. And I think we’re seeing more people jumping into the EdTech components, getting this preparation for the exams and the tests to elevate themselves into these roles. And I think AI is definitely accelerating the elimination of jobs, but also accelerating access to education so that you can capture the new job. And we’re definitely seeing these two pieces happening, I would say, at a very, very fast pace. So that’s one thing we’re seeing. From an investment perspective, we look at investment in three categories. Category A: Investing in foundational models like the LLMs. And I think foundation models—and I would say the whole foundation layer of AI—there’s the foundation model, then the infrastructure game there. I think it’s a very, very interesting space. It’s the power law game—it applies very strongly. And I think the choice there is, I would say, the biggest factor. And obviously access at the right time. So this is Category A. Category B: The application layer. And in the application layer, I personally believe—and I think there’s a lot of belief forming around this—we’re seeing less of the power law at work. We’re not expecting people to exit at $10 billion in that space. I would say there’s a democratization of the application layer. And I think the best play there is how you can build in a very cost-efficient manner, so that the return on capital is as good as people expected. And that’s where we operate, actually, as a venture studio and a venture builder.
Category C: What’s going to happen in the businesses, the mom-and-pop shops, and the street businesses in the service industry. And I think for this category and the second category, we’re going to see a lot of merging together—roll-ups between the second category and the third category. Either the big ones of the third buy some of the second, or the big ones of the second buy some of the third. We’re seeing this—even Silicon Valley is starting to talk about roll-ups in the VC space and the like. So that’s how we think about it. Ross: So actually, I want to dig into the venture studio piece. All right, so you’ve got a venture studio, you’re running a bunch of things in parallel. You’re doing that as efficiently as possible. So what? How? Right? What are your practices? How do you amplify your ability to build multiple ventures simultaneously, given all of the extraordinary technologies? What are the insights which are really driving your ability to scale faster than ever before? Amir: So, usually, when we try to scale, we think about whether there is recyclability of customers. That’s the first thing we think about. If you look at our first deck, there was recyclability of customers and recyclability of technology. Honestly, if we talk about recyclability of technology now, it’s a joke. You know, it’s going to take, what, a month to build? So we took this out. Really, it’s distribution. And this has become, again—think about the application layer—distribution is the most difficult thing to capture, because everybody will come and tell you, “I’m a sales AI, I’m a sales AI.” Okay, I’ve got 20 emails about sales AI, 20 emails about AI recruiters. And distribution is a very big component. So the recyclability of customers is a very big part. The second part is availability of data, because if you don’t build your own data models and train your own solutions to create a very, very unique quality of product, the product won’t be good enough for the expectations. Because today, consumer and business expectations when you say AI are super high—they think it’s going to produce the same value as a consumer using ChatGPT, which in most cases doesn’t happen unless you have a very unique data model that helps fix a very unique solution. And again, think about the diagnostics that we’re doing in the home repair space. We collected millions and millions of pictures and images, and we even keep training our model ourselves. So we do the service, and we make sure that we can start providing feedback, and then we feed back to the system so that we can start creating these data models that make sense. Otherwise, the solution is not as great. And if you think about the solution that we launched in the market three months ago, I would say it was bad at best. Now, I would say it’s significantly better. And I still think that we have a way to go, adding more and more data to what we are building to fix that. So, recyclability of customers is a big thing, and availability of data—I would say these are the two big components that, if we find them, we say, “That is something to do.” I’m also not going to recite all the clichés, you know, find the pain in the market. I think that’s standard. Ross: Yeah, yeah, that’s not new. That’s all. Amir: Yeah, yeah. Ross: Fabulous. That’s, that’s really interesting.
So to round out, I mean, what’s most exciting to you about the potential of humans plus AI—around where we can go from here? Amir: Oh, I’m, I’m very excited to see the—so I will say something, and I’m not sure how contrarian it is. I think we’re going to see the quality of products and services around us at a totally different level. Totally different level. I think our generation has been living in prosperity significantly better than the previous generation. A very, very rich man 100 years ago lived a far worse life than a poor man who lives today, right? If you compare the two—take this one and take that one—you see the level of comfort, the day-to-day work, and so on. But it took 100 years to see that major, major difference. I think we’re going to see this now in much shorter periods—I would say 10 years. And that’s the positive and the good part of it. But it also comes with a little bit of a scary belief: okay, what’s going to happen tomorrow? Am I fast enough as an investor, as a human being? What are our kids going to do? So I think these questions also pop up and make us think about it. But I’m quite excited, generally, about how the quality of life is going to move significantly on an upward trajectory. Hopefully, we, as humans, are going to mitigate the risks that we see potentially coming—security risks, cyber security risks, and tons of others. Ross: So where can people go to find out more about your work in ventures? Amir: InVitroCapital.com. I’m on LinkedIn. Just type the name “Amir Barsoum” and you’re going to find me there. Our team is on LinkedIn too. All of these are the best ways to reach out. Ross: Fantastic. Thank you for your time and your insights, Amir. Amir: Ross, that was great. Thank you very much for the deep questions. The post Amir Barsoum on AI transforming services, pricing innovation, improving healthcare workflows, and accelerating prosperity (AC Ep7) appeared first on Humans + AI.
Jun 4, 2025 • 34min

Minyang Jiang on AI augmentation, transcending constraints, fostering creativity, and the levers of AI strategy (AC Ep6)

“What are the goals I really want to attain professionally and personally? I’m going to really keep my eye on that. And how do I make sure that I use AI in a way that’s going to help me get there—and also not use it in a way that doesn’t help me get there?” – Minyang Jiang (MJ) About Minyang Jiang (MJ) Minyang Jiang (MJ) is Chief Strategy Officer at business lending firm Credibly, leading and implementing the company’s growth strategy. Previously she held a range of leadership positions at Ford Motor Company, most recently as founder and CEO of GoRide Health, a mobility startup within Ford. Website: Minyang “MJ” Jiang LinkedIn Profile: Minyang “MJ” Jiang What you will learn Using AI to overcome human constraints Redefining productivity through augmentation Nurturing curiosity in the modern workplace Building trust in an AI-first strategy The role of imagination in future planning Why leaders must engage with AI hands-on Separating the product from the person Episode Resources Transcript Ross Dawson: MJ, it’s a delight to have you on the show. Minyang “MJ” Jiang: I’m so excited to be here, Ross. Ross: So I gather that you believe that we can be more than we are. So how do we do that? MJ: Absolutely. I’m an eternal optimist, so I’m always—I’m a big believer in technology’s ability to help enable humans to be more if we’re thoughtful with it. Ross: So where do we start? MJ: Well, we can start maybe by thinking through some of the use cases where I think AI, and in particular generative AI, can help humans, right? I come from a business alternative financing perspective, but my background is in business, and I think there’s been a lot of sort of fear and maybe trepidation around what it’s going to do in this space. But my personal understanding is, I don’t know of a single business that is not constrained, right? Employees always have too much to do. There are things they don’t like to do. There are capacity issues. So for me, already, there are three very clear use cases where I think AI and generative AI can help humans augment what they do. So number one is, if you have any capacity constraints, that is a great place to be deploying AI, because already we’re not delivering a good experience. And so any ability for you to free up constraints, whether it’s volume or being able to reach more people—especially if you’re already resource-constrained (I argue every business is resource-constrained)—that’s a great use case, right? The second thing is working on a use case where you are already really good at something, and you’re repeating this task over and over, so there’s no originality. You’re not really learning from it anymore, but you’re expected to do it because it’s an expected part of your work, and it delivers value, but it’s not something that you, as a human, are learning or gaining from. So if you can use AI to free up that part, then I think it’s wonderful, right? So that you can actually then free up your bandwidth to do more interesting things and to actually problem-solve and deploy critical thinking. And then I think the third case is just, there are types of work out there that are just incredibly monotonous and also require you to spend a lot of time thinking through things that are of little value, but again, need to be done, right? So that’s also a great place where you can displace some of the drudgery and the monotony associated with certain tasks.
So those are three things already that I’m using in my professional life, and I would encourage others to use in order to augment what they do. Ross: So that’s fantastic. I think the focus on constraints is particularly important because people don’t actually recognize it, but we’ve got constraints on all sides, and there’s so much which we can free up. MJ: Yes, I mean, I think everybody knows, right? You’re constrained in terms of energy, you’re constrained in terms of time and budget and bandwidth, and we’re constrained all the time. So using AI in a way that helps you free up your own constraints so that it allows you to ask bigger and better questions—it doesn’t displace curiosity. And I think a curious mind is one of the best assets that humans have. So being able to explore bigger things, and think about new problems and more complicated problems. And I see that at work all the time, where people are then creating new use cases, right? And it just sort of compounds. I think there are new kinds of growth and opportunities that come with that, as well as freeing up constraints. Ross: I think that’s critically important. Everyone says when you go to a motivational keynote, they say, “Curiosity, be curious,” and so on. But I think we, in a way, have been sort of shunted away from that. The way work works is: just do your job. It doesn’t train us to be curious. So if, let’s say, we get to a job or workplace where we can say—we’re in a position of work where you can say—all right, well, all the routine stuff, all the monotony, we’ve done. Your job is to be curious. How do we help people get to that thing of taking the blinkers off and opening up and exploring? MJ: I mean, I think that would be an amazing future to live in, right? I mean, I think that if you can live in a world where you are asked to think—where you’re at the entry level, you’re asked to really use critical thinking and to be able to build things faster and come up with creative solutions using these technologies as assistance—wouldn’t that be a better future for us all? And actually, I personally would argue and believe that curiosity is going to be in high demand, way higher demand in the future than it’s been, because there is this element of spontaneous—like spontaneous thinking—which AI is not capable of right now, that humans are capable of. And you see that in sort of—even sort of personal interactions, right? A lot of people use these tools as a way to validate and continue to reinforce how they think. But we all know the best friendships and the best conversations come from being called out and being challenged and discovering new things about yourself and the topic. And that same sentiment works professionally. I think curiosity is going to be in high demand, and it’s going to be a sort of place of entry in terms of critical thinking, because those are the people that can use these tools to their best advantage, to come up with new opportunities and also solve new problems. Ross: So I think, I mean, there is this—I say—I think those who are curious will, as you say, be highly valued, be able to create a lot of value. But I think there are many other people that have latent curiosity, as in, they would be curious if they got there, but they have been trained through school and university and their job to just get on with the job and study for the exam and things like that. So how do we nurture curiosity in a workplace, or around us, or within?
MJ: I mean, I think this is where you do have this very powerful tool that is chat-based, for the most part, and that doesn’t require super technical skills to access. At least today, the accessibility of AI is powerful, and it’s very democratizing. You can be an artist now if you have these impulses but never got the training. Or you can be a better writer. You can come up with ideas. You can be a better entrepreneur. You can be a better speaker. It doesn’t mean you don’t have to put in the work—because I still think you have to put in the work—but it allows people to evolve their identity and what they’re good at. What it’s going to do, in my mind, rather than these big words like displacement or replacement, is it’s going to just increase and enhance competition. There’s a great Wharton professor, Stefano Puntoni, who talked about photography before the age of digital photography—where people had to really work on making sure that the shutter speed was correct, that you had the right aperture, and then you were in the darkroom, developing things. But once you had digital photography, a lot of people could do those things. So we got more photographers, right? We actually got more people who were enamored with the art and could actually do it. And so some of that, I think, is going to happen—there’s going to be a layering and proliferation of skills, and it’s going to create additional competition. But it’s also going to create new identities around: what does it mean to be creative? What does it mean to be an artist? What does it mean to be a good writer? In my mind, those are going to be higher levels of performance. I think everyone having access to these tools now can start experimenting, and companies should be encouraging their employees to explore their new skills. You may have someone who is a programmer who is actually really creative on the side and would have been a really good graphic artist if they had the training. So allowing that person to experiment and demonstrate their fluidity, and building in time to pursue these additional skill sets to bring them back to the company—I think a lot of people will surprise you. Ross: I think that’s fantastic. And as you say, we’re all multidimensional. Whatever skills we develop, we always have many other facets to ourselves. And I think in this world, which is far more complex and interrelated, expressing and developing these multiple skills gives us more—it allows us to be more curious, enabling us to find more things. Many large firms are actively trying to find people who are poets or artists or things on the side. And as you say, perhaps we can get to workplaces where, using these tools, we can accelerate the expansion of the breadth of who we are to be able to bring that back and apply that to our work. MJ: I mean, I’ve always been a very big fan of the human brain, right? I think the brain is just a wonderful thing. We don’t really understand it. It truly is infinite. I mean, it’s incredible what the brain is capable of. We know we can unlock more of its potential. We know that we don’t even come close to fully utilizing it. So now having these tools that sort of mimic reasoning, they mimic logic, and they can help you unlock other skills and also give you this potential by freeing up these constraints—I think we’re just at the beginning of that. But a lot of the people I work with, who are working with AI, are very positive on what it’s done for their lives.
In particular, you see the elevated thinking, and you see people challenging themselves, and you see people collaborating and coming up with new ideas in volume—rewriting entire poorly written training manuals, because no one reads those, and they’re terrible. And frankly, they’re very difficult to write. So being able to do that in a poetic and explicable way, without contradictions—I mean, even that in itself is a great use case, because it serves so many other new people you’re bringing into the company, if you’re using these manuals to train them. Ross: So you’ve worked on gen AI—gen AI projects in the workplace—put this into practice, sort of. So I’d love to hear just, off the top of your head, what are some of the lessons learned as you did that? MJ: Yeah, we’ve been deploying a lot of models and working with our employee base to put them into production. We also encourage innovation at a very distributed level. The biggest thing I will tell you is—change management. For me, the important part is in the management, right? Change—everybody wants change. Everyone can see the future, and I have a lot to say about what that means. But people want change, and it’s the management of change that’s really difficult. That requires thought leadership. So when companies are coming out with this AI-first strategy, or organizations are adopting AI and saying “we are AI-first,” for me the most important lesson is strategically clarifying for employees what that means. That actually isn’t the first thing we did. We actually started doing and working and learning—and then had to backtrack and be like, “Oh, we should have a point of view on this,” right? Because it’s not the first thing. The first thing is just like, “Let’s just work on this. This is fun. Let’s just do it.” But having a vision around what AI-first means, and acknowledging and having deep respect for the complexities around that vision—because you are touching people, right? You’re touching people’s sense of self-worth. You’re touching their identities. You’re touching how they do work today and how they’re going to do work three to five years from now. So laying that out and recognizing that we don’t know everything right now—but we have to be able to imagine what different futures look like—that’s important. Because a lot of the things I see people talking about today, in my view, are a failure of the imagination. It’s pinning down one scenario and saying, “This is the future we’re going to march towards. We don’t love that future, but we think it’s inevitable.” As leaders—it’s not inevitable. So doing the due diligence of saying, “Let me think through and spend some time really understanding how this affects my people, and how I can get them to a place where they are augmented and they feel confident in who they are with these new tools”—which are disruptive—that’s the hard work. But that is the work I expect thought leadership and leaders to be doing. Ross: Yes, absolutely right. And I think this—as you say—this sense of the inevitable is deeply dangerous at best. And as you say, any way of thinking about the future, we must create scenarios—partly because there are massive uncertainties, and perhaps even more importantly, because we can create the future. There are no inevitabilities here. So what does that look like? Imagination comes first if we are building the company of the future. So how do we do that? Do we sit down with whiteboards and Post-it notes? What is that process of imagining?
MJ: There’s so many ways to do it, right? I mean, again—I took a class with a Wharton professor, Scott Snyder. He talked about “future-back” scenario planning, which is basically: First, I think you talk to many different people. You want to bring in as many diverse perspectives as possible. If you’re an engineer, you talk to artists. If you’re a product person, you talk to finance people. You really want to harness everyone’s different perspectives. And I think, along with the technology, there’s one thing that people should be doing. They should first of all think about defining—for your own function or your own department—what does it mean to be literate, proficient, and a master at AI? What are the skill sets you’re going to potentially need? Then it’s really up to every company. I myself created a strategic framework where I can say, “Okay, I think there’s a spectrum of use cases all the way from a lot of automation to AI being simply an assistant.” And I ask different people and functions in the company to start binning together what they’re doing and placing them along this spectrum. Then I would say: you do this many times. You write stuff down. You say, “Okay, perhaps I’m wrong. Let’s come up with an alternate version of this.” There are several levers that I think a lot of people could probably identify with respect to their industry. In my industry, one of the most important is going to be trust. Another one is going to be regulation. Another one is going to be customer expectation. So when I lay out these levers, I start to move them to the right and left. Then I say, “Well, if trust goes down in AI and regulations go up, my world is going to look very different in terms of what things can be automated and where humans come in.” If trust goes up and regulations go down, then we have some really interesting things that can happen. Once you lay out multiple of these different kinds of scenarios, the thing you want to look for is: what would you do the same in each one of these scenarios? Would you invest in your employees today with respect to AI? And the answer is always yes—across every single scenario. You will never have less ROI. You will always be investing in employees to get that ROI. So now you look at the things and say, “What am I going to do in my AI-first strategy that’s going to position me well in any future—or in a majority of futures?” Those are the things you should be doing first, right now. Then you can pick a couple of scenarios and say, “Okay, now I need to understand: if this were to change, my world is going to be really different. If that were to change, my world is going to be really different.” How do I then think through what are the next layer of things I need to do? Just starting with that framework—to say, what are the big levers that are going to move my world? Let’s assume these things are true. Let’s assume those things are true. What do my worlds look like? And then, is there any commonality that cuts across the bottom? The use cases I gave earlier—around training, freeing up capacity—that cuts across every single scenario. So it makes sense to invest in that today. I’m a big believer in employee training and development, because I always think there’s return on that. Ross: That’s really, really good. And I can just, I can just imagine a visual framework laid out just as you’ve described. And I think that would be extremely useful for any organization. So you mentioned trust. There’s obviously multiple layers of trust. 
There’s trust in institutions. There’s trust in companies—as you mentioned, in financial customer service, very relevant. There’s trust in society. There’s trust in AI. There’s trust in your peers. And so this is going to be fundamental. Of course, your degree of trust—or appropriate trust—in AI systems is a fundamental enabler or determinant of how you can get value from them. Absolutely. So how do we nurture appropriate trust, as it were, within workplaces with technology in order to be able to support something which can be as well-functioning as possible? MJ: Yeah. I mean, I think trust is foundationally going to remain the same, right? Which is: do you know what is the right thing to do, and do people believe that you’re going to consistently execute on that right thing, right? So companies that have values, that have principles that are well-defined, are going to continue to capitalize on that. There’s no technology that’s going to change that. Trust becomes more complicated when you bring in things like AI that can create—that’s very, very persuasive—and is mimicking sort of the human side so well that people have difficulties differentiating, right? So, for example, I run a sales team. And in sales, often people use generative AI to overcome objections. That is a great usage of generative AI. However, where do you draw the line between that—between persuasion and manipulation—and between manipulation and fraud, right? I don’t think we need technology to help us draw the line. I think internally, you have to know that as a business. And you have to train your employees to know where the line is, right? Ethics is always going to be something that the law can’t quite contain. The law is always what’s legal, and it’s sort of the bottom of the ethics barrel, in my opinion, right? So ethics is always a higher calling. So having that view towards what is the use of ethical or accountable or responsible AI in your organization—having guardrails around it, writing up use cases, doing the training, having policies around what does that look like in our industry. In many industries, transparency is going to be a very big factor, right? Do people know and do they want to know when they’re talking to a human versus talking to generative AI, right? So there’s customer expectations. There’s a level of consistency that you have to deliver in your use cases. And if the consistency varies too much, then you’re going to create mistrust, right? There’s also bias in all of the data that every single company is working with. So being able to safeguard against that. So there are key elements of trust that are foundationally the same, but I think generative AI adds in a layer of complexity. And companies are going to be challenged to really understand: how have they built trust in the past, and can they continue to capitalize and differentiate that? And those that are rushing to use generative AI use cases that then have the byproduct of eroding trust—including trust from their own employees—that’s where you see a lot of the backlash and problems. So it pays to really think through some of these things, right? Where are you deploying use cases that’s going to engender credibility and trust? And where are you deploying use cases that may seem like it’s a short-term gain—until a bad actor or a misuse or something happens on the internet? Which now, with deepfakes, it’s very easy to do. 
Now your reputation is very brittle because you don’t have a good foundational understanding of: do you have the credibility of your customers, of employees, that they trust, that you know what to do on what’s right, and then you can lead them there. Ross: Yeah, that’s obviously—in essence—trust can be well-placed or misplaced. And generally, people do have a pretty good idea of whether people, technology, institutions are trustworthy or not. And especially the trustworthiness is ultimately reflected in people’s attitudes and ultimately that which flows through to business outcomes. So I think the key here is that you have to come from the right place. So having the ethical framework—that will come through. That will be visible. People will respond to it. And ultimately, customers will go to those organizations that are truly trustworthy, as opposed to those that pretend to be trustworthy. MJ: And I think there’s—and I think trust is about—there’s a time dimension here. There’s a time dimension with any technology, which is: you have to do things consistently, right? Integrity is not a one-day game. It’s a marathon. It’s not a sprint. And so if you continue to be consistent, you can explain yourself when you make mistakes, right? You know how to own up to it. You know what to say. You know how to explain it to people in a real way that they can understand. That’s where trust—which is hard—trust is earned over time, and it can be depleted very quickly. And I think many, many companies have been burned through not understanding that. But overall, it is still about doing the right thing consistently for the majority of the time and owning up to mistakes. And to the extent that having an ethical AI framework and policy can help you be better at that, then I think those use cases and organizations and companies will be more successful. And to the extent that you’re using it and it creates this downstream effect of eroding that trust, then it is extremely hard to rebuild that again. Ross: Which takes us to leadership and leadership development. Of course, one foundation of leadership is integrity. There’s many things about leadership which aren’t changing. There are perhaps some aspects of leadership that are changing in a—what is—a very, very fast-moving world. So what are your thoughts around how it is we can develop effective leaders, be they young or not so young, into ones that can be effective in this pretty, pretty wild world we live in? MJ: I think with leadership, as it is, always a journey, right? There’s two things that in my mind leadership sort of comes back to. One is experience, right? And the other is the dimension we already mentioned, which is time. As a leader, first of all, I encourage all senior leaders of companies—people who are in the highest seats of the companies—to really get in the weeds involved with generative AI. Don’t outsource that to other people. Don’t give it to your youngest employees. Don’t give it to third-party vendors. Really engage with this tool. Because they actually have the experience and the expertise to understand where it’s working and where it’s not working, right? You actually recognize what a good product looks like, what’s a good outcome, what seems like it’s not going to work. A great marketing leader lives in the minds of their customers, right? So you’re going to know when it produces something which is like, this is not hitting the voice, this is not speaking with my customers, I’m going to continue to train and work. 
A new marketing analyst is not going to have any idea, right? And also as a great leader, once you actually get into the guts of these tools and start to learn with it, then it is, as we mentioned before, your role to think about: How do I create the strategy around where I’m going to augment my company—the growth, the business, the profit, and the people? What am I going to put in place to help foster that curiosity? Where am I going to allow for use cases to break those constraints, to create this hybrid model where both AI can be used and humans can be more successful? What does being more successful mean outside of just making more money, right? Because there’s a lot of ways to make more money, especially in the short term. So defining that after having learned about the tool—that’s really the challenge that every leader is going to face. You have this vastly changing landscape. You have more complexity than you’re dealing with, right? You have people whose identities are very much shaped by technology and who are dealing with their own self-worth with respect to these tools. Now you have to come in and be a leader and address all of these dimensions. And exactly what you mentioned before, this idea of being a multidimensional leader is starting to become very important, right? You can’t just say, “I’m going to take the company to this.” Now I have to think about: how do I do it in a way that’s responsible? And how do I do it in a way that guarantees long-term success for all of the stakeholders that are involved? These questions have never really changed for leadership, but they certainly take on a new challenge when it comes to these tools that are coming in. So making strategic decisions, envisioning the future, doing scenario planning, using your imagination—and most of all, having a level of humility—is really important here. Because this idea of being able to predict the future, settle into it, and charge in—really, that looks fun on paper. That’s very flashy. And I understand there’s lots of press releases, that’s a great story. The better story is someone who journals, takes time, really thinks about what this means, and recognizes that they don’t know everything. And we are all learning. We’re all learning. There’s going to be really interesting things that come up, and there’s going to be new challenges that come up. But isn’t that what makes leadership so exciting, though, right? If everyone could do it, then that would be easy, right? This is the hard thing. I want leaders to go and do the hard thing, because that’s what makes it amazing. And that’s what makes AI suitable for you. It’s supposed to free up your constraint and help you do harder, more difficult things—take on more challenges, right? And that’s where I think we can truly all augment ourselves. Ross: Yes, it is. Any effective leader is on a path of personal growth. They are becoming more. Otherwise, they would not be fulfilling the potential of the people in the organization they lead—let alone themselves, right? So to round out, what are a few recommendations or suggestions to listeners around how they can help augment themselves or their organizations—and grow into this world of more possibilities than ever before? MJ: Yeah. So my best advice is asking people to separate the product from the person, right? You can use AI to create a better product, but in doing so, understand—is that making you a better person, right? Is that making you better at the thing that you actually want to do? 
We know about people actually having to understand the product. But even so—if your goal is to be a better writer, for example, and you use Generative AI to create beautiful pieces—is that helping you be a better writer? Because if it’s not, that may not be the best use case. Maybe you use it for idea generation or for copy editing. So being able to separate that and really understanding that is going to be important. The other thing is: understand what parts of your identity you really value, that you want to protect, right? And don’t then use these tools that are going to slowly chip away at that identity. Really challenge yourself. The interesting thing about AI—until we get to AGI—is that it is always going to validate you. It is always going to support what you want it to do. You’re going to give it data, and it’s going to do what you tell it to do. So it’s not going to challenge you, right? It’s not going to make you better by calling you out on stuff that your friends would—unless you prompt it, right? Unless you say, “Critique how I can be better. Help me think through how I can be better.” And using it in that way is going to help you be a better leader. It’s going to help you be a better writer, right? So making sure that you’re saving room to say, “Hey, yes, I’m talking to this machine,” but using it to make you better—and separating the product you’re going to create and the person you want to become. Because no one is going to help you be a better person unless you really want to make an effort to do that. And so that, I think, is really key—both in your professional and personal life—to say: What are the goals I really want to attain professionally and personally? I’m going to really keep my eye on that. And how do I make sure that I use AI in a way that’s going to help me get there—and also not use it in a way that doesn’t help me get there? Ross: I think that’s really, really important, and not everyone recognizes that. That yes—how do we use this to make me better? Better at what I do? Better person? And without intent, you can’t achieve it. So that’s very important. So where can people follow you and your work? MJ: Well, I post a lot on LinkedIn, so you should always look me up on LinkedIn. I do work for Credibly, and we recently launched a credibly.ai webpage where we constantly are telling stories about what we’re doing. But I’m very passionate about this stuff, and I love to talk to people about it. So if you just look me up on LinkedIn and connect with me and want to get into a dialog, I’m more than happy to just share ideas. I do think this is one of the most interesting, seismic shifts in our society. But I’m a big believer in its ability—when managed correctly—to unlock more human potential. Ross: Fantastic. Thank you so much for your time, your insight, and your very positive energy about how we can create the future. MJ: Thanks, Ross. The post Minyang Jiang on AI augmentation, transcending constraints, fostering creativity, and the levers of AI strategy (AC Ep6) appeared first on Humans + AI.
May 28, 2025 • 36min

Sam Arbesman on the magic of code, tools for thought, interdisciplinary ideas, and latent spaces (AC Ep5)

“Code, ultimately, is this weird material that’s somewhere between the physical and the informational… it connects to all these different domains—science, the humanities, social sciences—really every aspect of our lives.” – Sam Arbesman About Sam Arbesman Sam Arbesman is Scientist in Residence at leading venture capital firm Lux Capital. He works at the boundaries of areas such as open science, tools for thought, managing complexity, network science, artificial intelligence, and infusing computation into everything. His writing has appeared in The New York Times, The Wall Street Journal, and The Atlantic. He is the award-winning author of books including Overcomplicated, The Half-Life of Facts, and The Magic of Code, which will be released shortly. Website: Sam Arbesman LinkedIn Profile: Sam Arbesman Books The Magic of Code The Half-Life of Facts Overcomplicated What you will learn Rekindling wonder through computing Code as a universal solvent of ideas Tools for thought and cognitive augmentation The human side of programming and AI Connecting art, science, and technology Uncovering latent knowledge with AI Choosing technologies that enrich humanity Episode Resources Books The Magic of Code As We May Think Undiscovered Public Knowledge People Richard Powers Larry Lessig Vannevar Bush Don Swanson Steve Jobs Jonathan Haidt Concepts and Technical Terms universal solvent latent spaces semantic networks AI (Artificial Intelligence) hypertext associative thinking network science big tech machine-readable law Transcript Ross Dawson: Sam, it is wonderful to have you on the show. Sam Arbesman: Thank you so much. Great to be talking with you. Ross: So you have a book coming out. When’s it coming out? Sam: It comes out June 10. So, yeah, so it comes out June 10. The name of the book is The Magic of Code, and it’s about, basically, the wonders and weirdness of computing—kind of viewing computation and code and all the things around computers less as a branch of engineering and more as almost this humanistic liberal art. When you think of it that way, it should not just talk about computer science, but should also connect to language and philosophy and biology and how we think, and all these different areas. Ross: Yeah, and I think these things are often not seen in the biggest picture. Not just, all right, this is something that drives my phone or whatever, but it is an intrinsic part of thought, of the universe, of everything. So I think you—indeed, code, in its many manifestations—does have magic, as you have revealed. And one of the things I love, love very much—just the title Magic—but also you talk about wonder. I think when I look at the change, I see that humans are so quick to take things for granted, and that takes away from the wonder of what it is we have created. I mean, what do you see in that? How do we nurture that wonder, which nurtures us in turn? Sam: Yeah. I mean, I completely agree that we are—I guess the positive way to think about it is—we adapt really quickly. But as a result, we kind of forget that there are these aspects of wonder and delight. When I think about how we talk about technology more broadly, or certain aspects of computing, computation, it feels like we kind of have this sort of broken conversation there, where we focus on it as an adversary, or we are worried about these technologies, or sometimes we’re just plain ignorant about them. But when I think about my own experiences with computing growing up, it wasn’t just that.
It was also—it was full of wonder and delight. I had, like, my early experiences—like my family’s first computer was the Commodore VIC-20—and kind of seeing that. And then there was my first experience using a computer mouse with the early Mac and some of the early Macintoshes or earlier ones. And then my first programming experiences, and thinking about fractals and screensavers and SimCity and all these things. These things were just really, really delightful and interesting. And in thinking about them, they drew together all these different domains. And my goal is to kind of try to rekindle that wonder. I actually am reminded—I don’t think I mentioned this story in the book—but I’m reminded of a story related to my grandfather. So my grandfather, he lived to the age of 99. He was a lifelong fan of science fiction, and he read—he basically read science fiction since, like, the modern dawn of the genre. Basically, I think he read Dune when it was serialized in a magazine. And I remember when the iPhone first came out, I went with my grandfather and my father. We went to the Apple Store, and we went to check it out. We were playing with the phone. And my grandfather at one point says, “This is it. Like, this is the object I’ve been reading about all these years in science fiction.” And we’ve gone from that moment to basically complaining about battery life or camera resolution. And it’s fair to want newer and better things, but we kind of have to take a beat and say, no, no—the things that we have created for ourselves are quite spectacular. And so my book tries to rekindle that sense of wonder. And as part of that process, tries to show that it’s not just this kind of constant march of better camera resolution or whatever it is. It’s also this process of touching upon all these different areas that we think about—whether it’s the nature of life or art or all these other things. And I think that, hopefully, is one way of kind of providing this healthier approach to technology, rekindling this wonder, and ultimately really trying to connect the human to the machine. Ross: Yes, yes, because we have—what I always point out is that we are inventors, and we have created extraordinary things. We are the creators, and we have created things in our own image. We have a relationship with them, and that relationship is evolving. These are human artifacts. Why they matter, and how they matter, is in relationship to us, which, of course, goes to— You, sorry, go on. Sam: Oh no, I was just gonna agree with you. Yeah. I feel like, right, these are human artifacts, so therefore we should think about how can they make us the best versions of humans, or the best versions of ourselves, as opposed to sometimes the worst versions of ourselves. Right? So there’s a sense of—we have to be kind of deliberate about this, but also remember, right, we are the ones who built these things. They’re not just kind of achieving escape velocity, and then we’re stuck with the way in which they make us feel or the way in which they make us act. Ross: All right. Well, you’re going to come back in a moment, and I’m going to ask you precisely that—how do we let technology make us the best we can be? But sort of on the way there, there are a couple of very interesting phrases you use in the book. “Connection machines”—these are connection machines. Also “universal solvent.” You use this phrase both at the beginning and the end of the book. So what do you mean by “universal solvent”? In what way is code a universal solvent? 
What does that mean? Sam: Yeah, so the idea is—it’s by analogy with water. Water is kind of a universal solvent; it is able to dissolve many, many different things within itself. I think about computing and code and computation as this universal solvent for many aspects of our lives—kind of going back to what I was saying before, when we think about language. It turns out that thinking about code actually can provide insight into how to think about language. If we want to think about certain ideas around how ancient mythological tales are transmitted from generation to generation—it turns out, maybe with a little bit of stretching, but you can actually connect it to code and computation and software as well. And the same kind of thing with biology, or certain aspects of trying to understand reality through simulation. All these things have the potential to be dissolved within computing. Now, it could be that maybe I’m just being overly optimistic with code, like, “Oh, code can do this, but no other thing can do that.” It could be that lots of other fields have the ability to connect. Certainly, I love this kind of interdisciplinary clashing of different ideas. But I do think that the ideas of computation and computing—they are beyond just what we would maybe categorize as computer science or programming or software development or engineering. When we think about these ideas—and it turns out there’s a lot of really deep ideas within the theory of computation, things like that—when we think about those ideas or the areas that they connect with, it really does impinge upon all these different domains: of science, of the humanities, of the social sciences, of really just every aspect of our lives. And so that’s kind of what I’m talking about. And then you also mentioned this kind of, like, this supreme connection machine. And so I quote this from—it was, I believe, the novelist Richard Powers. He’s talking about the power of the novel—like, certain novels can really, in the course of their plot and their story, connect so many different ideas. And I really agree with that. But I also think that we can think the same thing about computing as well. Ross: You know, if we think about physics as the various layers of science—where physics is the study of nature and the universe—and that is basically a set of equations. It is maths. And these are things which are essentially algorithms which we can express in code. But this goes to the social layers of the algorithms that drive society. And I also recall Larry Lessig’s book Code, back from 25 years ago, with the sort of parallels between essentially the code as law and code as software. In fact, a recent innovation in New Zealand has released machine-readable law—legislation basically embedding legislation in code—so that this can now be unambiguous and then read by machines, and so they can implicitly obey what they do. So there’s a similar multiple facets of code, from social structures down to the nature of the universe. Sam: I love that, yeah. And where I do think, yeah, there is something deep there, right? That when we think about—because code, ultimately, it is this very weird thing. We think of it as kind of text, like on a screen, but it is only really code when it’s actually able to be run. And so it’s this kind of thought stuff—these words—but they’re very precise, and they also are then able to act in the world. And so it’s kind of this weird material that’s somewhere between the physical and the informational. 
It’s definitely more informational, but it kind of hinges on the real world. And in that way, it has this kind of at least somewhat unique property. And as a result, I think it can connect to all these other different domains. Ross: So the three major sections of your book—in the middle one is Thought. So, of course, we can have code as a manifestation of thought. We can have code which shapes thought. And one of the chapters is titled Tools for Thought, which has certainly been a lot of what we’ve looked at in this podcast over a long period of time. So, let’s start to dig into that. At a high level, what do you describe as—what do you see as—tools for thought? Sam: Yeah, I mean, so tools for thought—I mean, certainly, there’s a whole domain of software within this kind of thing. And I actually think that there’s a really long history within this, and this is one of the things I also like thinking about, and I do a lot in the book as well, which is kind of try to understand the deeper history of these technologies—trying to kind of understand where they’ve come from, what are the intellectual threads. Because one of the other interesting things that I’ve noticed is that a lot of interesting trends now—whether it’s democratizing software development or tools for thought or certain cutting-edge things in simulation—these things are not that new. It turns out most of these ideas were present, if not at the very advent of the modern digital computer, then they were at least around relatively soon after. But it was the kind of thing where these ideas maybe were forgotten, or they just took some time to really develop. And so, like, for example, one of the classic beginnings of tools for thought—well, I’ll take a step back. The way to kind of think about tools for thought is probably the best way to think about it is in the context of the classic Steve Jobs line, “the bicycle for the mind.” And so the idea behind this is—I think he talked about it in the 1970s, at least initially—I think it was based on a Scientific American article he read in the ’70s, where there was a chart of, I guess, like the energy efficiency for mobility for different animals. And I think it was, like, the albatross was really efficient, or whatever it was, and some other ones were not so efficient. And humans were pretty mediocre. But then things changed—if you put a human on a bicycle, suddenly they were much, much more energy efficient, and they were able to be extremely mobile without using nearly as much energy. And his argument is that in the same way that a bicycle provides this efficiency and power for mobility for humans, computers can be these bicycles for the mind—kind of allowing us to do this stuff of thought that much more efficiently. Ross: Well, but I guess the thing is, though, is that—yeah, that’s, it’s a nice concept. I think, yeah,  Sam: Oh yeah, it’s very popular.  Ross: The question is, how? Sam: Yes, yeah. So, how does it, how does it work? So the classic place—and I actually discuss even a deeper prehistory—but like, the classic place where people start a lot of this is with Vannevar Bush, his essay in The Atlantic, I think in 1945, As We May Think. And within it—he’s discussing a lot of different things in this article—but within it, he describes this idea of a tool called the Memex, which is essentially a thought experiment. And the way to think about it is, it’s kind of like a desk pseudo-computer that involves, I think, microfilm and projections. 
But basically, he’s describing a personalized version of the web, where you can connect together different bits of information and articles and things you’re reading and traverse all of this information. And he kind of had this idea for the web—or at least, if you squint a lot. It was not a reality; there was not the technology really quite there yet, although he describes it using the current cutting-edge technology of microfilm or whatever it was. And then people kind of proceeded with lots of different things around hypertext or whatever. But in terms of one of the basic ideas there, in terms of what is that tool for thought—it is ultimately the idea of being able to stitch together and interconnect lots of different kinds of information. Because right now—or I wouldn’t say right now—in the early days of computing, I think a lot of people thought about computers from the perspective of just either managing large amounts of information or being able to step through things in a linear fashion. And there was this other trend saying, no, no—things should be interconnected, and it should be able to be accessed non-linearly, or based on similar topics, or based on, ultimately, the way in which our brains operate. Because our brains are very associative. Like, we associate lots of different things. You’ll say one thing, it’ll spark a whole bunch of different ideas in my mind, and I’ll go off in multiple different directions and get excited about lots of different things. And we should have a way, ultimately, of using computers that enhances that kind of ability—that associative ability. Sometimes maybe complement it, so it’ll make things a little bit more linear when I want to go very associative. But I think that’s ultimately the kinds of tools for thought that people have talked about. But then there’s other ones as well. Like, using kind of more visual methods to allow you to manipulate information, or see or visualize or see things in a different way that allows you to actually think different thoughts. Because ultimately, one of the nice things about showing your work or writing things down on paper is it allows you to have some spatial representation of the ideas that you’re exploring, or write all the things down that maybe you can’t immediately remember in your short-term memory. And ultimately, what it comes down to is: humans are limited creatures. Our memories are not great. We’re distractible. We associate things really well, but it’s not always nearly as systematic as we want. And the idea is—can a computer, as a tool for thought, augment all these things? Make the way in which we think better, as well as offset all the limitations that we have? Because we’re pretty bad when it comes to certain types of thinking. And so I think that is kind of the grand vision. And I can talk about how certain trends with AI are kind of helping actually cash a lot of these promissory notes that people have tried to do for many, many years. But I think that’s kind of one broad way of thinking about how to think of this broad space of tools for thought—which is recognizing humans are finite, and how can we do what we want to do already better, which is think. And to be clear, I don’t want computers to act as sort of a substitute for thought. I enjoy thinking. I think that the process of thought itself is a very rewarding thing. And so I want these kinds of tools to allow me to feel like the best version of the thinking Sam—as opposed to, “Oh no, this kind of thing can think for me. 
I don’t have to do that.” Ross: So you mentioned—you start off from looking around the sense of how it is you can support or augment the implicit semantic networks of our thinking. These are broad ideas where, essentially, we do think in semantic networks of various kinds. And there are ways in which technology can support it. So I suppose, coming to the present, as you say, AI has been able to bring some of these to fruition. So what specifically have you seen, or do you see emerging around how AI tools can support us in specifically that richer, more associative or complementary type of prostheses? Sam: Yeah, so one basic feature of AI is this idea of being able to embed huge amounts of information in these kind of latent spaces, where there are some massively high-dimensional representations of articles or essays or paragraphs—or just information in general. And the locations of those different things often are based on proximity in some sort of high-dimensional semantic space. And so the way I think about this is—well before a lot of these current AI advances, there was this information scientist by the name of Don Swanson. And I think he wrote this paper—I think it was like the mid-1980s—it was called… Oh, and I’m blanking on it, give me a moment. Oh—it was called Undiscovered Public Knowledge. And the idea behind it is: imagine some scientific paper somewhere in the vast scientific literature that says “A implies B.” Then somewhere else in the literature—could be in the same subfield, could be in a totally different field—there’s another paper that says “B implies C.” And so, if you were to read both papers and combine them, you would know that perhaps “A implies C” by virtue of combining these two papers together. But because the scientific literature is so vast, no one has actually ever read both of these papers. And so there is this knowledge that is kind of out there, but it’s undiscovered—this kind of undiscovered public knowledge. He was not content to leave this as a thought experiment. He actually used the cutting-edge technology of the day, which was—I think—keyword searches and online medical databases. Or I don’t know if it was even online at the time. And he was actually able to find some interesting medical results. I think he published them in a medical journal, which is kind of exciting. This is kind of a very rudimentary thing of saying, “Okay, can we find relationships between things that are not otherwise connected?” Now, in this case, it required keyword searches, and it was pretty limited. Once you eliminate some of those barriers, the ability to stitch together knowledge that might otherwise never be connected is enormously powerful and completely available. And I think AI, through this kind of idea of embedding information within latent spaces, allows for this kind of thing. So the way I think about this is—if you know the specific terms, maybe you can find those specific papers you need. But oftentimes, people are not specifying things in the exact same way. Certainly, if they are in different domains and different fields, there are jargon barriers that you might have to overcome. For example, back when I was a postdoc—I worked in the general field of network science—and I was part of this interdisciplinary email list. I feel like every week, someone would email and say, “Oh, how do I do this specific network metric?” And someone else would invariably email back and say, “Oh, this has been known for 30 years in physics or sociology,” or whatever it was. 
And it was because people just didn’t even know what to search for. They couldn’t find the information that was already there. And with these much more fuzzy latent spaces, a lot of these jargon barriers are just entirely eliminated. And so I think we now have an unbelievable possibility for being able to stitch together all this information—which will potentially create new hypotheses that can be tested in science, new ideas that could be developed—because these different fields are stitched together. Yeah, there’s so many things. And so that is certainly one area that I think a lot about. Ross: Yeah, so just one—I mean, in that domain, absolutely, there’s extraordinary potential to, as you say, reveal the latent connections between knowledge—complementary knowledge—which is from our vast knowledge we’ve created as humanity. There are many more connections between those to explore, which will come to fruition. This does come to the humans-plus-AI piece, where, on one level, the AI can surface all of these connections which might not have been evident, but then come to the fore. So that is now a critical part of the scientific process. I mean, arguably, a lot of science is collecting what was already there before, and now we’re able to supercharge that. So in this humans-plus-AI world, where’s the role of the human there? Sam: So that’s a good question. I mean, I would say, I’m hesitant to say that there’s any specific task that only a human can do forever. It seems to be—any time you say, “Oh, only humans can do this,” we are invariably proven wrong, sometimes almost instantly. So I kind of say this a lot with a lot of humility. That being said, I do think in the near term, there is a great deal of space for humans to act in this almost managerial role—specifically in terms of taste. Like, what are the interesting areas to focus on? What are the kinds of questions that are important? And then, once you aim this enormously powerful tool in that direction, then it kind of goes off, and it’s merciless in connecting things and providing hypotheses and suggestions and ideas and potential discoveries and things to work on. But knowing the kinds of questions and the kinds of things that are important or that will unlock new avenues—it seems right now (maybe this will no longer be the case soon), but at least right now, I still think there’s an important role for humans to provide that sense of taste or aim, in terms of the directions that we should be focusing on. Ross: So going back to that question we touched on before—how do we as humans be the best we possibly can be? Now that we have—well, I suppose this is more a general, broader question—but also now that we have extraordinary tools, including ones of code in various guises, to assist us, how do we be the best we can be? Sam: Yeah, I think that is the singular question of this age, in this moment. And in truth, I think we should always be asking these questions about, okay, being the best versions of ourselves. How do we create meaning and purpose and things like that? I do think a lot of the recent advances with AI are sharpening a lot of these kinds of things. 
Going back to what I was saying before—at many moments throughout history, we’ve said, “Oh, humans are distinct from animals in certain ways,” and then we realized, “Oh, maybe animals can actually do some of those kinds of things.” And now, we are increasingly doing the same kind of thing with AI—saying, “Oh, AI can maybe recommend things to purchase, but it can never write crappy poetry,” and guess what? Oh, it actually can write pretty mediocre poetry too. So for me, I kind of view it as—by analogy, there’s this idea, somewhat disparagingly, within theology, of how you define the idea of God. Some people will say, “Oh, it’s simply anything that science cannot explain yet.” This is called the “God of the gaps.” And of course, science then proceeds forward, explaining various things in astronomy, cosmology, evolution, all these different areas. And suddenly, if you ascribe to this idea, your conception of God gets narrower and narrower and might eventually vanish entirely. And I feel like we are doing the same kind of thing when it comes to how we think about AI and humanity. Like, “Oh, here are the things that AI can do, but these are the things that humans can do that AI can never do.” And suddenly, that list gets shorter and shorter. So for me, it’s less about what is uniquely human—because that uniqueness is sort of a moving target—and more about what is quintessentially human. What are the things—and this goes back to exactly your question—what are the things that we truly want to be focusing on? What are the things that really make us feel truly human—like the best versions of ourselves? And those answers can be very different for many people. Maybe you want to spend your time gardening, or spending time with your family, or whatever it is. But certainly, one aspect of this—related to tools for thought—is the idea that I do think that certain aspects of thought and thinking are a quintessentially human activity. Not necessarily unique, because it seems as if AI can actually do, if not real thought, then a very accurate simulacrum of thought. But this is something that does feel quintessentially human—that we actually want to be doing ourselves, as opposed to outsourcing entirely. So I think, as a society, we have to say, “Okay, what are the things that we do want to spend our time doing?” and then make sure that our technologies are giving us that space to do those kinds of things. And I don’t have all the answers of what that kind of computational world will look like exactly, or even how to bend the entire world of big tech toward those ends. I think that is a very large and complicated issue. But I do think that these kinds of questions—the ones you asked me and the ones I’m talking about—these are the kinds of questions we need to really be asking as a society. You’re seeing hints of that, even separate from AI, in terms of how we’re thinking about smartphone usage—especially smartphone usage among children. Like, Jonathan Haidt has been talking about these things over the past several years, and really caused—at least in the United States—kind of a national conversation around, “Okay, when should we be giving phones to children? Should we be giving them phones? What kinds of childhoods do we want our children to have?” And I feel like that’s the same kind of conversation we should be having more broadly for technology: What are the lives we want to have? If so, how can we pick and choose the kinds of technologies we want? 
And I do think—even though some of these things are out of our hands, in the sense that I cannot unilaterally say, “Oh, large social media giant, change the way your algorithm operates”—they’re not going to listen to me. But I can still say, “Oh, in the absence of you doing the kinds of things that I want, I don’t have to play your game. I don’t have to actually use social media.” So there is still some element of agency in terms of picking and choosing the kinds of technologies you want. Now, it’s always easier said than done, because a lot of these things have mechanisms built in to make you use them in a certain way that is sometimes against your better judgment and the better angels of our nature. But I still think it is worth trying for those kinds of things. So anyway, that’s a long way of saying I feel like we need to have these conversations. I don’t necessarily have all the answers, but I do think that the more we talk about what are the kinds of things that make us feel quintessentially human, then hopefully we can start picking and choosing the kinds of technologies that work for that. So, like, if we love art, what are the technologies that allow us to make better art—as opposed to just creating sort of, I don’t know, AI slop, or whatever people talk about? Depending on the specific topic you’re focusing on, there are lots of practicalities. But I do think we need to be having this conversation. Ross: So just rounding out, in terms of looking at the ideas in your book—sort of very wide-ranging—what is your advice, or what are your suggestions for people in terms of anything that they could do which will enhance themselves or make them better versions of themselves, or better suited to the world in which we are living? Sam: That is a great question. And I think I would say it’s related to kind of just being deliberate—whether it’s being deliberate in the technologies you adopt or being deliberate in terms of the kinds of things that you want to be spending your time on. And it’s even beyond technology. It’s more about saying, “Okay, what are the kinds of things I want to do, or the kind of life I want to live?” And then pick and choose technology, and the kinds of technology, that really feel like they enhance those kinds of things as opposed to diminish them. Because, I mean, as much as I talk about computation as this universal solvent that touches upon lots of different things—computing, it is not all of life. As much as I think there is the need for reigniting wonder and things like that, not everything should be computational. I think that’s fine—to have spaces where we are a little bit more deliberate about that. But going back to the sense of wonder, I also think ultimately it is about trying to find ways of rekindling that wonder when we use certain aspects of our technologies. Like, if we feel like, “Oh, my entire technological life is spent in this, I don’t know, fairly bland world of enterprise software and social media,” there’s not much wonder there. There’s maybe anger or rage or various other kinds of extreme emotions, but there’s usually not delight and wonder. And so I would say, in a practical sense, probably a good rule of thumb is that the technologies worth adopting are the ones that spark that sense of wonder and delight. Because if they do that, then they’re probably at least directionally correct in terms of the kinds of things that are maybe a little bit more humane or in line with our humanity.
Ross: Fantastic. So where can people go to find out more about your work and your book? Sam: So my website—it’s just my last name, Arbesman. So arbesman.net is my website. And on there, you can read about the book. I actually made a little website for this new book The Magic of Code. It’s just themagicofcode.com. So if you go to that, you can find out more about the book. And if you go on arbesman.net, you can also find links to subscribe to my newsletter and various other sources of my writing. Ross: Fantastic. Loved the book, Sam. Wonderful to have a conversation with you. Thanks so much. Sam: Thank you so much. This was wonderful. I really appreciate it.
May 21, 2025 • 26min

Bruce Randall on energy healing and AI, embedding AI in humans, and the implications of brain-computer interfaces (AC Ep4)

I feel that the frequency I have, and the frequency AI has, we’re going to be able to communicate based on frequency. But if we can understand what each is saying, that’s really where the magic happens. – Bruce Randall About Bruce Randall Bruce Randall describes himself as a tech visionary and Reiki Master who explores the intersection of technology, human consciousness, and the future of work. He has over 25 years of technology industry experience and is a longtime practitioner of energy healing and meditation. Website: Bruce Randall LinkedIn Profile: Bruce Randall What you will learn Exploring brain-computer interfaces and human potential Connecting reiki and AI through frequency and energy Understanding the limits and possibilities of neural implants Balancing intuition, emotion, and algorithmic decision-making Using meditation to sharpen awareness in a tech-driven world Navigating trust and critical thinking in the age of AI Imagining a future where technology and consciousness merge Episode Resources Companies & Organizations Neuralink Synchron MIT Technologies & Technical Terms Brain-computer interfaces AI (Artificial Intelligence) Agentic AI Neural implants Hallucinations (in AI context) Algorithmic trading Embedded devices Practices & Concepts Reiki Meditation Sentience Consciousness Critical thinking Transcript Ross Dawson: Bruce, it’s a delight to have you on the show. Bruce Randall: Well, Ross, thank you. I’m pleased to be on the show with you. Ross: So you have some interesting perspectives on, I suppose, humanity and technology. And just like to, in brief, hear how you got to your current perspectives. Bruce: Sure. Well, when I saw Neuralink put a chip in Nolan’s head and he could work the computer mouse with his thoughts, and he said, sometimes it goes where it moves on its own, but it always goes where I want it to go. So that, to me, was fascinating on how with the chip, we can do things like sentience and telecommunications and so forth that most humans can’t do. But with the chip, all of a sudden, all these doors are open now, and we’re still human. That’s fascinating to me. Ross: It certainly extends, extending our capabilities. It’s done in smaller ways in the past and now in far bigger ways. So you do have a deep technology background, but also some other aspects to your worldview. Bruce: I do. I’ve sold cloud, I’ve been educated in AI at MIT, and I built my first AI application. So I understand it from, I believe, from all sides, because I’ve actually done the work instead of read the books. And for me, this is fascinating because AI is moving faster than anything that we’ve had in recent memory, and it directly affects every person, because we’re working with it, or we can incorporate it in our body to make us better at what we do. And those possibilities are absolutely fascinating. Ross: So you describe yourself as a Reiki Master. So what is Reiki and how does that work? What’s its role been in your life? Bruce: Well, Reiki Master is you can connect with the universal energy that’s all around us, and it means I have a bigger pipe to put it through me, so I can direct it to people or things. And I’ve had a lot of good experiences where I’ve helped people in many different ways. The Reiki and the meditation came after that, and that brought me inside to find who I truly am and to connect with everything that has a vibration that I can connect with. 
That perspective, with the AI and where that’s going—AI is hardware, but it produces software-type abilities, and so does the energy work that I do. They’re similar, but they’re very different. And I believe that everything is a vibration. We vibrate and so forth. So that vibration should be able to come together at some point. We should be able to communicate with it at some level. Ross: So if we look at the current state of research, scientific research into Reiki, there seems to be some potential low-level and small-population results. So it doesn’t seem to be a big tick. It doesn’t—there’s—there does appear to be something, but I think it’s fair to say there’s widespread skepticism in mainstream science about Reiki. So what’s your, I suppose, justification for this as a useful perspectival tool? Bruce: Well, I mean, I’ve had an intervention where I actually saved a life, which I won’t go into here. But my body moved, and I did that, and I said, I don’t know why I’m doing this, but I went with the body movement and ended up saving a life. To me, that proved to me, beyond a shadow of a doubt, that there’s something there other than just what humans can see and feel. And that convinced me. Now, it’s hard to convince anybody else. It’s experiential, so I really can’t defend it, other than saying that I have enough experiences where I know it’s real. Ross: Yeah, and I think that’s reasonable. So let’s come back to that—the analogy or linkage you are painting between the underlying energy in Reiki that you experience, and AI’s, I suppose, augmentation of humans and humanity. Bruce: Well, everything has a vibration or frequency. So music has a frequency. People have a frequency. And AI has a frequency. So when you put AI in somebody, there’s the ability at some point for them to communicate with that AI beyond the electrical signal communication. And if that can be developed with the electrical signal from the AI chip, that person can grow leaps and bounds in all areas—instead of just intelligence—but they have to develop that first to do that. Now, AI is creating—or is potentially creating—another class of people. As Elon said in the first news conference, if you’re healthy and you can afford it, you too can have a chip. So that’s a form of commercialization. You may not need to be a quadriplegic to get a chip. If you can afford it, then you can have a chip potentially too. So that puts commercialization at a very high level. But when it gets normalized and the price becomes more affordable, I see that as being something that more mainstream people can get if they choose to. Now, would there be barriers or parameters on that—where you can only go so far with it? Or if you get a chip, you can do whatever you want? And those are some of the things that I look at as saying we’re moving forward, but we have to do it thoughtfully, because we have to look at all areas of implications, instead of just how fast can we go and how far can we go. Ross: Yeah, well, I mean, for a long time, I’ve said that if we look at the advancement of brain-computer interfaces—in the first phase, of course, they’re used to assist those who are not fully abled. And then there’s a certain point when, through safety and potential advantages, people who are not disabled will choose to use them. And so that’s still not a point which we’ve reached—or probably not even close to at this point. But still, the massive constraint is the input-output bandwidth of the brain-computer interfaces of today.
Still, the “1000 bits per second” rule, which is very similar—so it’s very low bandwidth—and there’s potential to be able to expand that. But that still is essentially bits. It is low levels of information—input, output. So that’s a different thing to what you are pointing to, where there are things beyond simple information in and out. So, for example, the ability to control the computer mouse with your brain… Bruce: Right. But that’s the first step. And the fact that we got to the first step and we can do that—it’s like we had the Model A, and all of a sudden, a couple decades later, we’ve got these fancy cars. That’s a huge jump in a relatively short period of time. And with all the intelligence of the people and the creativity of the scientists that are putting this together, I do believe that we’re going to get advances in the short and medium-long term that are really going to surprise people. On what we can do as humans with AI—either embedded or connected in some way or fashion—because you can also cut the carotid and put a capsule in, and you’ve got AI bots running throughout your body. Now that’s been proven—that that works—and that’s something that hasn’t gotten a lot of press. But we’ve got other ways that we can access the body with AI, and it’s a matter of: we have to figure out which is best, what the risks are, what the parameters are, and how we best move forward with that. Ross: Yeah, it sounds like you’re referring to Synchron, which is able to insert something into the brainstem through the carotid. But that’s not what’s through the body—that’s simply just an access point to the brain for Synchron. Which is probably a stronger approach than the—well, can be—than the Neuralink swarm, which is directly interfacing with the brain tissue. So what do you—so, one of the—if you think about it as an input-output device, that’s quite simple, as in the sense of, we can take information into our brain, whatever sense. So that’s still being used a bit less. And we can also output it—as in, we can basically take thoughts or directions and use that as outputs to devices. So what specifically—can you point to specific use cases that you would see as the next steps for using BCIs, brain-computer interfaces, with AI? Bruce: Yeah, I think that we’re just in the very beginning of that. And I think that there are ways to connect the human with the AI that can increase where we are right now. I just don’t think we know the best way to do that yet. We’re experimenting in that. And I think there are many other ways that we can accomplish the same thing. It’s in the development stages. We’re really on the upward curve of the bell curve for AI, and we’ve got a long way to go before we get to the top. Ross: Yeah, I asked for specifics. So what specifically do you see as use cases for next steps? Bruce: Well, for specifics, I see in people with science and medical, I think there are significant use cases there where they can process faster and better with AI than we can process right now. That’s pure information. And then they can take their intelligence they have as a human, and analyze that quickly and get it faster. In an ER situation, there is a certain amount of error in that area from mistakes that are made. With AI, that can fine-tune that so you have fewer errors and you can make better choices going forward. There are many other cases like that. You could be on the floor trading, and everything is a matter of ratios and so forth. 
Or you could be in an office trading in real time on the machines. At that point, you’re looking at a lot of different screens and trying to make a decision. If you had AI with you, that would be able to speed your processing time—and you could make better decisions faster, because time is of the essence in both of those examples. And AI could help in that. Now, is that a competitive and comparative advantage? I would say so, but it’s in a good way—especially in the medical field. Ross: Yes, so you’re talking about AI very generically, so in this idea of humans plus AI decision-making. So, essentially, you can have human-only decisions, you can have AI decisions. In many cases, the trading—algorithmic trading—is fully delegated to AI because the humans can’t make the decisions fast enough. So are there any particular structures? What specific ways do you see that AI can play a role in those kinds of decision-making? I mean, you mentioned the things of being able to point to potential errors or flag those, and so on. What are other ways in which you can see decisions being made in medical or financial or other perspectives where there is an advantage to the human and AI collaboration—as opposed to having them both separate—and the ways in which that would happen? Bruce: Well, in the collaboration, AI still has hallucinations right now, so you have to get around that in order to get this working in a more reliable fashion. But once you train AI for a specific vertical, that AI is going to work better in that vertical than in an untrained vertical. So that’s really the magic in how you get that to work better. And then AI, with agentic capabilities, has the ability to make decisions. And you have to gauge that against the human ability to make decisions to make sure that that’s in line. You could always put a check and balance in place where, if the AI wanted to do something in a fast-moving environment and you weren’t comfortable with that, you could say no, or you could let it go. That’s something that could be in an earpiece. It can be embedded. There are many different ways to do that. It could be on a speaker where they’re communicating—that’s an easy way to do it. As far as other ways to do it, I mean, we are sensory beings—we see, we hear, and we speak—and that’s how we take in information. That’s what it’s going to be geared to. And those devices are being developed right now so that it all works together. But we’re not there yet. But this is where I see it going in both those environments, where you can have a defined benefit for AI working with humans. Ross: So one of the things which is deeply discussed at the moment is AI’s impact on critical thinking. Many people are concerned that because we are delegating complex thinking to AI, in many cases we become lazy or we become less capable of doing some of that critical thinking. Whereas in other domains, some people are finding ways to use AI to sharpen, or to find different perspectives, or find other ways to add to their own cognition. So what are your perspectives or beliefs, specifically on how it is we can best use AI as a positive complement to our cognitive thinking and critical thinking and our ability to develop it? Bruce: Well, we think at a very fast rate, and scientists don’t understand the brain yet in its full capacity, and we don’t understand AI to its full capacity.
So I would say with that, we need to work in both areas to better understand them, to find out how we can get to the common denominator where both are going to work together. Because you’ve got—it’s like having two people—you’ve got, for example, the Agentic AI, which has got somewhat of a personality with data, and then you’ve got us with data and with emotions. Those are hard to mix when you put the emotions in it, right? We also have a gut feel, which is pretty accurate. When you put all that together, you’ve got conflicts here, and you have to figure out how you’re going to overcome that to work in a better system. Now, once you get trust with it, you can just rely on it and move forward. But as humans, we have a hard time giving trust to something when it’s important. We rely on our own abilities more than a piece of technology. So that bridge has to be crossed, and we haven’t crossed that yet. And at the same time, humans have done a pretty good job in some very trying situations. AI hasn’t been tested in those yet, because we’re very early in the stages of AI. When we get to that point, then we’re going to start working together and comparing—and really answer your question. Because right now, you’ve got both sides. They both have valid points, but we don’t yet know who’s right. Ross: Yeah, there’s definitely a pathway to a few elements you raised there. One is in trust. So how do we get justified trust in systems so they can be useful? Conflicts around decision-making, and to what point do we trust in our own validation of our own decision-making or thinking in a way that we can effectively, essentially, patch the better decision-makers through that external perspective or addition? So you have deep practice or meditation, amongst other things. And we have a deluge of information which we are living in, which is certainly continuing to increase. So what would your advice be for how to stay present and sharp and connected and be able to deal with the very interesting times we live in? Bruce: Well, that’s a big question, but I’ll give you a short answer for that. My experience with meditation is I’ve gotten to know myself much better, and it’s fine-tuned who I am. Now, you can listen to a tape and you can make incremental movies with that to relax, but I suggest meditation is a great way to expand in all areas—because it’s expanded in all areas for me. And it’s a preference. It’s an opinion based on experience. And everybody has different paths and would have different experiences in that. It’s an option. But what I tell everybody is—because there are a lot of people that still aren’t into AI to the extent that they need to be—I say take 20 minutes to 30 minutes a day in the vertical that you’re in and understand AI and how it can enable you. Because if you don’t do that, in two years, you’re going to be looking from behind at the people that have, and it’s going to be very hard to catch up. Ross: So, slice of time for studying AI and slice of time for meditation, right? Bruce: Yeah, I do. I do 30 minutes twice a day, and I fit it in for 12 years in a busy schedule. So it’s doable. May not be easy, but it’s doable. Ross: Yes, yes. Well, I personally attest to the benefits of meditation, though I’m not as consistent as you are. 
But I think, yeah, and that’s where there is some pretty solid evidence—well, very solid evidence—that meditation is extremely beneficial on a whole range of different fronts, including physical health, as well as mental well-being and ability to focus, and many other things that are extremely useful to us in the busy world that we live in. Bruce: The scientific explanation is correct. Ross: Yeah, yeah. And it’s very, very, very well validated for those that have any doubts. So to round out, I mean, we’ll just paint a big picture. So I’d like to let you go wild. Where—what is the potential? Where can we go? What should we be doing? What’s the future of humanity? Bruce: Well. That’s a huge question. And AI is not there yet. But humans—I see, because I’ve been able to do some very unusual things with my combination—I feel that the frequency I have, and the frequency AI has, we’re going to be able to communicate based on frequency. But if we can understand what each is saying, that’s really where the magic happens. And I see people—their consciousness increasing—just because humanity is increasing. And I think in—I mean, they’re discussing sentience and AI. I don’t know. I mean, I understand it, but I don’t know where they’re going with this. Because if you weren’t born with a soul, you don’t have sentience, and a piece of software wasn’t born with a soul. I mean, it can be very intelligent, but it’s not going to have that, in my opinion. Now, will a hybrid come out with person and AI? Doubtful, but it’s possible. There are a lot of possibilities without a lot of backup for them for the future. But I know that if you promote yourself with meditation and getting to know yourself better, everything else happens much easier than if you don’t. And I think with AI—I mean, the sky’s the limit. What does the military have that we don’t have with AI, right? I mean, there are a lot of smart people working with AI who aren’t in public, and we don’t know where they are. But we know that they’re making progress, because every once in a while we hear something. And I was watching a video on LinkedIn—they mapped the mouth area, and this person could go through seven different languages while he’s walking and talking, and his lips match the words. That point right there, which was a month ago—I said, now I’m not sure if I’m watching somebody actually saying something, or if it’s AI. So we make advancements, and then we look at it and say, who can I believe now? Because it’s hard to tell. Ross: Yes. Bruce: So I hope that gives a sense of what I think is possible in the future. Where we go—who knows? Ross: Yeah, the future is always unpredictable, but a little bit more now than it ever has been. And one of the aspects of it is, indeed, the blurring of the boundaries of reality and knowing what is real and otherwise. And so I think this still comes back to—we do know that we exist. There still is a little bit of the “I think, therefore I am,” as Descartes declared, where we still feel that’s valid. And beyond that, all the boundaries of who we are as people, individuals, who we are as humanity, are starting to become a lot less clear than they have been. Bruce: And it will get more or less clear, I think, until it gets clearer. Ross: So thanks, Bruce, for your time and your perspectives. I enjoyed the conversation. Bruce: Thank you, Ross. I appreciate your time, and I enjoyed it also.
May 14, 2025 • 37min

Carl Wocke on cloning human expertise, the ethics of digital twins, AI employment agencies, and communities of AI experts (AC Ep3)

We’re not trying to replace expertise—we’re trying to amplify and scale it. AI wants to create the expertise; we want to make yours omnipresent. – Carl Wocke About Carl Wocke Carl Wocke is the Managing Director of Merlynn Intelligence Technologies, which focuses on human to machine knowledge transmission using machine learning and AI. Carl consults with leading organizations globally in areas spanning risk management, banking, insurance, cyber crime and intelligent robotic process automation. Website: Emory Business Merlynn-AI LinkedIn Profile: Carl Wocke What you will learn Cloning human expertise through AI How digital twins scale decision-making Using simulations to extract tacit knowledge Redefining employee value with digital models Ethical dilemmas in ownership and bias Why collaboration beats data sharing Keeping humans relevant in an AI-first world Episode Resources Companies / Groups Merlynn Emory Tech and Tools Tom (Tacit Object Modeler) LLMs Concepts / Technical Terms Digital twin Tacit knowledge Human-in-the-loop Knowledge engineering Claims adjudication Financial crime Risk management Ensemble approach Federated data Agentic AI Transcript Ross Dawson: Carl, it’s wonderful to have you on the show. Carl Wocke: Thanks, Ross. Ross: So tell me about what Merlynn, your company, does. It’s very interesting, so I’d like to learn more. Carl: Yeah. So I think the most important thing when understanding what Merlynn is about is that we’re different from traditional AI in that we’re sort of obsessed with the cloning of human expertise. So where your traditional AI looks at data sources generating data, we are passionate about cloning our human experts. Ross: So part of the process, I gather, is to take human expertise and to embed that in models. So can you tell me a bit about that process? How does that happen? What is that process of—what I think in the past has been called knowledge engineering? Carl: Yeah. So we’ve built a series of technologies. The sort of primary technology is a technology called Tom. And Tom stands for Tacit Object Modeler. And Tom is a piece of AI that has been designed to simulate a decision environment. You are placed as an expert into the simulation environment, and through an interaction or discussion with Tom, Tom works out what the heuristic is, or what that subconscious judgment rule is that you use as an expert. And the way the technology works is you describe your decision environment to Tom. Tom then builds a simulator. It populates the simulator with data which is derived from the AI engine, and based on the way you respond, the data evolves. So what’s happening in the background is the AI engine is predicting your decision, and based on your response, it will evolve the sampling landscape or start to close up on the model. So it’s an interaction with a piece of AI. Ross: So you’re putting somebody in a simulation and seeing how they behave, and using their behaviors in that simulation to extract, I suppose, implicit models of how it is they think and make decisions. Carl: Absolutely so absolutely. And I think there’s sort of two main things to consider. The one is Tom will model a discrete decision. And a discrete decision is, what would Ross do when presented with the following environment? And that discrete decision can be modeled within an hour, typically. And the second thing is that there’s no data needed in the process. Validation is done through historical data, if you like. 
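The details of Merlynn's approach aren't spelled out here, so purely as a hedged illustration of the kind of loop Carl describes (propose a scenario, record the expert's call, refit a model, then ask about the case the model is least sure of), here is a small active-learning-style sketch in Python. The feature layout, the stand-in "expert" rule, and all names are assumptions for illustration, not Tom's implementation.

```python
# Hypothetical sketch of an expert-elicitation loop in the spirit of what Carl
# describes: the system proposes decision scenarios, the expert answers, and the
# model keeps querying the scenarios it is least certain about. This is not
# Merlynn's Tom; every detail here is an illustrative assumption.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
scenarios = rng.uniform(size=(500, 4))  # candidate scenarios with 4 made-up decision features
answers = {}                            # expert's decisions so far: scenario index -> 0 or 1

def ask_expert(x):
    # Placeholder for the real interaction; a made-up rule stands in for the expert here.
    return int(x[0] + 0.5 * x[1] > 0.8)

# Seed with random questions until both outcomes have been seen at least once.
while len(set(answers.values())) < 2:
    i = int(rng.integers(len(scenarios)))
    answers[i] = ask_expert(scenarios[i])

# Query-by-uncertainty loop: refit, then ask about the least certain scenario.
for _ in range(20):
    asked = list(answers)
    model = DecisionTreeClassifier(max_depth=3, random_state=0)
    model.fit(scenarios[asked], [answers[i] for i in asked])
    certainty = np.abs(model.predict_proba(scenarios)[:, 1] - 0.5)
    certainty[asked] = np.inf  # never re-ask a scenario the expert has already answered
    nxt = int(np.argmin(certainty))
    answers[nxt] = ask_expert(scenarios[nxt])

# 'model' now approximates the expert's heuristic for this one discrete decision
# and could be validated against historical cases, as Carl mentions.
```

In a real session the placeholder rule would be replaced by the expert's live responses, and the resulting model would be the inspectable "digital twin" of that single decision.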
But yeah, it’s an exclusive sort of discussion between you and the AI, if that makes sense. Ross: So when people essentially get themselves modeled through these frameworks, what is their response when they see how the model that’s been created from their thinking responds to decision situations? Do they say, “These are the decisions I would have made”? I suppose there’s a feedback loop there in any case. But how do people feel about what’s been created? Carl: So there is a feedback loop. Through the process, you’re able to validate and test your digital twin. We refer to the models that are created as your digital twin. You can validate the model through the process. But what also happens—and this is sort of in the early days—is the expert might feel threatened. “You don’t need me anymore. You’ve got my decision.” But nothing could be further from the truth, because that digital twin that you’ve modeled is sort of tied to you. It evolves. Your decisions as an expert evolve over time. In certain industries, that happens quicker. But that digital twin actually amplifies your value to the organization. Because essentially what we’re doing with a digital twin is we’re making you omnipresent in an organization—and outside of the organization—in terms of your decisions. So the first reaction is, “I’m scared, am I going to have a job?” But after that, as I said, it amplifies your value to the organization. Ross: So there are a few things to dig into there, but let’s dig into that one for now, which is: what are the mechanics? There are some ways we can say, “All right, my expertise is being captured,” and so then that model can do that work, not me. But there are other mechanisms where it amplifies value by, as you say, being able to be deployed in various ways. So can we unpack that a little bit in terms of those dynamics of value to the person whose expertise has been embodied in a digital twin? Carl: Yeah, Ross, that’s really sort of a sensitive discussion to have, in that when someone has been digitized, the role that they play in the organization is now able to potentially change. So we have customers—banking customers—that have actually developed digital twins of compliance expertise. Those compliance experts can now go and work at the clients of the bank. So the discussion or the relationship between the employer and the employee might well need to be revisited within the context of this technology. Because a compliance expert at a bank knows that they need to work the following hours, they have the following throughput. They can now operate anywhere around the world, theoretically. So the value to the expert within a traditional corporate environment—or employer-employee environment—is going to be challenged. When you look at an expert outside of the corporate environment—so let’s say you’ve got someone who’s a consultant—they are able to digitize themselves and work pretty much anywhere around the world, in multiple organizations. So I do—we don’t have the answer. Whose IP is it? That’s another question. We’ve had legal advice on this. Typically, the corporate that employs the employee would be the owner. But if the employee leaves the organization, what happens to the IP? What happens to the digital twin? So as Merlynn, we’ve sort of created this stage. We don’t have the answers, but we know it’s going to get interesting. Ross: Yeah.
So Gartner predicted that by 2027, 70% of organizations will be putting something in their employee contracts about AI representations, if I remember the statistics correctly. And then I suppose what the nature of those agreements are is, as you say, still being worked out. And so these are fraught issues. But I think the first thing is to resurface them and be clear that they are issues, and so that they can be addressed in a way which is fair for the individuals as well as the organizations. Carl: I think, Ross, just to add to that as well—the placement of the digital twin is now able to be sort of placed at an operational level, which also changes the profile of work that the employee typically has. So that sort of feeds the statement around being present throughout the organization. So the challenges are going to be, well, I’m theoretically doing a lot more, and therefore I understand the value I’m contributing. But yes, absolutely an interesting space to watch right now. Ross: And I think there’s an interesting point here where machine learning is domain-bounded based on the dataset that it has been trained on. And I think that any expertise from an individual—I mean, people, of course, build a whole body of expertise in a particular domain because they’ve been working, essentially—but what they have also done at the same time is enhanced their judgment, which I would suggest is almost always cross-domain judgment. So a person’s judgment is still something they can apply across multiple domains. You can embody it within a specific domain and capture that in a system, but still, the human judgment is—and will remain, I think, indefinitely—a complement to what any AI system can do. Carl: Absolutely. I think when you look at the philosophical nature of expertise, an expert—and this is sort of the version according to Carl here—is someone who cannot necessarily and readily explain their expertise. If you could defend your expertise through data, then you wouldn’t be needed anymore, and you wouldn’t actually be an expert anymore. So an expert sort of jumps the gaps that we have within data. What we found—and Merlynn has been running as an AI business for the last nine, ten years now, so we’ve been in the space for a while—is that the challenge with risk is that risk exists because I haven’t got enough data. And where I have a risk environment, there’s a drain on the expertise resource. So experts are important where you have data insufficiency. So absolutely, to your point, I think the nature of expertise—when one looks at the value of expertise, specifically when faced with areas that have inherent risk—we cannot underestimate the value of someone making that judgment call. Ross: So to ground this a little bit, I know you can’t talk too much about your clients, but they include financial services, healthcare, and intelligence agencies around the world. And I believe you have come from a significantly risk background. So without necessarily being too explicit, what are some examples of the use cases, or where the domains in which organizations are finding this useful and relevant—and the match for the ability to extract or distill expertise? Carl: So we focused on four main areas as a business, and these are areas that we qualify because they involve things that need to be done. As a business, we believe it makes business sense to get involved in things that the world needs help with. So we focused on healthcare, banking, insurance, and law enforcement. 
I’ll speak very high-level on all of these. In healthcare, we’ve deployed our technology over the last four or five years, creating synthetic or digital doctors making critical decisions. In the medical environment, you can follow a textbook, and there’s a moment where you actually need a second opinion or you need a judgment call. We never suggest replacing anything that AI is doing at the moment, or any of these phenomenal technologies. The LLMs out there—we think—are phenomenal technologies. We just think there’s a layer missing, which is: we’ve reached this point, and we’ve got to make that judgment call. We would value the input of a professor or an expert—domain expert. So would there be benefit in that? In the medical space—treatment protocols, key decisions around being admitted—those are environments where you’ve got a protocol, but you don’t always get it right. And the value of a second opinion—our technology plays that second opinion role. Where you’re about to do the following, but it might not be the correct approach. In the medical world, there are two industries where we don’t think we’re going to make money, but we know we need to do it. And medical is one of them. Imagine a better world where we can have the right decision available at the right time, and we’ve got the technology to plan that decision. So when you talk about telemedicine, you can now have access to a multitude of decisions in the field. What would a professor from a university in North America say? Having said that, we work with the Emerys of the world—Emory Medical, Emory University—building these kinds of technologies. So that’s medical. On the insurance side, we’ve developed our technology to assist in the insurance industry in anything from claims adjudication, fraud, payments. You can imagine the complexity of decisions that are found within the processes in insurance. In banking, we primarily focus on financial crime, risk, compliance, money laundering, terrorist financing-type interventions. If I can explain the complexity of the banking environment: you’ve got all manner of AI technology that’s deployed to monitor transactions. A transaction is flagged, and that flagged transaction needs to be adjudicated by a human expert. That’s quite telling of the state of AI, where you do all of the heavy lifting, but you have that moment where you need the expert. And that really is a bottleneck. Our technology clones your champion—or best-of-breed—expert within that space. You go from a stuck piece of automation to something that can actually occur in real time. And then the last one is within the law enforcement space. So we sponsor, here in South Africa, a very innovative collaboration environment, which comprises law enforcement agencies from around the world. We’ve got federal law enforcement agencies in North America. We’ve got the Interpols, Europols. We’ve got the Federal Police—Australian Federal Police—who participate. So law enforcement from around the world, where we have created what they refer to as a safe zone, and where we have started to introduce our technology to see if we can help make this environment better. The key being the ability to access expertise between the different organizations. Ross: So in all of these cases that you are drawing—modeling—people who are working for these organizations, or are you building models which are then deployed more broadly? Carl: Yeah, so in the line—well, in fact, across all of them—you know, there’s two answers to that. 
The one is that organizations that deploy technology will obviously build a library of digital twin expertise and deploy that internally. What we’re moving towards now is a platform that we’ve launched where organizations can collaborate as communities to fight, you know, joint risk. I’ll give you an example to sort of make that clearer. So we won an innovation award with Swift. So Swift is a sort of a payments-type platform, monitoring-type platform. They’ve got many roles that they play. They’ve got 12,000 banks, and the challenge that they posed was: how do we get the banks to collaborate better? And what we suggested was, if you attack one bank, what if you can draw on the expertise of the other banks? So if you’ve got a cyberattack or you’ve got some kind of financial crime unfolding, what if there’s a way for you to pool the expertise? And I think that model allowed us to win that challenge, which answers the second part of the question, which is: do you bring expertise from outside of the organization? We see a future where collaboration needs to take place, where we face common risk, common challenges. So the answer is both. Ross: Yes, I can. I mean, there are some analogs of federated data, where you essentially take data which is not necessarily exposing it fully but be able to structure it so that’s available as a pool—for example, the MELLODY Consortium in healthcare. But I think there are other ways. And so there’s Visa as well—it has some kind of a system for essentially sharing data on risk, which is aggregated and made available across the network. And of course, you know, there are then the choices to be made inside organizations around what you share to be available, what you share in an anonymized or hidden fashion, or what you don’t share at all. And essentially, there’s more and more value in ecosystems. And I think I would argue there’s more and more value, particularly in risk contexts, to the sharing to make this valuable for everyone. Carl: Ross, if I can just add to that, I mean, you can share data, which has got so many compliance challenges. You can share models that you created with the data, which I think is being exploited or explored at the moment. The third is, I can share my experts. Because who do you turn to when things go off script? My experts. So they’re all valid. But the future—certainly, if we want to survive—I mean, we have sight of the financial crime that’s being driven out there. It’s a war. And at times I wonder if we’re winning the war. So we have to, if we want to survive, we have to find ways to collaborate in these critical environments. It’s critical. And yet, we’re hamstrung by not being able to share data. I’m not challenging that—I think it’s important that that is protected. But when you can’t share data, what am I sharing? I go to community meetings in the form of conferences, you know, from time to time, and share thoughts and ideas. But that’s not operational. It’s not practical. So we have to share our experts. As Merlynn, we see expertise—and that second-opinion, monitoring, judgment-type resource—as so critical. It’s critical because it’s needed when things go off script. We have to share this. So, yeah. Ross: Yeah. So, moving on to Step—you also have this concept, I’m not sure, maybe we’ve decided to put it in practice—of an AI employment agency. So what is that? What does it look like? What are the considerations in that? Carl: Yeah. So, the AI employment agency is a platform that we’ve actually established. 
So, I’m going to challenge you on the word “concept”—the platform’s running. It’s not open to the public, but it’s a marketplace—an Amazon marketplace—of digital twins. So if I want to hire a compliance officer, and I’m a bank here in South Africa, I can actually go and hire expertise from a bank in America. I can hire expertise from a bank in Europe. So, the concept or the product of the AI employment agency is a platform which facilitates creation and consumption. As an expert, we see a future where you can create a digital version of your expertise. And as a consumer—being the corporates, in fact, I suppose individuals would also be consumers—at the moment it’s corporates, but corporates can come and access that expertise. And a very interesting thing happens. I’ll give you a practical example out of a banking challenge. Very often, a bank has a thing called a “spike,” which is a new name added to a world database that looks for the undesirables. The bank has got to check their client base for potential matches, and that’s an instant sort of drain on expert resource. What you could do with the employment agency is I could hire an expert, bring them into the bank for the afternoon to solve the challenge, and then just as readily let them go—or fire them out of that process. So I think, just to close off on that, the fascination for me is: as we get older, hopefully we get wiser, and hopefully we stay up to date. But that skill—what happens to that skill? What if there’s a way for us to mobilize that skill and to allow people to earn off that skill? So the AI employment agency is about digitizing expertise and making it available within a marketplace. We’re going to open it up probably within the next 12 months. At the moment, it’s operational. It’s making a lot of people a lot of money, but we’ve got to be very careful once we open the gates. Ross: But I think one of the underlying points here is that you are pointing to this humans-plus-AI world, where these digital twins are complements to humans, and where and how they’re being deployed. Carl: Yeah. I think the—you know, I often see the areas where we differ from traditional AI approaches. And again, not negating or suggesting that it’s not the approach. But when you look at a traditional AI approach, the approach is to replace the function. So replace the function with an AI component. The function would be a claims adjuster. And the guardrails around that—that’s a whole discussion around the agentic AI and the concerns around that. It brings hallucination discussions and the like. Our version of reality is—we’re dealing with a limitation around access to expertise, not necessarily expertise. Whereas AI wants to create the expertise, we want to amplify and scale the expertise. So they’re different approaches to the same challenge. And what we found is that both of them can live in the same space. So AI will do its part, and we will bring the “What does Ross think about the following?” moment, which is that key decision moment. Ross: So I guess one of the issues of modeling—creating digital twins of humans—is that humans are… they may be experts, but they’re also fallible. There are some better than others, some more expert than others, but nobody is perfect. And as a—part of that is, people are biased. They have biases in potentially a whole array of different directions. So does this—all of the fallibility and the bias and the flaws of humanity—get embedded in the digital twin? Or if so, or if not, how do you deal with that? 
Carl: Well, Ross, you might lose a whole lot of listeners now, but bias is a—well, let’s look at expertise. Expertise is a point of view that I have that I can’t validate through data. So within a community, they’ll go, “Carl’s an expert,” but we can’t see it in the data, and therefore he might be biased. So the concept of expertise—I see the world through positive bias, negative bias. A bias is a position that you hold that, as I said, is not necessarily accepted by the broader community, and expertise is like that. An expert would see something that the community has missed. So, you know, I live in South Africa. If you stop on the side of the road, it’s probably a dangerous exercise. But if there’s an animal, I’m going to stop on the side of the road. And that might be a sort of bad bias, good bias. “Why did you do that?”—you put your family at risk and all of those things. So I can play out a position on anything as being positive and negative. But I think we’ve got to be very careful that we don’t dehumanize processes by saying, “Well, you’re just biased,” and I’m going to take you out of the equation or out of the process. In terms of people getting it right, people getting it wrong, good day, bad day—our technology is deployed in an ensemble approach, where, for a key decision, I can build five digital twins to check on each other and monitor it that way. You can build a digital twin to monitor yourself. So we’ve built trading environments where the digital twin will monitor you as the trader, given that you’re digital twinned, to see whether you’re acting out of sorts—for whatever reason. So bias—as I said, I hope I haven’t alienated any of your listeners—but with bias, we’ve got to be very careful that we don’t use whatever mechanism we can to get rid of anything that allows people to offer that expertise into a process or transaction. Ross: Yeah, no. Well, that makes sense. And I suppose what it points to, though, is the fact that you do need diversity—as in, you can’t just have a single expert. You shouldn’t have a single human. You bring diverse—as diverse as possible—perspectives of humans together. And that’s what boards are for, and that’s why you’re trying to build diversity into organizations, so you do have a range of perspectives. And, you know, as you say, positive or useful biases can be… the way you’re using the term bias is perhaps a bit different from how others use it, in saying it is just something which is different from the norm. And—well—I mean, which goes to the point of: what is the norm, anyway? But I think what this points to then is, if we can have a diverse range of experts—be they human or digital twins—then that’s when you design the structures where those—not using the word “bias”—those distinctive perspectives can be brought together into a more effective framing and decision. Carl: Absolutely, Ross. If I can sort of jump in and give you an interesting dilemma—the dilemma of fair business is that fairness is going to be decided by your customer. So there’s the concept of actually having a panel of experts adjudicating your business—saying whether they think this is fair. Look at an insurance environment. Imagine your customers are adjudicating whether you should have, in fact, paid out the claim—even though you didn’t. That’s a form of bias. It’s an interpretation or an expectation from a customer towards a corporate.
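To make the ensemble idea Carl mentions a little more concrete, here is a minimal, purely illustrative sketch in Python—not Merlynn’s actual system. The “twin” rule functions, case fields, and thresholds are all invented stand-ins; the point is simply that several independent judgments on the same case can be combined by majority vote, with low agreement acting as the “off script” signal to escalate to a human expert.

```python
# Illustrative only: five stand-in "digital twin" decision rules scoring one case.
from collections import Counter

def twin_a(case): return "approve" if case["amount"] < 10_000 else "refer"
def twin_b(case): return "approve" if case["risk_score"] < 0.7 else "refer"
def twin_c(case): return "refer" if case["new_customer"] else "approve"
def twin_d(case): return "approve" if case["amount"] < 5_000 else "refer"
def twin_e(case): return "refer" if case["risk_score"] > 0.5 else "approve"

TWINS = [twin_a, twin_b, twin_c, twin_d, twin_e]

def ensemble_decision(case):
    votes = [twin(case) for twin in TWINS]
    decision, count = Counter(votes).most_common(1)[0]
    # Low agreement is exactly the moment worth escalating to a human expert.
    return {"decision": decision, "agreement": count / len(TWINS), "votes": votes}

print(ensemble_decision({"amount": 7_500, "risk_score": 0.6, "new_customer": False}))
# e.g. {'decision': 'approve', 'agreement': 0.6, 'votes': ['approve', 'approve', 'approve', 'refer', 'refer']}
```

The same pattern covers the self-monitoring case Carl describes: one of the “twins” can simply be a model of the person making the decision, and disagreement with their live behaviour is what gets flagged.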
So I think, again, it just reinforces the value of bias—or expertise-slash-bias—because at the end of the day, I believe organizations are going to be measured against fairness of trade. Now for AI—imagine the difficulty of finding data to define fairness. Because your fair is different from my fair. I have a different sense of fairness compared to my neighbor. How are we going to define that? So again, that means there are so many individual versions of this, which is why I use the example of: organizations should actually model their customers and place them as an adjudicator into their processes or into their organizations. Ross: Yeah. Well, I think part of the point here is, in fact, that since AI embodies bias—human bias—because it’s trained on human data, it basically embodies human biases or perspectives. So this is actually helping us to surface some of these issues around just saying, “Well, what is bias?” It’s hard to say there is any objective view… you know, there are obviously many subjective views on what bias is or how it could be mitigated. These are issues which are on the table for organizations. So just to round out—where do you see… I mean, the horizons are not very far out at the moment because it is moving fast—but what do you see as the things on your mind for the next little while in the space you’re playing in? Carl: So I think, if I look at it—two things. One thing that concerns me about the current technology drive is that we are building very good ways to consume things, but we’re not building very good ways to make things. And what I mean by that is—we’ve got to find ways for us as humans to stay relevant. If we don’t, we’re not going to earn. It’s as simple as that. And if we don’t earn, we’re not going to spend. So it’s a very simplistic view, but I think it’s critical. It’s critical for us to keep humans relevant. And I think people—humans—are relevant to a process. So we’ve just got to find a mechanism for them to keep that relevance. And if you’re relevant, you’re going to earn. I don’t see a world where you’re going to be fed pizzas under the door, and you’re going to be able to order things because everything’s taken care of for you. That just doesn’t stack up for me. So I think that’s a challenge. I think the moment that we’ve arrived at now—which is an important moment—is the moment of human-in-the-loop. How do we keep people in the loop? Human-in-the-loop is the guardrail for the agentic AI, for the LLMs, the Gen AIs of the world. That’s a very, very important position we need to reinforce. And when one reinforces human-in-the-loop, you also bring relevance back to people. And then you also allow things like empathy, fairness of trade, ethics—to start to propagate through technology. So I think the future for me—you know, I get out of bed, and sometimes I’m really excited about what the technology landscape holds. And then I’m worried. So I think it’s going to work out when people realize what we are racing towards here. So again, concepts like human-in-the-loop—the guardrails—are starting to become more practical. So today, I’m excited, Ross. And let’s see what the future holds. Ross: Yes. And I think it’s ours to shape, because if we lead this with human-first attitudes, I think we’ll get there. So where can people go to find out more about your work? Carl: So you can go to merlynn-ai.com—so it’s M-E-R-L-Y-N-N dash A-I dot com. You can also mail me at Carl@merlynn-ai.com if you want to have a discussion.
And, you know, good old Google—there’s a lot of information about us on the web. So, yeah.  Ross: Fantastic. Thank you for your time and your insights, Carl. It’s a fascinating journey you’re on. Carl: Thanks, Ross. Thanks very much. The post Carl Wocke on cloning human expertise, the ethics of digital twins, AI employment agencies, and communities of AI experts (AC Ep3) appeared first on Humans + AI.
May 7, 2025 • 33min

Nisha Talagala on the four Cs of AI literacy, vibe coding, critical thinking about AI, and teaching AI fundamentals (AC Ep2)

“The floor is rising really fast. So if you’re not ready to raise the ceiling, you’re going to have a problem.” – Nisha Talagala About Nisha Talagala Nisha Talagala is the CEO and Co-Founder of AIClub, which drives AI literacy for people of all ages. Previously, she co-founded ParallelM where she shaped the field of MLOps, with other roles including Lead Architect at Fusion-io and CTO at Gear6. She is the co-author of Fundamentals of Artificial Intelligence – the first AI textbook for Middle School and High School students. Website: Nisha Talagala LinkedIn Profile: Nisha Talagala What you will learn Understanding the four C’s of AI literacy How AI moved from winter to wildfire Teaching kids to build their own AI from scratch Why professionals must raise their ceiling The role of curiosity in using generative tools Navigating context and motivation behind AI models Embracing creativity as a key to future readiness Episode Resources People Andrej Karpathy Organizations & Companies AIClub AIClubPro Technical Terms AI Artificial General Intelligence ChatGPT GPT-1 GPT-2 GPT Neural network Loss function Foundation models AI life cycle Crowdsourced data Training data Iteration Chatbot Dark patterns Transcript Ross Dawson: Nisha, it’s a delight to have you on the show. Nisha Talagala: Thank you. Happy to be here. Thanks for having me. Ross: So you’ve been delving deep, deep, deep into AI for a very long time now, and I would love to hear, just to start, your reflections on where AI is today, and particularly in relation to humans. Nisha: Okay, absolutely. So I think that AI has been around for a very long time. And there was a long period which was actually called the AI winter, when effectively very few people were working on AI—only the true believers, really. And then a few things kind of happened. One of them was that the power of computers became so much greater, which was really needed for AI. And then, with the internet and our ability to store and track all of this stuff, the data also became really plentiful. So when the compute met the data, and then people started developing software and sharing it, that created kind of like a perfect storm, if you will. That enabled people to really see that AI could do things. Previously, AI experiments were very small, and now suddenly companies like Google could run really big AI experiments. And often what happened is that they saw that it worked before they truly knew why it worked. So this entire field of AI kind of evolved, which is, “Hey, it works. We don’t actually know why. Let’s try it again and see if it works some more,” kind of thing. So that has been going on now for about a decade. And so, AI has been all around you for quite a long time. And then came ChatGPT. And not everyone knows, but ChatGPT is actually not the first version of GPT. GPT-1 and GPT-2 were pretty good. They were just very hard to use for someone who wasn’t very technical. And so, for those who are technical—one thing is, you had to—actually, it was a little bit like Jeopardy. You had to ask your question in the form of an incomplete sentence, which is kind of fun in the Jeopardy sort of way. But normally, we don’t talk to people with incomplete sentences hoping that they’ll finish that sentence and give us something we want to know. So ChatGPT just made it so much easier to use, and then suddenly, I think it just kind of burst on the mainstream.
And that, again, fed on itself: more data, more compute, more excitement—to the point that the last few years have really seen a level of advancement that is truly unprecedented, even in the past history of AI, which is almost already pretty unprecedented. So where is it going? I mean, people talk a lot about AGI and generalized intelligence and surpassing humans and stuff like that. I think that’s a difficult question, and I’m not sure if we’ll ever know whether it’s been reached. I don’t know that we would agree on what the definition is, and therefore whether it’s been reached or not. There are other milestones, though. For example, standardized testing has already been taken over by AI. AIs outperform on just about every level of standardized test, whether it’s a college test or a professional test, like the US medical licensing exam. It’s already outperforming most US doctors in those fields. And it’s scoring well on tests of knowledge as well. It’s also making headway in areas that have traditionally been considered challenging, like mathematics and reasoning. So what I can tell you is that the AIs that I see right now in the public sphere rival the ability of PhD students I’ve worked with. So it’s serious. And I think it’s a really interesting question—the future that I see is that we have to really be prepared for tools that are as capable, if not in some areas more capable than we are. And then figure out: What is the problem that we are trying to solve in that space? And how do we work collaboratively with the tools? I think picking a fight with the tools is unwise. Ross: Yeah, yeah. And I guess my broader view is that the intent of creating AI with humans as the reference point was always misguided. I mean to say, all right, we want to create intelligence. Well, the only intelligence we know is human, so let’s try to mimic that and to replicate what it does as much as possible. But this goes to the point, as you mentioned, of augmentation, where on one level, we can say, all right, we can compare humans versus AI on particular tests or so on. But there are, of course, a multitude of ways in which AIs can augment humans in their capabilities—cognitive and intellectual and otherwise. So where are you seeing the biggest potentials in augmenting intelligence or cognition or thinking or positive intent? Nisha: Absolutely. So I think, honestly, the examples sort of—I feel like if you look for them, they’re kind of everywhere. So, for example, just yesterday—or the day before yesterday—I wrote an article about vibe coding. Vibe coding is a term coined by Andrej Karpathy, which is essentially the way he codes now. And he’s a very famous person who, obviously, is a master coder. So he has alternatives—lots of ways that he could choose to write code. And his basic point is that now he talks to the machine, and he basically tells it what he wants. Then it presents him with something. And then he says, “I like it. Change this, change that, keep going,” right? And I definitely use that model in my own programming, and it works really well. So really, it comes down to: you have something to offer. You know what to build. You know when you don’t like something, right? You have ideas. This is the machine that helps you express them, and so on and so forth. So if you do that, that’s a very good way of working augmented.
So you’re creating something, and sometimes, when you see a lot of options presented to you, you’re able to create something better just because you can see it. Like, “Oh, it didn’t take me three weeks to create one. Suddenly I have fifteen, and now I know I have more cycles to think about which one I like and why.” So that’s one example—just of creation collaboratively. Examples in medicine just abound. The ability to explore molecules, explore fits, find new candidates for drugs—it’s just unbelievable. I think in the next decade, we will see advancements in medicine that we cannot even imagine right now, just because of that ability to really formulate a problem, give a machine a task, have it come back, and then you iterate on it. And so I think if we can just tap humans into that cycle and make that transition—so that we can kind of see a bigger problem—then I think there’s a lot of opportunity. Ross: So, which—that leads us to the next thing. So the core of your work is around AI literacy and learning. And so it goes to the question of: AI is extraordinarily competent in many domains. It can augment us. So what is—what are the foundational skills or knowledge that we require in this world? Do we need to understand the underlying architectures of AI? What do we need to understand—how to engage with generative AI tools? What are the layers of AI literacy that really are going to be important in coming years? Nisha: Very good question. So I can tell you that kind of early on in our work, we defined AI literacy as what we call the four C’s. We call them concepts, context, capability, and creativity. Ross: Sorry, could you repeat this? Nisha: Yes—concepts, context, capability, and creativity. Ross: Awesome. Nisha: So, concept is—you really should know something about the way these tools are created. Because as delightful as they are, they are not perfect. And a good user who’s going to use it for their own—who’s going to have a good experience with it—is going to be able to pick where and how to interact with it in ways that are positive and productive, and also be able to pick out issues, and so forth. And so what I mean by concept is: the reliance of AI on data and being able to ask critical questions. “Okay, I’m dealing with an AI. Where did it get its data? Who built it? What was their motivation?” Like these days, AIs are so complex that what I tell my students is: you don’t know what it’s trying to do. What is its goal? It’s sitting there talking to you. You didn’t pay for it—so what is it trying to accomplish? And the easiest way to find out is: figure out who paid for it and figure out what it is they want. And that is what the AI is trying to accomplish. Sometimes it’s to engage you. Sometimes it’s to get information from you. Sometimes it’s to provide you with a service so that you will pay, in which case the quality of its service to you will matter, and such like that. But it’s really important, when you’re dealing with a computer or any kind of service, that you understand the motivations for it. What is it being optimized for? What is it being measured on? And so forth. So there’s kind of concepts like that—about how these tools are created. That does not mean everyone has to understand the nuances of how a neural network gets trained, or what it means to have a loss function, or all these things. That’s suitable for some people, but not necessarily for everyone. But everyone should have some conceptual understanding. Then context. 
Ross: I was just going to say—there are interesting findings on dark patterns. A paper on dark patterns in AI came out last week, I think, and one of the patterns was sycophancy, where, essentially, as you suggest, AI can say, “You’re wonderful” in all sorts of guises, which, amongst other things, makes you like it more and use it more. Nisha: Oh yes, they definitely do. They definitely want you to keep coming back, right? You suddenly see that. And it’s funny, because I was having some sort of an interaction with—I’m not gonna name which company wrote the model—and it said something like, “Yeah, we have to deal with this.” And I’m like, there’s no we here. It’s just me. When did we become we? You’re trying just a little too hard to get on my good side here. So I just kind of noticed that. I’m like, not so good. So concepts, to me, effectively means understanding the fundamental ways that these programs are built, how they rely on data, what it means for an AI to have a brain—and then the depth depends entirely on the domain. Context, for me, is really the fact that these things are all around us, and therefore you truly do want to know that they are behind some of the tooling that you use, and understand how your information is shared, and so forth. Because there’s a lot of personal decisions to be made here, and there are no right answers. But you should feel like you have the knowledge and the agency to make your own choices about how to handle tools. So that’s what I mean by context. It’s particularly important for young people to appreciate—context. Ross: And I think for professionals as well, because their context is, you know, making decisions in complex situations. And if they don’t really appreciate the context—and the context of the AI—then that’s not a good thing. Nisha: Absolutely. And then capability—really, it varies very much by domain. But capability is really about: are you going to be able to function, right? Are you going to be able to do a project using these tools? Or do you need to build a tool? Do you need to merge the tools? Do you need to create your own tools? So in our case, for young people, for example—because they don’t have a domain yet—we actually teach them how to build AI from scratch. So one of the very common things that we do is: almost in every class, starting from third grade, they build an AI in their first class completely from scratch. And they train it with their own data, and they see for themselves how its opinions change with the information they give it. And that’s a very powerful exercise. What I typically ask students after that exercise is two questions. First question is: did it ever ask you if what you were teaching it was true? And the answer is always, no. You can teach it anything, and it will believe you. Because they keep teaching it information, and children, being children, will find all sorts of hilarious things to teach a machine, right? And then—but then—they realize, oh, truth is not actually a part of this. And then the next question, which is really important, is: so what is your responsibility in this whole thing? Your responsibility is to guide the machine to do the right thing, because you already figured out it will do anything you ask. Ross: That’s really powerful. Can you tell me a little bit more about precisely how that works when you say you’re getting them to build their own AI? Nisha: So we have built a tool.
It’s called Navigator, and it’s effectively a web-based front end to industry standard tools like TensorFlow and scikit-learn. And it runs on the cloud. Then we give each of our students accounts on it, and depending on how we do it, these can be anonymized accounts—whatever we need to protect their privacy. At large-scale installations with schools, for example, it’s always anonymous. Then what happens is they go in, and they’re taken through the steps of building an AI. We give them a few datasets that are kid-friendly. So one other thing to remember when you’re teaching young people is a lot of the data that’s out there is not friendly to young people, so we maintain a massive repository of kid-friendly datasets. A very common case that they run is a dataset that we crowdsourced from children, which is made up of sentences about happiness and sadness. So a child’s view—like chocolate might be happy, broccoli might be sad, things like that. But nothing too sad—things children can relate to. So they start teaching it about happy and sad. And one of the first things that they notice is—those of them that have written programs before—this is kind of hard to write a program for. What word would you be looking for? There’s so many words. Like, I can’t use just the word happy. I might say, “I feel great.” I didn’t use the word happy, but I’m clearly happy. So they’re like, “Oh, so there’s something here—more than just looking for words. You have to find a pattern somehow.” And if you give it enough examples, a pattern kind of emerges. So then they train the AI—it takes about five minutes. They actually load up the data, they train an AI, they deploy it in the cloud, and it presents itself as a little chatbot, if you will, into which they can type some sentences and ask it whether it thinks they’re happy or sad. And when it’s wrong, they’re like, “Oh, it’s wrong now.” Then there’s a button they can press that says, “I don’t think you’re right.” And then it basically says, “Oh, interesting. I will learn some more.” They can even teach it new emotions. So they teach it things like, “I’m hungry,” “I’m sleepy,” “I’m angry,” whatever it is. And it will basically pick up new categories and learn new stuff. So after the first five minutes, when they interact with it—within about 15 minutes—every child has their own entire, unique AI that reflects whatever emotions they chose to teach and whatever perspective. So if you want to teach the AI that your little brother is the source of all evil, then it will do that. And stuff like that. And then after a while, they’re like, “Oh, I know how this was created. I can see its brain change.” And now you can ask questions about what it even means when we have these programs. Ross: That is so good. Nisha: So that’s what I mean. And it has a wonderful reaction in that it takes away a lot of the—it makes it tangible. Takes away a lot of the fear that this is some strange thing. “I don’t know how it was made.” “I made it. I converted it into what it is. Now I understand my agency and my responsibility in this situation.” So that’s capability—and there’s also an element of creativity—because in every single one of our projects, even at third grade, we encourage a creative use of their own choosing. So when the children are very young, they might teach an AI to learn all about an animal that they care about, like a rabbit. In middle school, they might be looking more at weather and pricing and stuff like that.
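For readers curious what the exercise Nisha describes might look like under the hood, here is a minimal sketch assuming a scikit-learn pipeline. It is illustrative only—not AIClub’s Navigator tool, and the sentences are invented rather than the actual crowdsourced dataset—but it shows how a handful of child-written “happy”/“sad” sentences can train a small text classifier, and how “teaching it a new emotion” is simply retraining with extra labelled examples.

```python
# A minimal, illustrative text classifier in the spirit of the exercise described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy "crowdsourced" training data: a child's-eye view of happy vs. sad.
sentences = [
    "I got chocolate today", "We are going to the beach", "I feel great",
    "My best friend came over", "I have to eat broccoli",
    "My ice cream fell on the floor", "It is raining and I am stuck inside",
    "I lost my favorite toy",
]
labels = ["happy", "happy", "happy", "happy", "sad", "sad", "sad", "sad"]

# Bag-of-words features plus a simple Naive Bayes model: enough to show that
# the pattern comes from examples, not from spotting the literal word "happy".
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, labels)

print(model.predict(["I feel wonderful"]))            # likely ['happy']
print(model.predict(["My toy broke"]))                # likely ['sad']

# "Teaching it a new emotion" is just retraining with a new category added.
sentences += ["I have not eaten all day", "My tummy is rumbling"]
labels += ["hungry", "hungry"]
model.fit(sentences, labels)
print(model.predict(["My tummy is rumbling right now"]))  # likely ['hungry'] now
```

The model simply reflects whatever examples it is given, which is exactly the lesson about truth and responsibility that the students take away from the exercise.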
In high school, they’re doing essentially state-of-the-art research. At this point, we have a massive number of high school students who are professionally published. They go into conferences and they speak next to PhDs and professors and others, and their work is every bit as good and was peer-reviewed and got in entirely on merit. And that, I think, tells me what is possible, right? Because part of it is that when the tools get more powerful, then the human brain can do more things. And the sooner you put— And the beautiful thing about teaching K–12 is they are almost fearless. They have a tremendous amount of imagination. They start getting a little scared around ninth grade, when it kicks in: “Oh, maybe I can’t do this. Maybe this isn’t cool. I’m going to be embarrassed in front of my friends.” But before that, they’re almost entirely fearless. They have fierce imagination, and they don’t really think anything cannot be done. So you get a tool in front of them, and they do all sorts of nifty things. So then I assume these kids, I’m hoping, will grow up to be adults who really can be looking at larger problems, because they know the tools can handle the simpler things. Ross: That is—that is wonderful. So this is a good time just to pull back to the big picture of your initiatives and what you’re doing, and how all of these programs are being put into the world? Nisha: Yeah, absolutely. So we do it in a number of different ways. Of course, we offer a lot of programs on our own. We engage directly with families and students. We also provide curriculums and content for schools and organizations, including nonprofits. We provide teacher training for people who want to launch their own programs. We have a professional training program, where essentially we work with both companies and individuals. With companies, it’s basically that they run a series of programs of their choosing through us. We work both individually with the people in the company—sometimes in a more consultative manner—as well as providing training for various employees, whether they’re product managers, engineers, executives. We kind of do different things. And then individuals—there are many individuals who are trying to chart a path from where they are to where—first of all, where should they be, and then, how can they get there? So we have those as well. So we actually do it kind of in all forms, but we also have a massive content base that we provide to people who want to teach as well. Ross: And so what’s your geographical scope, primarily? Nisha: So we’re actually worldwide. The company—we started out in California. We went remote due to COVID, and we also then started up an office in Asia around that time. So now we’re entirely remote—everywhere in the world. We have employees primarily in the US and India and in Sri Lanka, and we have a couple of scattered employees in Europe and elsewhere. And then most of our clients come from either the US or Asia. And then it’s a very small amount in Europe. So that’s kind of where our sweet spots are. Ross: Well, I do hope your geographical scope continues to increase. These are wonderful initiatives. Nisha: Thank you.  Ross: So just taking that a step further—I mean, this is obviously just this wonderful platform for understanding AI and its role, and for developing capabilities.
But now looking forward to the next five or ten years—what are the ways in which, for example, people who have not yet exposed themselves to that, what are the fundamental capability sets in relation to work? So, I mean, part of this is, of course, people may be applying their capabilities directly in the AI space or technology. But now, across the broader domain of life, work—across everything—what are the fundamental capabilities we need? I mean, building on this understanding of the layers of AI, as you’ve laid out? Nisha: Yeah, so I think that, you know, a general sort of—so if we follow this sort of the four C’s model, right—a general, high-level understanding of how AI works is helpful for everyone. And I mean, you know, and I mean things like, for example, the relationship between AI and data, right? How do AI models get created? One of the things I’ve learned in my career is that—so there’s some sort of thing as an AI life cycle, like, you know, how does an AI get built? And even though there are literally thousands of different kinds of AI, the life cycle isn’t that different. There’s like this relationship between data, the models, the testing, the iteration. It’s really helpful to know that, because that way you understand—when new versions come out—what happened. Yeah, what can you expect, and how does information and learning filter through? You know, context is very critical—of just being aware. And these days, context is honestly not that complicated. Just assume everything that you’re—everything that you interact with—has an AI in it. Doesn’t matter how small it is, because it’s mostly, unfortunately, true. The capability one is interesting. What I would suggest for the most broad-based audience is—really, it is a good idea to start learning how to use these foundation models. So I’m talking about the—you know—these models that are technically supposed to be good at everything. And one of the things—the one thing I’ve kind of noticed, dealing with particularly professionals, is—sometimes they don’t realize the tool can do something because it never occurred to them to ask, right? It’s one of those, like—if somebody showed you how to use the tool to, you know, improve your emails, right? You know the tool can do that. But then you come along and you’re looking for, I don’t know, a recipe to make cookies. Never occurs to you that maybe the tool has an opinion on recipes for cookies. Or it might be something more interesting like, “Well, I just burned a cookie. Now, what can I do? What are my options? I’ve got burnt cookies. Should I throw out the burnt cookies? Should I, you know, make a pie out of them?” Whatever it is, you know. But you can always drop the thing and say, “Hey, I burnt a cookie. Burned cookies.” And then it will probably come back and say, “Okay, what kind of cookies did you burn? How bad did you burn them?” You know, and this and that. “And here are 10 things you can do with them.” So I think the simplest thing is: just ask. The worst thing it’ll do is, you know, it will come back with a bad answer. And you will know it’s a bad answer because it will be dumb. So some of it is just kind of getting used to this idea that it really might actually take a shot at doing anything. And it may have kind of a B grade in almost anything—any task you give it. So that’s a very mental shift that I think people need to get used to taking. And then after that, I think whatever they need to know will sort of naturally evolve itself. 
Then from a professional standpoint, I think—I kind of call it surfing the wave. So sometimes people would come to me and say, “Hey, you know, I’m so behind. I don’t even know where to begin.” And what I tell them is: the good news is, whatever it is that you forgot to look up is already obsolete. Don’t worry about it. It’s totally gone. You know, it doesn’t matter. Whatever’s there today is the only thing that matters. Whatever you missed in the last year—nobody remembers it anymore anyway. So just go out there. Like, one simple thing that I do is—if you use, like, social media and such—you can tailor your social media feed to give you AI inputs, like news alerts, right, or stuff that’s relevant to you. And it’s a good idea to have a feel for: what are the tools that are appropriate in your domain? What are other people thinking about the tools? Then just, you know, pick and choose your poison. If you’re a professional working for a company—definitely understand the privacy concerns, the legal implications. Do not bring a tool into your domain without checking what your company’s opinions are. If the company has no opinions—be extra careful, because they don’t know what they don’t know. So just—there’s a concern about that. But, you know, just be normal. Like, just think of the tool like a stranger. If you’re going to bring them into the house, then, you know, use your common sense. Ross: Well, which goes to the point of attitude. And part of it’s how—this—how do we inculcate that attitude of curiosity and exploration and trying things, as opposed to having to take a class, go into a classroom, before you know what to do? You have to find your own path by learning by doing. But that takes us to that fourth step of creativity, where—I mean, obviously—you need to be creative in how you try to use the tools and see what you learn from that. But also, it goes back to this idea of augmenting creativity. And so, we need to be creative in how we use the tools, but also there are ways where we can hopefully create this feedback loop, where the AI can help us augment or expand our creativity without us outsourcing it to it. Nisha: Absolutely. And I think part of this is also recognizing that—here’s the problem. If you’re—particularly if you’re a professional—this is less an issue for students because their world is not defined yet. But if you’re a professional, there is a ceiling of some kind in your mind, like “this is what I’m supposed to do,” right? And the floor is wherever you’re standing right now. And your value is in the middle. The floor is rising really fast. So if you’re not ready to raise the ceiling, you’re going to have a problem. So it’s kind of one of those things that is not just about the AI. You have to really have a mental shift—that I have to be looking for bigger things to do. Because if you’re not looking for bigger things to do, unfortunately, AI will catch up to whatever you’re doing. It’s only a matter of time. So if you don’t look for bigger things—well, that’s why areas like medicine are flourishing: because there are so many bigger problems out there. And so, some of it is also looking at your job and saying, “Okay, is this an organization where I can grow? So if I learn how to use the AI, and I’m suddenly 10x more efficient at my job, and I have nothing left to do—will they give me more stuff to do?” If they don’t, then I think you might have a problem. And so forth. So it’s one of those—you have to find—there’s always a gap.
Because, look, we’re a tiny little planet in the middle of a massive universe that we don’t know the first thing about. And as far as we know, we haven’t seen anyone else. There are bigger problems. There are way, way bigger problems. It’s a question of whether we’ve mapped them. Ross: Yeah, we always need perspective. So looking forward—I mean, you’re already, of course, having a massive positive impact through what you are doing—but if you’re thinking about, let’s say, the next five years, since that’s already pretty much beyond what we can predict, what are the things that we need to be doing to shape a better future for humans in a world where AI exists, has extraordinary capabilities, and is progressing fast? Nisha: I think really, this is why I focus so much on AI literacy. I think AI literacy is critical for every single human on the planet, regardless of their age or their focus area in life. Because it’s the beginning. It’s going away from the fear and really being able to just understand just enough. And also understanding that this is not a case where you are supposed to become—everyone in the world is going to become a PhD in mathematics. That’s not what I mean at all. I mean being able to realize that the tool is here to stay. It’s going to get better really fast. And you need to find a way to adapt your life into it, or adapt it into you, or whichever way you want to do it. And so if you don’t do that, then it really is not a good situation. So I think that’s where I put a lot of my focus—on creating AI literacy programs across as many different dimensions as I can, and providing— Ross: With an emphasis on school? Nisha: So we have a lot of emphasis on schools and professionals. And recently, we are now expanding also to essentially college students who are right in the middle tier. Because college students have a very interesting situation—that the job market is changing very, very rapidly because of AI. So they will be probably the first ones who see the bleeding edge. Because in some ways, professionals already have jobs—yes—whereas students, prior to graduating from college, have time to digest. It’s this year’s and next year’s college graduates who will really feel the onslaught of the change, because they will be going out in the job market for the first time with a set of skills that were planned for them before this happened. So we do focus very much on helping that group figure out how to become useful to the corporate world. Ross: So how can people find out more about your work and these programs and initiatives? Nisha: Yeah, so we have two websites. Our website for K–12 education is aiclub.world. Our website for professionals and college students—and very much all adults—is aiclubpro.world. So you can look there and you can see the different kinds of things we offer. Ross: Sorry, could you repeat the second URL? Nisha: It’s aiclubpro.world. Ross: aiclubpro.world. Got it? That’s fantastic. So thank you so much for your time today, but also your—the wonderful initiative. This is so important, and you’re doing a marvelous job at it. So thank you.  Nisha: Really appreciate it. Thank you for having me. The post Nisha Talagala on the four Cs of AI literacy, vibe coding, critical thinking about AI, and teaching AI fundamentals (AC Ep2) appeared first on Humans + AI.
