Humans + AI

Ross Dawson
May 28, 2025 • 36min

Sam Arbesman on the magic of code, tools for thought, interdisciplinary ideas, and latent spaces (AC Ep5)

Code, ultimately, is this weird material that's somewhere between the physical and the informational… it connects to all these different domains—science, the humanities, social sciences—really every aspect of our lives. – Sam Arbesman

About Sam Arbesman

Sam Arbesman is Scientist in Residence at leading venture capital firm Lux Capital. He works at the boundaries of areas such as open science, tools for thought, managing complexity, network science, artificial intelligence, and infusing computation into everything. His writing has appeared in The New York Times, The Wall Street Journal, and The Atlantic. He is the award-winning author of books including Overcomplicated, The Half-Life of Facts, and The Magic of Code, which will be released shortly.

Website: Sam Arbesman
LinkedIn Profile: Sam Arbesman

Books
- The Magic of Code
- The Half-Life of Facts
- Overcomplicated

What you will learn
- Rekindling wonder through computing
- Code as a universal solvent of ideas
- Tools for thought and cognitive augmentation
- The human side of programming and AI
- Connecting art, science, and technology
- Uncovering latent knowledge with AI
- Choosing technologies that enrich humanity

Episode Resources

Books
- The Magic of Code
- As We May Think
- Undiscovered Public Knowledge

People
- Richard Powers
- Larry Lessig
- Vannevar Bush
- Don Swanson
- Steve Jobs
- Jonathan Haidt

Concepts and Technical Terms
- universal solvent
- latent spaces
- semantic networks
- AI (Artificial Intelligence)
- hypertext
- associative thinking
- network science
- big tech
- machine-readable law

Transcript

Ross Dawson: Sam, it is wonderful to have you on the show.

Sam Arbesman: Thank you so much. Great to be talking with you.

Ross: So you have a book coming out. When's it coming out?

Sam: It comes out June 10. The name of the book is The Magic of Code, and it's about, basically, the wonders and weirdness of computing—kind of viewing computation and code and all the things around computers less as a branch of engineering and more as almost this humanistic liberal art. When you think of it that way, it should not just talk about computer science, but should also connect to language and philosophy and biology and how we think, and all these different areas.

Ross: Yeah, and I think these things are often not seen in the biggest picture. Not just, all right, this is something that runs my phone or whatever, but it is an intrinsic part of thought, of the universe, of everything. So I think code, in all its many manifestations, does have magic, as you have revealed. And one of the things I love very much—beyond just the title—is that you talk about wonder. When I look at change, I see that humans are so quick to take things for granted, and that takes away from the wonder of what it is we have created. I mean, what do you see in that? How do we nurture that wonder, which nurtures us in turn?

Sam: Yeah. I mean, I completely agree. I guess the positive way to think about it is that we adapt really quickly. But as a result, we kind of forget that there are these aspects of wonder and delight. When I think about how we talk about technology more broadly, or certain aspects of computing and computation, it feels like we have a sort of broken conversation there, where we focus on it as an adversary, or we are worried about these technologies, or sometimes we're just plain ignorant about them. But when I think about my own experiences with computing growing up, it wasn't just that.
It was also full of wonder and delight. My family's first computer was the Commodore VIC-20, and I remember seeing that. Then there was my first experience using a computer mouse on one of the early Macintoshes. And then my first programming experiences, and thinking about fractals and screensavers and SimCity and all these things. These things were just really, really delightful and interesting. And in thinking about them, they drew together all these different domains. And my goal is to kind of try to rekindle that wonder.

I actually am reminded—I don't think I mentioned this story in the book—of a story related to my grandfather. My grandfather lived to the age of 99. He was a lifelong fan of science fiction, and he read science fiction basically since the modern dawn of the genre—I think he read Dune when it was serialized in a magazine. And I remember when the iPhone first came out, I went with my grandfather and my father to the Apple Store to check it out. We were playing with the phone, and my grandfather at one point says, "This is it. This is the object I've been reading about all these years in science fiction."

And we've gone from that moment to basically complaining about battery life or camera resolution. And it's fair to want newer and better things, but we kind of have to take a beat and say, no, no—the things that we have created for ourselves are quite spectacular. And so my book tries to rekindle that sense of wonder. And as part of that process, it tries to show that it's not just this constant march of better camera resolution or whatever it is. It's also this process of touching upon all these different areas that we think about—whether it's the nature of life or art or all these other things. And I think that, hopefully, is one way of providing a healthier approach to technology, rekindling this wonder, and ultimately really trying to connect the human to the machine.

Ross: Yes, yes. What I always point out is that we are inventors, and we have created extraordinary things. We are the creators, and we have created things in our own image. We have a relationship with them, and that relationship is evolving. These are human artifacts. Why they matter, and how they matter, is in relationship to us, which, of course, goes to— You, sorry, go on.

Sam: Oh no, I was just gonna agree with you. Yeah. I feel like, right, these are human artifacts, so therefore we should think about how they can make us the best versions of humans, or the best versions of ourselves, as opposed to sometimes the worst versions of ourselves. Right? So there's a sense of—we have to be kind of deliberate about this, but also remember, right, we are the ones who built these things. They're not just achieving escape velocity, leaving us stuck with the way in which they make us feel or act.

Ross: All right. Well, you're going to come back in a moment, and I'm going to ask you precisely that—how do we let technology make us the best we can be? But on the way there, there are a couple of very interesting phrases you use in the book. "Connection machines"—these are connection machines. Also "universal solvent." You use this phrase both at the beginning and the end of the book. So what do you mean by "universal solvent"? In what way is code a universal solvent?
What does that mean?

Sam: Yeah, so the idea is—it's by analogy with water. Water is often called the universal solvent; it is able to dissolve many, many different things within itself. I think about computing and code and computation as this universal solvent for many aspects of our lives—going back to what I was saying before, when we think about language. It turns out that thinking about code can actually provide insight into how to think about language. If we want to think about certain ideas around how ancient mythological tales are transmitted from generation to generation—it turns out, maybe with a little bit of stretching, you can actually connect that to code and computation and software as well. And the same kind of thing with biology, or certain aspects of trying to understand reality through simulation. All these things have the potential to be dissolved within computing.

Now, it could be that I'm just being overly optimistic about code—like, "Oh, code can do this, but no other thing can do that." It could be that lots of other fields have the ability to connect. Certainly, I love this kind of interdisciplinary clashing of different ideas. But I do think the ideas of computation and computing go beyond just what we would categorize as computer science or programming or software development or engineering. When we think about these ideas—and it turns out there are a lot of really deep ideas within the theory of computation—or the areas that they connect with, it really does impinge upon all these different domains: of science, of the humanities, of the social sciences, of really just every aspect of our lives. And so that's what I'm talking about.

And then you also mentioned this kind of supreme connection machine. I quote this from, I believe, the novelist Richard Powers. He's talking about the power of the novel—how certain novels can really, in the course of their plot and their story, connect so many different ideas. And I really agree with that. But I also think we can say the same thing about computing.

Ross: You know, if we think about the various layers of science—where physics is the study of nature and the universe—physics is basically a set of equations. It is maths. And these are things which are essentially algorithms which we can express in code. But this goes up to the social layer, to the algorithms that drive society. And I also recall Larry Lessig's book Code, back from 25 years ago, with the parallels between code as law and code as software. In fact, a recent innovation in New Zealand has produced machine-readable law—basically embedding legislation in code—so that the law is unambiguous and can be read, and implicitly followed, by machines. So there are multiple facets of code, from social structures down to the nature of the universe.
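As a toy illustration of the "rules as code" idea Ross mentions (the rule, thresholds, and names here are invented, not New Zealand's actual implementation), a statutory rule encoded as an unambiguous function can be applied by a machine the same way every time:

```python
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    weekly_income: float

# A made-up eligibility rule, written the way "rules as code" projects
# encode legislation: precise inputs, one unambiguous answer that a
# machine can apply consistently.
def eligible_for_benefit(p: Person) -> bool:
    return p.age >= 65 and p.weekly_income < 600.0

print(eligible_for_benefit(Person(age=70, weekly_income=450.0)))  # True
print(eligible_for_benefit(Person(age=40, weekly_income=450.0)))  # False
```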
Sam: I love that, yeah. And I do think there is something deep there, right? Because code, ultimately, is this very weird thing. We think of it as text on a screen, but it is only really code when it's actually able to be run. And so it's this kind of thought stuff—these words—but they're very precise, and they are then able to act in the world. And so it's this weird material that's somewhere between the physical and the informational. It's definitely more informational, but it kind of hinges on the real world. And in that way, it has this at least somewhat unique property. And as a result, I think it can connect to all these other different domains.

Ross: So of the three major sections of your book, the middle one is Thought. Of course, we can have code as a manifestation of thought, and we can have code which shapes thought. And one of the chapters is titled Tools for Thought, which has certainly been a lot of what we've looked at on this podcast over a long period of time. So, let's start to dig into that. At a high level, what do you see as tools for thought?

Sam: Yeah, so certainly there's a whole domain of software within this kind of thing. And I actually think there's a really long history here. This is one of the things I like thinking about, and do a lot in the book as well: trying to understand the deeper history of these technologies—where they've come from, what the intellectual threads are. Because one of the other interesting things I've noticed is that a lot of interesting trends now—whether it's democratizing software development or tools for thought or certain cutting-edge things in simulation—are not that new. It turns out most of these ideas were present, if not at the very advent of the modern digital computer, then at least relatively soon after. But these ideas were maybe forgotten, or they just took some time to really develop.

So, for example—well, I'll take a step back. Probably the best way to think about tools for thought is in the context of the classic Steve Jobs line, "the bicycle for the mind." I think he talked about it in the 1970s, at least initially—I think it was based on a Scientific American article he read in the '70s, where there was a chart of the energy efficiency of mobility for different animals. I think the albatross was really efficient, or whatever it was, and some other animals were not so efficient. And humans were pretty mediocre. But things changed if you put a human on a bicycle—suddenly they were much, much more energy efficient, and they were able to be extremely mobile without using nearly as much energy. And his argument is that in the same way a bicycle provides this efficiency and power for mobility, computers can be these bicycles for the mind—allowing us to do the stuff of thought that much more efficiently.

Ross: Well, it's a nice concept.

Sam: Oh yeah, it's very popular.

Ross: The question is, how?

Sam: Yes, yeah. So, how does it work? The classic place—and I actually discuss an even deeper prehistory—but the classic place where people start a lot of this is with Vannevar Bush and his essay in The Atlantic, I think in 1945, As We May Think. He's discussing a lot of different things in this article, but within it, he describes this idea of a tool called the Memex, which is essentially a thought experiment. The way to think about it is, it's kind of like a desk pseudo-computer that involves, I think, microfilm and projections.
But basically, he's describing a personalized version of the web, where you can connect together different bits of information and articles and things you're reading, and traverse all of that information. He kind of had this idea for the web—or at least, if you squint a lot. It was not a reality; the technology wasn't really quite there yet, although he describes it using the cutting-edge technology of the day—microfilm or whatever it was. And then people proceeded with lots of different things around hypertext and so on.

In terms of one of the basic ideas there—what is that tool for thought—it is ultimately the idea of being able to stitch together and interconnect lots of different kinds of information. In the early days of computing, I think a lot of people thought about computers from the perspective of either managing large amounts of information or stepping through things in a linear fashion. And there was this other trend saying, no, no—things should be interconnected, and information should be able to be accessed non-linearly, or based on similar topics, or based on, ultimately, the way in which our brains operate. Because our brains are very associative. We associate lots of different things. You'll say one thing, it'll spark a whole bunch of different ideas in my mind, and I'll go off in multiple different directions and get excited about lots of different things. And we should have a way of using computers that enhances that associative ability—and sometimes complements it, making things a little bit more linear when I want to go very associative. I think that's ultimately the kind of tool for thought people have talked about.

But then there are other ones as well—like using more visual methods to let you manipulate information, or visualize things in a different way that allows you to actually think different thoughts. Because ultimately, one of the nice things about showing your work or writing things down on paper is that it gives you a spatial representation of the ideas you're exploring, and lets you write down all the things you can't immediately hold in your short-term memory. And ultimately, what it comes down to is: humans are limited creatures. Our memories are not great. We're distractible. We associate things really well, but not always as systematically as we want. And the idea is—can a computer, as a tool for thought, augment all these things? Make the way in which we think better, as well as offset the limitations that we have? Because we're pretty bad when it comes to certain types of thinking.

So I think that is the grand vision. And I can talk about how certain trends in AI are helping actually cash a lot of the promissory notes that people have written over many, many years. But that's one broad way of thinking about this broad space of tools for thought—recognizing that humans are finite, and asking how we can do better at what we already want to do, which is think. And to be clear, I don't want computers to act as a substitute for thought. I enjoy thinking. I think the process of thought itself is a very rewarding thing. And so I want these kinds of tools to allow me to feel like the best version of the thinking Sam—as opposed to, "Oh no, this thing can think for me. I don't have to do that."
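As a toy illustration (not from the episode) of the associative, non-linear linking Sam describes, here is a minimal Memex-style note store; the notes and links are invented for the example:

```python
from collections import defaultdict

class NoteGraph:
    """A tiny Memex-style store: notes plus two-way 'trails' between
    them, so information can be traversed associatively rather than
    linearly."""

    def __init__(self) -> None:
        self.notes: dict[str, str] = {}
        self.links: defaultdict[str, set[str]] = defaultdict(set)

    def add(self, title: str, body: str) -> None:
        self.notes[title] = body

    def link(self, a: str, b: str) -> None:
        # Bush's trails run in both directions.
        self.links[a].add(b)
        self.links[b].add(a)

    def neighbours(self, title: str) -> set[str]:
        return self.links[title]

g = NoteGraph()
g.add("Memex", "Vannevar Bush's 1945 thought experiment in As We May Think.")
g.add("Hypertext", "Non-linear text: documents connected by links.")
g.add("Bicycle for the mind", "Steve Jobs's metaphor for computers.")
g.link("Memex", "Hypertext")
g.link("Memex", "Bicycle for the mind")

# Start anywhere and wander associatively, the way the episode
# describes our brains working.
print(g.neighbours("Memex"))  # both linked notes
```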
Ross: So you started off looking at how you can support or augment the implicit semantic networks of our thinking. These are broad ideas where, essentially, we do think in semantic networks of various kinds, and there are ways in which technology can support that. So, coming to the present: as you say, AI has been able to bring some of these to fruition. What specifically have you seen, or do you see emerging, around how AI tools can support us in that richer, more associative or complementary type of prosthesis?

Sam: Yeah, so one basic feature of AI is this idea of being able to embed huge amounts of information in these latent spaces—massively high-dimensional representations of articles or essays or paragraphs, or just information in general. And the locations of those different things are often based on proximity in some sort of high-dimensional semantic space.

The way I think about this is—well before a lot of these current AI advances, there was an information scientist by the name of Don Swanson. He wrote this paper—I think it was the mid-1980s—called… oh, I'm blanking on it, give me a moment. Oh—it was called Undiscovered Public Knowledge. And the idea behind it is: imagine some scientific paper somewhere in the vast scientific literature that says "A implies B." Then somewhere else in the literature—could be in the same subfield, could be in a totally different field—there's another paper that says "B implies C." If you were to read both papers and combine them, you would know that perhaps "A implies C." But because the scientific literature is so vast, no one has actually ever read both of these papers. And so there is this knowledge that is kind of out there, but it's undiscovered—this undiscovered public knowledge.

He was not content to leave this as a thought experiment. He actually used the cutting-edge technology of the day, which was, I think, keyword searches in medical databases—I don't know if they were even online at the time. And he was actually able to find some interesting medical results. I think he published them in a medical journal, which is kind of exciting. That is a very rudimentary version of asking, "Okay, can we find relationships between things that are not otherwise connected?" Now, in this case, it required keyword searches, and it was pretty limited. Once you eliminate some of those barriers, the ability to stitch together knowledge that might otherwise never be connected is enormously powerful. And I think AI, through this idea of embedding information within latent spaces, allows for exactly that.

So the way I think about this is—if you know the specific terms, maybe you can find the specific papers you need. But oftentimes, people are not specifying things in the exact same way. Certainly, if they are in different domains and different fields, there are jargon barriers that you might have to overcome. For example, back when I was a postdoc, I worked in the general field of network science, and I was part of this interdisciplinary email list. I feel like every week, someone would email and say, "Oh, how do I compute this specific network metric?" And someone else would invariably email back and say, "Oh, this has been known for 30 years in physics or sociology," or whatever it was. And it was because people just didn't even know what to search for. They couldn't find the information that was already there. With these much fuzzier latent spaces, a lot of these jargon barriers are just entirely eliminated. And so I think we now have an unbelievable possibility for stitching together all this information—which will potentially create new hypotheses that can be tested in science, new ideas that could be developed—because these different fields are stitched together. There are so many things. So that is certainly one area that I think a lot about.
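To make the latent-space idea concrete, here is a minimal sketch of a Swanson-style probe, assuming the sentence-transformers library; the model choice and the example claims (which echo Swanson's fish-oil and Raynaud's finding) are illustrative:

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works

claims = [
    "Dietary fish oil lowers blood viscosity.",                  # literature 1: "A implies B"
    "Elevated blood viscosity aggravates Raynaud's syndrome.",   # literature 2: "B implies C"
    "Medieval trade routes shaped regional dialect boundaries.", # unrelated control
]

# Normalized vectors, so cosine similarity is just a dot product.
vecs = model.encode(claims, normalize_embeddings=True)

query = vecs[0]
for claim, vec in zip(claims, vecs):
    print(f"{float(query @ vec):+.3f}  {claim}")

# The two viscosity claims should score much closer to each other than
# to the control, even though "fish oil" never appears in the second
# one; the latent space bridges the jargon gap that keyword search
# cannot.
```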
Ross: Yeah. In that domain, absolutely, there's extraordinary potential to, as you say, reveal the latent connections between complementary knowledge across the vast body of knowledge we've created as humanity. There are many more connections there to explore, which will now come to fruition. This does come to the humans-plus-AI piece, where, on one level, the AI can surface all of these connections which might not have been evident, and bring them to the fore. So that is now a critical part of the scientific process. I mean, arguably, a lot of science is connecting what was already there before, and now we're able to supercharge that. So in this humans-plus-AI world, where's the role of the human?

Sam: So that's a good question. I would say I'm hesitant to claim that there's any specific task that only a human can do forever. Any time we say, "Oh, only humans can do this," we are invariably proven wrong, sometimes almost instantly. So I say this with a lot of humility. That being said, I do think in the near term there is a great deal of space for humans to act in this almost managerial role—specifically in terms of taste. What are the interesting areas to focus on? What are the kinds of questions that are important? Then, once you aim this enormously powerful tool in that direction, it goes off, and it's merciless in connecting things and providing hypotheses and suggestions and ideas and potential discoveries and things to work on. But knowing the kinds of questions and the kinds of things that are important, or that will unlock new avenues—at least right now (maybe this will no longer be the case soon), I still think there's an important role for humans to provide that sense of taste or aim, in terms of the directions we should be focusing on.

Ross: So going back to that question we touched on before—how do we as humans be the best we possibly can be? It's a general, broader question, but especially now that we have extraordinary tools, including code in its various guises, to assist us: how do we be the best we can be?

Sam: Yeah, I think that is the singular question of this age, of this moment. And in truth, we should always be asking these questions about being the best versions of ourselves—how we create meaning and purpose, and things like that. I do think a lot of the recent advances in AI are sharpening these kinds of questions.
Going back to what I was saying before—at many moments throughout history, we've said, "Oh, humans are distinct from animals in certain ways," and then we realized, "Oh, maybe animals can actually do some of those kinds of things." And now we are increasingly doing the same kind of thing with AI—saying, "Oh, AI can maybe recommend things to purchase, but it can never write crappy poetry," and guess what? It actually can write pretty mediocre poetry too.

So for me, by analogy, there's this somewhat disparaging idea within theology about how you define the idea of God. Some people will say, "Oh, it's simply anything that science cannot explain yet." This is called the "God of the gaps." And of course, science then proceeds forward, explaining various things in astronomy, cosmology, evolution, all these different areas. And suddenly, if you subscribe to this idea, your conception of God gets narrower and narrower and might eventually vanish entirely. I feel like we are doing the same kind of thing with how we think about AI and humanity: "Oh, here are the things that AI can do, but these are the things that humans can do that AI can never do." And suddenly, that list gets shorter and shorter.

So for me, it's less about what is uniquely human—because that uniqueness is a moving target—and more about what is quintessentially human. What are the things—and this goes back to exactly your question—that we truly want to be focusing on? What are the things that really make us feel truly human, like the best versions of ourselves? Those answers can be very different for different people. Maybe you want to spend your time gardening, or spending time with your family, or whatever it is. But certainly, one aspect of this—related to tools for thought—is that I do think certain aspects of thought and thinking are a quintessentially human activity. Not necessarily unique, because it seems as if AI can actually do, if not real thought, then a very accurate simulacrum of thought. But this is something that does feel quintessentially human—that we actually want to be doing ourselves, as opposed to outsourcing entirely.

So I think, as a society, we have to ask, "Okay, what are the things we do want to spend our time doing?" and then make sure that our technologies are giving us the space to do those things. And I don't have all the answers about what that kind of computational world will look like exactly, or even how to bend the entire world of big tech toward those ends. I think that is a very large and complicated issue. But I do think that these kinds of questions—the ones you asked me and the ones I'm talking about—are the kinds of questions we need to really be asking as a society.

You're seeing hints of that, even separate from AI, in how we're thinking about smartphone usage—especially smartphone usage among children. Jonathan Haidt has been talking about these things over the past several years, and has really caused—at least in the United States—a kind of national conversation around: when should we be giving phones to children? Should we be giving them phones at all? What kinds of childhoods do we want our children to have? And I feel like that's the same kind of conversation we should be having more broadly for technology: What are the lives we want to have? And from that, how can we pick and choose the kinds of technologies we want?
And even though some of these things are out of our hands—in the sense that I cannot unilaterally say, "Oh, large social media giant, change the way your algorithm operates"; they're not going to listen to me—I can still say, "In the absence of you doing the kinds of things that I want, I don't have to play your game. I don't have to actually use social media." So there is still some element of agency in picking and choosing the kinds of technologies you want. Now, it's always easier said than done, because a lot of these things have mechanisms built in to make you use them in ways that are sometimes against your better judgment and the better angels of our nature. But I still think it is worth trying.

So anyway, that's a long way of saying I feel like we need to have these conversations. I don't necessarily have all the answers, but I do think that the more we talk about the kinds of things that make us feel quintessentially human, the more we can start picking and choosing the kinds of technologies that work toward that. So, if we love art, what are the technologies that allow us to make better art—as opposed to just creating, I don't know, AI slop, or whatever people call it? Depending on the specific topic you're focusing on, there are lots of practicalities. But I do think we need to be having this conversation.

Ross: So just to round out, looking at the very wide-ranging ideas in your book—what is your advice, or what are your suggestions, for people in terms of anything they could do to enhance themselves, become better versions of themselves, or be better suited to the world in which we are living?

Sam: That is a great question. I would say it's related to being deliberate—whether it's being deliberate in the technologies you adopt or deliberate about the kinds of things you want to be spending your time on. And it goes even beyond technology. It involves saying, "Okay, what are the kinds of things I want to do, and the kind of life I want to live?" And then picking and choosing the kinds of technology that really feel like they enhance those things, as opposed to diminish them. Because, as much as I talk about computation as this universal solvent that touches upon lots of different things, computing is not all of life. As much as I think there is a need for reigniting wonder and things like that, not everything should be computational. I think it's fine to have spaces where we are a little bit more deliberate about that.

But going back to the sense of wonder, I also think ultimately it is about finding ways of rekindling that wonder when we use certain aspects of our technologies. If we feel like our entire technological life is spent in, I don't know, the fairly bland world of enterprise software and social media, there's not much wonder there. There's maybe anger or rage or various other extreme emotions, but there's usually not delight and wonder. So I would say, on the practical side, a good rule of thumb for the kinds of technologies worth adopting is: the ones that spark that sense of wonder and delight. Because if they do that, then they're probably at least directionally correct in terms of being a little bit more humane, or in line with our humanity.
Ross: Fantastic. So where can people go to find out more about your work and your book?

Sam: My website is just my last name: arbesman.net. On there, you can read about the book. I actually made a little website for the new book, The Magic of Code—it's themagicofcode.com. If you go to that, you can find out more about the book. And on arbesman.net, you can also find links to subscribe to my newsletter and various other sources of my writing.

Ross: Fantastic. Loved the book, Sam. Wonderful to have a conversation with you. Thanks so much.

Sam: Thank you so much. This was wonderful. I really appreciate it.
May 21, 2025 • 26min

Bruce Randall on energy healing and AI, embedding AI in humans, and the implications of brain-computer interfaces (AC Ep4)

I feel that the frequency I have, and the frequency AI has, we're going to be able to communicate based on frequency. But if we can understand what each is saying, that's really where the magic happens. – Bruce Randall

About Bruce Randall

Bruce Randall describes himself as a tech visionary and Reiki Master who explores the intersection of technology, human consciousness, and the future of work. He has over 25 years of technology industry experience and is a longtime practitioner of energy healing and meditation.

Website: Bruce Randall
LinkedIn Profile: Bruce Randall

What you will learn
- Exploring brain-computer interfaces and human potential
- Connecting reiki and AI through frequency and energy
- Understanding the limits and possibilities of neural implants
- Balancing intuition, emotion, and algorithmic decision-making
- Using meditation to sharpen awareness in a tech-driven world
- Navigating trust and critical thinking in the age of AI
- Imagining a future where technology and consciousness merge

Episode Resources

Companies & Organizations
- Neuralink
- Synchron
- MIT

Technologies & Technical Terms
- Brain-computer interfaces
- AI (Artificial Intelligence)
- Agentic AI
- Neural implants
- Hallucinations (in AI context)
- Algorithmic trading
- Embedded devices

Practices & Concepts
- Reiki
- Meditation
- Sentience
- Consciousness
- Critical thinking

Transcript

Ross Dawson: Bruce, it's a delight to have you on the show.

Bruce Randall: Well, Ross, thank you. I'm pleased to be on the show with you.

Ross: So you have some interesting perspectives on, I suppose, humanity and technology. I'd just like to hear, in brief, how you got to your current perspectives.

Bruce: Sure. Well, when I saw Neuralink put a chip in Nolan's head, he could work the computer mouse with his thoughts—and he said sometimes it moves on its own, but it always goes where I want it to go. So that, to me, was fascinating: how, with the chip, we can do things like sentience and telecommunications and so forth that most humans can't do. With the chip, all of a sudden, all these doors are open, and we're still human. That's fascinating to me.

Ross: It certainly extends our capabilities. It's been done in smaller ways in the past, and now in far bigger ways. So you have a deep technology background, but also some other aspects to your worldview.

Bruce: I do. I've sold cloud, I've been educated in AI at MIT, and I've built my first AI application. So I understand it, I believe, from all sides, because I've actually done the work instead of just read the books. And for me, this is fascinating, because AI is moving faster than anything we've had in recent memory, and it directly affects every person—because we're working with it, or we can incorporate it in our body to make us better at what we do. And those possibilities are absolutely fascinating.

Ross: So you describe yourself as a Reiki Master. What is Reiki, and how does that work? What has its role been in your life?

Bruce: Well, a Reiki Master can connect with the universal energy that's all around us. It means I have a bigger pipe to put it through me, so I can direct it to people or things. And I've had a lot of good experiences where I've helped people in many different ways. The Reiki and the meditation came after that, and that brought me inside, to find who I truly am and to connect with everything that has a vibration I can connect with.
That perspective, with the AI and where that's going—AI is hardware, but it produces software-type abilities, and so does the energy work that I do. They're similar, but they're very different. And I believe that everything is a vibration. We vibrate, and so forth. So those vibrations should be able to come together at some point. We should be able to communicate with it at some level.

Ross: If we look at the current state of scientific research into Reiki, there seem to be some potential low-level, small-population results. So it doesn't seem to be a big tick. There does appear to be something, but I think it's fair to say there's widespread skepticism in mainstream science about Reiki. So what's your justification, I suppose, for this as a useful perspectival tool?

Bruce: Well, I've had an intervention where I actually saved a life, which I won't go into here. My body moved, and I said, I don't know why I'm doing this, but I went with the body movement and ended up saving a life. To me, that proved, beyond a shadow of a doubt, that there's something there other than just what humans can see and feel. And that convinced me. Now, it's hard to convince anybody else. It's experiential, so I really can't defend it, other than to say that I have enough experiences to know it's real.

Ross: Yeah, and I think that's reasonable. So let's come back to the analogy, or linkage, you are painting between the underlying energy in Reiki that you experience and AI's augmentation of humans and humanity.

Bruce: Well, everything has a vibration or frequency. Music has a frequency. People have a frequency. And AI has a frequency. So when you put AI in somebody, there's the ability, at some point, for them to communicate with that AI beyond the electrical-signal communication. And if that can be developed with the electrical signal from the AI chip, that person can grow leaps and bounds in all areas—not just intelligence—but they have to develop that first.

Now, AI is creating—or is potentially creating—another class of people. As Elon said in the first news conference, if you're healthy and you can afford it, you too can have a chip. So that's a form of commercialization. You may not need to be a quadriplegic to get a chip; if you can afford it, you can potentially have a chip too. So that puts commercialization at a very high level. But when it gets normalized and the price becomes more affordable, I see it as something that more mainstream people can get if they choose to. Now, will there be barriers or parameters on that—where you can only go so far with it? Or if you get a chip, can you do whatever you want? Those are some of the things I look at. We're moving forward, but we have to do it thoughtfully, because we have to look at all the implications, instead of just how fast and how far we can go.

Ross: Yeah, well, for a long time I've said that if you look at the advancement of brain-computer interfaces—in the first phase, of course, they're used to assist those who are not fully abled. And then there's a certain point when, through safety and potential advantages, people who are not disabled will choose to use them. That's still not a point we've reached—probably not even close at this stage. But still, the massive constraint is the input-output bandwidth of the brain-computer interfaces of today.
Still, there's the "1000 bits per second" rule, or something very similar—so it's very low bandwidth—and there's potential to expand that. But that is still essentially bits: low levels of information in and out. So that's a different thing to what you are pointing to, where there are things beyond simple information in and out. So, for example, the ability to control the computer mouse with your brain…

Bruce: Right. But that's the first step. And the fact that we got to the first step and we can do that—it's like we had the Model A, and all of a sudden, a couple of decades later, we've got these fancy cars. That's a huge jump in a relatively short period of time. And with all the intelligence and creativity of the scientists putting this together, I do believe we're going to get advances in the short and medium term that are really going to surprise people—in what we can do as humans with AI, either embedded or connected in some fashion. Because you can also cut the carotid and put a capsule in, and you've got AI bots running throughout your body. Now, that's been proven to work, and it's something that hasn't gotten a lot of press. We've got other ways we can access the body with AI, and it's a matter of figuring out which is best, what the risks are, what the parameters are, and how we best move forward.

Ross: Yeah, it sounds like you're referring to Synchron, which is able to insert something into the brain through the blood vessels. But that's not something that runs through the body—it's simply an access point to the brain for Synchron. Which is probably a stronger approach—well, can be—than the Neuralink approach, which directly interfaces with the brain tissue. So if you think about it as an input-output device, that's quite simple: we can take information into our brain, through whatever sense—that's still being used a bit less—and we can also output, as in, we can take thoughts or directions and use them as outputs to devices. So can you point to specific use cases that you would see as the next steps for using BCIs—brain-computer interfaces—with AI?

Bruce: Yeah, I think we're just at the very beginning of that. I think there are ways to connect the human with the AI that can improve on where we are right now—I just don't think we know the best way to do that yet. We're experimenting. And I think there are many other ways we can accomplish the same thing. It's in the development stages. We're really on the upward side of the bell curve for AI, and we've got a long way to go before we get to the top.

Ross: Yeah, I asked for specifics. So what specifically do you see as the use cases for the next steps?

Bruce: Well, for specifics, in science and medicine I think there are significant use cases where people can process faster and better with AI than we can process right now. That's pure information. And then they can take the intelligence they have as humans, analyze that quickly, and get there faster. In an ER situation, there is a certain amount of error from mistakes that are made. With AI, you can fine-tune that so you have fewer errors and can make better choices going forward. There are many other cases like that. You could be on the trading floor, where everything is a matter of ratios and so forth.
Or you could be in an office, trading in real time on the machines. At that point, you're looking at a lot of different screens and trying to make a decision. If you had AI with you, it would speed your processing time, and you could make better decisions faster—because time is of the essence in both of those examples. And AI could help with that. Now, is that a competitive and comparative advantage? I would say so, but in a good way—especially in the medical field.

Ross: Yes, so you're talking about AI very generically—this idea of humans-plus-AI decision-making. Essentially, you can have human-only decisions, and you can have AI decisions. In many cases—trading, for instance—algorithmic trading is fully delegated to AI because the humans can't make the decisions fast enough. So are there any particular structures? What specific ways do you see AI playing a role in those kinds of decision-making? You mentioned being able to point to potential errors, or flag them, and so on. What are other ways in which you can see decisions being made—in medical or financial or other contexts—where there is an advantage to human-AI collaboration, as opposed to having them both separate, and how would that happen?

Bruce: Well, in the collaboration—AI still has hallucinations right now, so you have to get around that to make this more reliable. But once you train AI for a specific vertical, that AI is going to work better in that vertical than an untrained one. So that's really the magic of how you get it to work better. And then agentic AI has the ability to make decisions, and you have to gauge that against the human ability to make decisions, to make sure it's in line. You could always put a check and balance in place where, if the AI wanted to do something in a fast-moving environment and you weren't comfortable with that, you could say no—or you could let it go. That could be in an earpiece. It could be embedded. There are many different ways to do it; it could be through a speaker they're communicating with—that's an easy way to do it. We are auditory creatures—we see, we hear, and we speak—and that's how we take in information, so that's what it's going to be geared to. Those devices are being developed right now so it all works together. We're not there yet, but this is where I see it going in both those environments, where you can have a defined benefit from AI working with humans.
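A minimal sketch of the check-and-balance Bruce describes, where the AI proposes and a person approves or vetoes; the Proposal type, confidence field, and threshold are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    confidence: float  # the model's own confidence, 0..1 (illustrative)

def human_in_the_loop(proposal: Proposal, auto_threshold: float = 0.95) -> bool:
    """Let high-confidence actions through automatically; everything
    else waits for an explicit human yes/no."""
    if proposal.confidence >= auto_threshold:
        return True  # "you could let it go"
    answer = input(f"AI wants to: {proposal.action!r} approve? [y/n] ")
    return answer.strip().lower() == "y"

if human_in_the_loop(Proposal("sell 500 shares of ACME", confidence=0.62)):
    print("executing")
else:
    print("vetoed by the human")
```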
Ross: So one of the things deeply discussed at the moment is AI's impact on critical thinking. Many people are concerned that because we are delegating complex thinking to AI, in many cases we become lazy, or less capable of doing some of that critical thinking. Whereas in other domains, some people are finding ways to use AI to sharpen their thinking, find different perspectives, or otherwise add to their own cognition. So what are your perspectives or beliefs, specifically, on how we can best use AI as a positive complement to our cognitive and critical thinking, and to our ability to develop it?

Bruce: Well, we think at a very fast rate, and scientists don't yet understand the brain in its full capacity—and we don't understand AI to its full capacity. So I would say we need to work in both areas to better understand them, to find the common denominator where both can work together. Because it's like having two people: you've got, for example, the agentic AI, which has somewhat of a personality along with its data, and then you've got us, with data and with emotions. Those are hard to mix when you put the emotions in, right? We also have a gut feel, which is pretty accurate. When you put all that together, you've got conflicts, and you have to figure out how to overcome them to build a better system.

Now, once you have trust in it, you can just rely on it and move forward. But as humans, we have a hard time giving trust to something when it's important. We rely on our own abilities more than on a piece of technology. So that bridge has to be crossed, and we haven't crossed it yet. And at the same time, humans have done a pretty good job in some very trying situations. AI hasn't been tested in those yet, because we're very early in the stages of AI. When we get to that point, then we're going to start working together and comparing—and really answering your question. Because right now, both sides have valid points, but we don't yet know who's right.

Ross: Yeah, there's definitely a pathway through a few elements you raised there. One is trust: how do we get justified trust in systems so they can be useful? There are conflicts around decision-making, and the question of to what point we trust our own validation of our own decision-making or thinking, in a way that lets us effectively improve our decisions through that external perspective or addition. So, you have a deep practice of meditation, amongst other things. And we are living in a deluge of information, which is certainly continuing to increase. What would your advice be for how to stay present and sharp and connected, and be able to deal with the very interesting times we live in?

Bruce: Well, that's a big question, but I'll give you a short answer. My experience with meditation is that I've gotten to know myself much better, and it's fine-tuned who I am. Now, you can listen to a tape and make incremental moves with that to relax, but I suggest meditation as a great way to expand in all areas—because it's expanded all areas for me. And it's a preference; it's an opinion based on experience. Everybody has different paths and would have different experiences with it. It's an option.

But what I tell everybody—because there are a lot of people who still aren't into AI to the extent they need to be—is: take 20 to 30 minutes a day, in the vertical you're in, to understand AI and how it can enable you. Because if you don't, in two years you're going to be looking from behind at the people who have, and it's going to be very hard to catch up.

Ross: So, a slice of time for studying AI and a slice of time for meditation, right?

Bruce: Yeah, I do. I do 30 minutes twice a day, and I've fit it in for 12 years in a busy schedule. So it's doable. It may not be easy, but it's doable.

Ross: Yes, yes. Well, I personally attest to the benefits of meditation, though I'm not as consistent as you are.
But I think that's where there is some pretty solid evidence—very solid evidence—that meditation is extremely beneficial on a whole range of fronts, including physical health as well as mental well-being, the ability to focus, and many other things that are extremely useful in the busy world we live in.

Bruce: The scientific explanation is correct.

Ross: Yeah, and it's very well validated, for those who have any doubts. So, to round out, let's paint a big picture. I'd like to let you go wild. What is the potential? Where can we go? What should we be doing? What's the future of humanity?

Bruce: Well, that's a huge question. And AI is not there yet. But humans—because I've been able to do some very unusual things with my combination—I feel that the frequency I have, and the frequency AI has, we're going to be able to communicate based on frequency. But if we can understand what each is saying, that's really where the magic happens. And I see people's consciousness increasing, just because humanity is increasing. And they're discussing sentience and AI. I don't know—I understand it, but I don't know where they're going with it. Because sentience comes with being born with a soul; a piece of software can be very intelligent, but it's not going to have that, in my opinion.

Now, will a hybrid of person and AI come out? Doubtful, but it's possible. There are a lot of possibilities for the future without a lot of backup for them. But I know that if you work on yourself with meditation and getting to know yourself better, everything else happens much more easily than if you don't. And I think with AI, the sky's the limit. What does the military have with AI that we don't have, right? There are a lot of smart people working with AI who aren't in public, and we don't know where they are. But we know they're making progress, because every once in a while we hear something.

And I was watching a video on LinkedIn—they had mapped the mouth area, and this person could go through seven different languages while he was walking and talking, and his lips matched the words. At that point—and this was a month ago—I said, now I'm not sure if I'm watching somebody actually saying something, or if it's AI. So we make advancements, and then we look at them and say, who can I believe now? Because it's hard to tell.

Ross: Yes.

Bruce: So I hope that gives a sense of what I think is possible in the future. Where we go—who knows?

Ross: Yeah, the future is always unpredictable, but a little more now than it ever has been. And one aspect of that is, indeed, the blurring of the boundaries of reality—knowing what is real and what is otherwise. And so I think this still comes back to: we do know that we exist. There is still a little bit of the "I think, therefore I am," as Descartes declared, which we still feel is valid. And beyond that, all the boundaries of who we are as people, as individuals, and who we are as humanity are starting to become a lot less clear than they have been.

Bruce: And it will get less clear, I think, until it gets clearer.

Ross: So thanks, Bruce, for your time and your perspectives. I enjoyed the conversation.

Bruce: Thank you, Ross. I appreciate your time, and I enjoyed it also.
May 14, 2025 • 37min

Carl Wocke on cloning human expertise, the ethics of digital twins, AI employment agencies, and communities of AI experts (AC Ep3)

We're not trying to replace expertise—we're trying to amplify and scale it. AI wants to create the expertise; we want to make yours omnipresent. – Carl Wocke

About Carl Wocke

Carl Wocke is the Managing Director of Merlynn Intelligence Technologies, which focuses on human-to-machine knowledge transmission using machine learning and AI. Carl consults with leading organizations globally in areas spanning risk management, banking, insurance, cybercrime, and intelligent robotic process automation.

Website: Emory Business | Merlynn-AI
LinkedIn Profile: Carl Wocke

What you will learn
- Cloning human expertise through AI
- How digital twins scale decision-making
- Using simulations to extract tacit knowledge
- Redefining employee value with digital models
- Ethical dilemmas in ownership and bias
- Why collaboration beats data sharing
- Keeping humans relevant in an AI-first world

Episode Resources

Companies / Groups
- Merlynn
- Emory

Tech and Tools
- Tom (Tacit Object Modeler)
- LLMs

Concepts / Technical Terms
- Digital twin
- Tacit knowledge
- Human-in-the-loop
- Knowledge engineering
- Claims adjudication
- Financial crime
- Risk management
- Ensemble approach
- Federated data
- Agentic AI

Transcript

Ross Dawson: Carl, it's wonderful to have you on the show.

Carl Wocke: Thanks, Ross.

Ross: So tell me about what Merlynn, your company, does. It's very interesting, so I'd like to learn more.

Carl: Yeah. I think the most important thing in understanding what Merlynn is about is that we're different from traditional AI in that we're sort of obsessed with the cloning of human expertise. Where traditional AI looks at data sources and the data they generate, we are passionate about cloning human experts.

Ross: So part of the process, I gather, is to take human expertise and embed it in models. Can you tell me a bit about that process? How does that happen? What is that process of—what has in the past been called knowledge engineering?

Carl: Yeah. So we've built a series of technologies. The primary technology is called Tom, which stands for Tacit Object Modeler. Tom is a piece of AI that has been designed to simulate a decision environment. You are placed as an expert into the simulation environment, and through an interaction, or discussion, with Tom, Tom works out what the heuristic is—what the subconscious judgment rule is that you use as an expert. The way the technology works is that you describe your decision environment to Tom. Tom then builds a simulator. It populates the simulator with data derived from the AI engine, and based on the way you respond, the data evolves. So what's happening in the background is that the AI engine is predicting your decision, and based on your response, it will evolve the sampling landscape and start to close in on the model. So it's an interaction with a piece of AI.

Ross: So you're putting somebody in a simulation, seeing how they behave, and using their behavior in that simulation to extract, I suppose, implicit models of how they think and make decisions.

Carl: Absolutely. And I think there are two main things to consider. The first is that Tom will model a discrete decision—what would Ross do when presented with the following environment?—and that discrete decision can typically be modeled within an hour. The second is that there's no data needed in the process. Validation is done through historical data, if you like. But yeah, it's an exclusive sort of discussion between you and the AI, if that makes sense.
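Merlynn's actual Tom algorithm is proprietary, but the interaction pattern Carl describes (generate scenarios, ask the expert, refit, and steer later questions toward the cases the model is least sure about) resembles classic active learning. Here is a generic sketch using scikit-learn; all features, labels, and the stand-in "expert" rule are invented:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
pool = rng.uniform(size=(500, 3))  # candidate scenarios, 3 made-up features

def ask_expert(scenario: np.ndarray) -> int:
    # Stand-in for the human expert's tacit judgment rule.
    return int(scenario[0] + 0.5 * scenario[1] > 0.8)

X: list[np.ndarray] = []
y: list[int] = []
model = DecisionTreeClassifier(max_depth=3)
fitted = False

for _ in range(20):
    if fitted:
        # Query the scenario the current model is least certain about.
        proba = model.predict_proba(pool)[:, 1]
        idx = int(np.argmin(np.abs(proba - 0.5)))
    else:
        idx = int(rng.integers(len(pool)))  # seed with random scenarios
    X.append(pool[idx])
    y.append(ask_expert(pool[idx]))
    if len(set(y)) > 1:  # need both classes before fitting
        model.fit(np.array(X), np.array(y))
        fitted = True

print(f"'digital twin' model learned from {len(X)} expert answers")
```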
But yeah, it’s an exclusive sort of discussion between you and the AI, if that makes sense. Ross: So when people essentially get themselves modeled through these frameworks, what is their response when they see how the model that’s being created from their thinking responds to decision situations? Do they say, “These are the decisions I would have made?” I suppose there’s a feedback loop there in any case. But how do people feel about what’s been created? Carl: So there is a feedback loop. Through the process, you’re able to validate and test your digital twin. We refer to the models that are created as your digital twin. You can validate the model through the process. But what also happens—and this is sort of in the early days—is the expert might feel threatened. “You don’t need me anymore. You’ve got my decision.” But nothing could be further from the truth, because that digital twin that you’ve modeled is sort of tied to you. It evolves. Your decisions as an expert evolve over time. In certain industries, that happens quicker. But that digital twin actually amplifies your value to the organization. Because essentially what we’re doing with a digital twin is we’re making you omnipresent in an organization—and outside of the organization—in terms of your decisions. So the first reaction is, “I’m scared, am I going to have a job?” But after that, as I said, it amplifies your value to the organization. Ross: So there are a couple of things to dig into there—but let’s dig into this one for now, which is: what are the mechanics? There are some ways we can say, “All right, my expertise is being captured,” and so then that model can do that work, not me. But there are other mechanisms where it amplifies value by, as you say, being able to be deployed in various ways. So can we unpack that a little bit in terms of those dynamics of value to the person whose expertise has been embodied in a digital twin? Carl: Yeah, Ross, that’s really sort of a sensitive discussion to have, in that when someone has been digitized, the role that they play in the organization is now able to potentially change. So we have customers—banking customers—that have actually developed digital twins of compliance expertise. Those compliance experts can now go and work at the clients of the bank. So the discussion or the relationship between the employer and the employee might well need to be revisited within the context of this technology. Because a compliance expert at a bank knows that they need to work the following hours, they have the following throughput. They can now operate anywhere around the world, theoretically. So the value to the expert within a traditional corporate environment—or employer-employee environment—is going to be challenged. When you look at an expert outside of the corporate environment—so let’s say you’ve got someone who’s a consultant—they are able to digitize themselves and work pretty much anywhere around the world, in multiple organizations. So, look—we don’t have the answer. Whose IP is it? That’s another question. We’ve had legal advice on this. Typically, the corporate who employs the employee would be the owner. But if the employee leaves the organization, what happens to the IP? What happens to the digital twin? So as Merlynn, we’ve sort of created this stage. We don’t have the answers, but we know it’s going to get interesting. Ross: Yeah.
So Gartner predicted that by 2027, 70% of organizations will be putting something in their employee contracts about AI representations, if I remember the statistics correctly. And then I suppose the nature of those agreements is, as you say, still being worked out. And so these are fraught issues. But I think the first thing is to resurface them and be clear that they are issues, so that they can be addressed in a way which is fair for the individuals as well as the organizations. Carl: I think, Ross, just to add to that as well—the digital twin is now able to be sort of placed at an operational level, which also changes the profile of work that the employee typically has. So that sort of feeds the statement around being present throughout the organization. So the challenges are going to be, well, I’m theoretically doing a lot more, and therefore I understand the value I’m contributing. But yes, absolutely an interesting space to watch right now. Ross: And I think there’s an interesting point here where machine learning is domain-bounded based on the dataset that it has been trained on. And I think that any expertise from an individual—I mean, people, of course, build a whole body of expertise in a particular domain because they’ve been working, essentially—but what they have also done at the same time is enhanced their judgment, which I would suggest is almost always cross-domain judgment. So a person’s judgment is still something they can apply across multiple domains. You can embody it within a specific domain and capture that in a system, but still, the human judgment is—and will remain, I think, indefinitely—a complement to what any AI system can do. Carl: Absolutely. I think when you look at the philosophical nature of expertise, an expert—and this is sort of the version according to Carl here—is someone who cannot necessarily or readily explain their expertise. If you could defend your expertise through data, then you wouldn’t be needed anymore, and you wouldn’t actually be an expert anymore. So an expert sort of jumps the gaps that we have within data. What we found—and Merlynn has been running as an AI business for the last nine, ten years now, so we’ve been in the space for a while—is that the challenge with risk is that risk exists because I haven’t got enough data. And where I have a risk environment, there’s a drain on the expertise resource. So experts are important where you have data insufficiency. So absolutely, to your point, I think the nature of expertise—when one looks at the value of expertise, specifically when faced with areas that have inherent risk—we cannot underestimate the value of someone making that judgment call. Ross: So to ground this a little bit, I know you can’t talk too much about your clients, but they include financial services, healthcare, and intelligence agencies around the world. And I believe you come from a significant risk background. So without necessarily being too explicit, what are some examples of the use cases, or the domains in which organizations are finding this useful and relevant—and a match for the ability to extract or distill expertise? Carl: So we focused on four main areas as a business, and these are areas that we qualify because they involve things that need to be done. As a business, we believe it makes business sense to get involved in things that the world needs help with. So we focused on healthcare, banking, insurance, and law enforcement.
I’ll speak very high-level on all of these. In healthcare, we’ve deployed our technology over the last four or five years, creating synthetic or digital doctors making critical decisions. In the medical environment, you can follow a textbook, and there’s a moment where you actually need a second opinion or you need a judgment call. We never suggest replacing anything that AI is doing at the moment, or any of these phenomenal technologies. The LLMs out there—we think—are phenomenal technologies. We just think there’s a layer missing, which is: we’ve reached this point, and we’ve got to make that judgment call. We would value the input of a professor or an expert—a domain expert. So would there be benefit in that? In the medical space—treatment protocols, key decisions around being admitted—those are environments where you’ve got a protocol, but you don’t always get it right. And the value of a second opinion—our technology plays that second-opinion role. Where you’re about to do the following, but it might not be the correct approach. There are two industries where we don’t think we’re going to make money, but we know we need to do it, and medical is one of them. Imagine a better world where we can have the right decision available at the right time, and we’ve got the technology to plan that decision. So when you talk about telemedicine, you can now have access to a multitude of decisions in the field. What would a professor from a university in North America say? Having said that, we work with the Emorys of the world—Emory Medical, Emory University—building these kinds of technologies. So that’s medical. On the insurance side, we’ve developed our technology to assist the insurance industry in anything from claims adjudication to fraud to payments. You can imagine the complexity of decisions that are found within the processes in insurance. In banking, we primarily focus on financial crime, risk, compliance, money laundering, terrorist financing-type interventions. If I can explain the complexity of the banking environment: you’ve got all manner of AI technology that’s deployed to monitor transactions. A transaction is flagged, and that flagged transaction needs to be adjudicated by a human expert. That’s quite telling of the state of AI, where the AI does all of the heavy lifting, but you still have that moment where you need the expert. And that really is a bottleneck. Our technology clones your champion—or best-of-breed—expert within that space. You go from a stuck piece of automation to something that can actually occur in real time. And then the last one is within the law enforcement space. So we sponsor, here in South Africa, a very innovative collaboration environment, which comprises law enforcement agencies from around the world. We’ve got federal law enforcement agencies in North America. We’ve got the Interpols, Europols. We’ve got the Federal Police—Australian Federal Police—who participate. So law enforcement from around the world, where we have created what they refer to as a safe zone, and where we have started to introduce our technology to see if we can help make this environment better. The key being the ability to access expertise between the different organizations. Ross: So in all of these cases, are you drawing on—modeling—people who are working for these organizations, or are you building models which are then deployed more broadly? Carl: Yeah, so in that line—well, in fact, across all of them—you know, there are two answers to that.
The one is that organizations that deploy the technology will obviously build a library of digital twin expertise and deploy that internally. What we’re moving towards now is a platform that we’ve launched where organizations can collaborate as communities to fight, you know, joint risk. I’ll give you an example to sort of make that clearer. So we won an innovation award with Swift. So Swift is a sort of payments-type platform, a monitoring-type platform. They’ve got many roles that they play. They’ve got 12,000 banks, and the challenge that they posed was: how do we get the banks to collaborate better? And what we suggested was: if one bank is attacked, what if you could draw on the expertise of the other banks? So if you’ve got a cyberattack or you’ve got some kind of financial crime unfolding, what if there’s a way for you to pool the expertise? And I think that model allowed us to win that challenge, which answers the second part of the question, which is: do you bring expertise from outside of the organization? We see a future where collaboration needs to take place, where we face common risk, common challenges. So the answer is both. Ross: Yes, I can see that. I mean, there are some analogs in federated data, where you essentially structure data so that it’s available as a pool without exposing it fully—for example, the MELLODY Consortium in healthcare. But I think there are other ways. And so there’s Visa as well—it has some kind of a system for essentially sharing data on risk, which is aggregated and made available across the network. And of course, you know, there are then the choices to be made inside organizations around what you share to be available, what you share in an anonymized or hidden fashion, or what you don’t share at all. And essentially, there’s more and more value in ecosystems. And I would argue there’s more and more value, particularly in risk contexts, in the sharing that makes this valuable for everyone. Carl: Ross, if I can just add to that, I mean, you can share data, which has got so many compliance challenges. You can share models that you created with the data, which I think is being exploited or explored at the moment. The third is, I can share my experts. Because who do you turn to when things go off script? My experts. So they’re all valid. But the future—certainly, if we want to survive—I mean, we have sight of the financial crime that’s being driven out there. It’s a war. And at times I wonder if we’re winning the war. So, if we want to survive, we have to find ways to collaborate in these critical environments. It’s critical. And yet, we’re hamstrung by not being able to share data. I’m not challenging that—I think it’s important that that is protected. But when you can’t share data, what am I sharing? I go to community meetings in the form of conferences, you know, from time to time, and share thoughts and ideas. But that’s not operational. It’s not practical. So we have to share our experts. As Merlynn, we see expertise—and that second-opinion, monitoring, judgment-type resource—as so critical. It’s critical because it’s needed when things go off script. We have to share this. So, yeah. Ross: Yeah. So, moving on—you also have this concept, or, I’m not sure, maybe you’ve decided to put it into practice—of an AI employment agency. So what is that? What does it look like? What are the considerations in that? Carl: Yeah. So, the AI employment agency is a platform that we’ve actually established.
So, I’m going to challenge you on the word “concept”—the platform’s running. It’s not open to the public, but it’s a marketplace—an Amazon-style marketplace—of digital twins. So if I want to hire a compliance officer, and I’m a bank here in South Africa, I can actually go and hire expertise from a bank in America. I can hire expertise from a bank in Europe. So, the concept or the product of the AI employment agency is a platform which facilitates creation and consumption. As an expert, we see a future where you can create a digital version of your expertise. And as a consumer—being the corporates, in fact, I suppose individuals would also be consumers—at the moment it’s corporates, but corporates can come and access that expertise. And a very interesting thing happens. I’ll give you a practical example out of a banking challenge. Very often, a bank has a thing called a “spike,” which is a new name added to a global watchlist database of undesirables. The bank has got to check their client base for potential matches, and that’s an instant sort of drain on expert resource. What you could do with the employment agency is I could hire an expert, bring them into the bank for the afternoon to solve the challenge, and then just as readily let them go—or fire them out of that process. So I think, just to close off on that, the fascination for me is: as we get older, hopefully we get wiser, and hopefully we stay up to date. But that skill—what happens to that skill? What if there’s a way for us to mobilize that skill and to allow people to earn off that skill? So the AI employment agency is about digitizing expertise and making it available within a marketplace. We’re going to open it up probably within the next 12 months. At the moment, it’s operational. It’s making a lot of people a lot of money, but we’ve got to be very careful once we open the gates. Ross: But I think one of the underlying points here is that you are pointing to this humans-plus-AI world, where these digital twins are complements to humans, and where and how they’re being deployed. Carl: Yeah. I think the—you know, I often see the areas where we differ from traditional AI approaches. And again, not negating or suggesting that it’s not the approach. But when you look at a traditional AI approach, the approach is to replace the function. So replace the function with an AI component. The function would be a claims adjuster. And the guardrails around that—that’s a whole discussion around the agentic AI and the concerns around that. It brings hallucination discussions and the like. Our version of reality is—we’re dealing with a limitation around access to expertise, not necessarily around the expertise itself. Whereas AI wants to create the expertise, we want to amplify and scale the expertise. So they’re different approaches to the same challenge. And what we found is that both of them can live in the same space. So AI will do its part, and we will bring the “What does Ross think about the following?” moment, which is that key decision moment. Ross: So I guess one of the issues of modeling—creating digital twins of humans—is that humans are… they may be experts, but they’re also fallible. Some are better than others, some more expert than others, but nobody is perfect. And part of that is, people are biased. They have biases in potentially a whole array of different directions. So do all of the fallibility and the bias and the flaws of humanity get embedded in the digital twin? And if so, or if not, how do you deal with that?
Carl: Well, Ross, you might lose a whole lot of listeners now, but bias is a—well, let’s look at expertise. Expertise is a point of view that I have that I can’t validate through data. So within a community, they’ll go, “Carl’s an expert,” but we can’t see it in the data, and therefore he might be biased. So the concept of expertise—I see the world through positive bias, negative bias. A bias is a position that you hold that, as I said, is not necessarily accepted by the broader community, and expertise is like that. An expert would see something that the community has missed. So, you know, I live in South Africa. If you stop on the side of the road, it’s probably a dangerous exercise. But if there’s an animal, I’m going to stop on the side of the road. And that might be a sort of bad bias, good bias. “Why did you do that?”—you put your family at risk and all of those things. So I can play out a position on anything as being positive and negative. But I think we’ve got to be very careful that we don’t dehumanize processes by saying, “Well, you’re just biased,” and I’m going to take you out of the equation or out of the process. In terms of people getting it right, people getting it wrong, good day, bad day—our technology is deployed in terms of an ensemble approach, where you would have a key decision. I can build five digital twins to check on each other and monitor it that way. You can build a digital twin to monitor yourself. So we’ve built trading environments where the digital twin will monitor you as the trader, given that you’ve been digital-twinned, to see whether you’re acting out of sorts—for whatever reason. So bias—as I said, I hope I haven’t alienated any of your listeners—but we’ve got to be very careful that we don’t use whatever mechanism we can to get rid of anything that allows people to offer that expertise into a process or transaction. Ross: Yeah, no. Well, that makes sense. And I suppose what it points to, though, is the fact that you do need diversity—as in, you can’t just have a single expert. You shouldn’t have a single human. You bring diverse—as diverse as possible—perspectives of humans together. And that’s what boards are for, and that’s why you’re trying to build diversity into organizations, so you do have a range of perspectives. And, you know, as you say, positive or useful biases can be… the way you’re using the term bias is perhaps a bit different from how others use it, in saying it is just something which is different from the norm. And—well—I mean, which goes to the point of: what is the norm, anyway? But I think what this points to then is, if we can have a diverse range of experts—be they human or digital twins—then that’s when you design the structures where those distinctive perspectives—not using the word “bias”—can be brought together into a more effective framing and decision. Carl: Absolutely, Ross. If I can sort of jump in and give you an interesting dilemma—the dilemma of fair business is something that… fairness is going to be decided by your customer. So consider the concept of actually having a panel of experts adjudicating your business—deciding whether they think this is fair. Look at an insurance environment. Imagine your customers are adjudicating whether you should have, in fact, paid out the claim—even though you didn’t. That’s a form of bias. It’s an interpretation or an expectation of a customer to a corporate.
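As a rough illustration of the ensemble approach Carl mentions, here is a minimal sketch in which several digital twins vote on a key decision and strong disagreement escalates to a live human. The twin models, the voting rule, and the escalation threshold are all invented for illustration; this is not Merlynn's actual mechanism.

```python
# Illustrative ensemble of expert "digital twins" cross-checking a decision;
# the twins and thresholds here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DigitalTwin:
    name: str
    decide: Callable[[Dict[str, float]], int]  # 1 = approve, 0 = decline

def adjudicate(twins: List[DigitalTwin], case: Dict[str, float],
               flag_threshold: float = 0.8):
    votes = [twin.decide(case) for twin in twins]
    approval = sum(votes) / len(votes)
    decision = 1 if approval >= 0.5 else 0
    # If the twins disagree strongly, escalate to a live human expert
    # instead of acting automatically (human-in-the-loop fallback).
    needs_human = (1 - flag_threshold) < approval < flag_threshold
    return decision, needs_human

# Toy stand-ins for five modeled compliance experts with slightly
# different risk appetites.
twins = [
    DigitalTwin(f"expert_{i}",
                decide=lambda case, i=i: int(case["risk_score"] < 0.5 + 0.05 * i))
    for i in range(5)
]

decision, escalate = adjudicate(twins, {"risk_score": 0.62})
print("decision:", decision, "| escalate to human:", escalate)
```

The same shape covers the self-monitoring case Carl describes: one of the twins can be a model of the trader themselves, voting on whether their live decisions look out of character.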
So I think, again, it just reinforces the value of bias—or expertise-slash-bias—because at the end of the day, I believe organizations are going to be measured against fairness of trade. Now for AI—imagine the difficulty of finding data to define fairness. Because your fair is different from my fair. I have different fairness compared to my neighbor. How are we going to define that? So again, that means there are so many individual versions of this, which is why I use the example of: organizations should actually model their customers and place them as adjudicators into their processes or into their organizations. Ross: Yeah. Well, I think part of the point here is, in fact, since AI embodies bias—human bias—because it’s trained on human data, it basically embodies human biases or perspectives, whatever. So this is actually helping us to surface some of these issues around just saying, “Well, what is bias?” It’s hard to say there is any objective view… you know, there are obviously many subjective views on what bias is or how it could be mitigated. These are issues which are on the table for organizations. So just to round out—where do you see… I mean, the horizons are not very far out at the moment because it is moving fast—but what do you see as sort of the things on your mind for the next little while in the space you’re playing in? Carl: So I think if I look at it—two things. One thing that concerns me about the current technology drive is that we are building very good ways to consume things, but we’re not building very good ways to make things. And what I mean by that is—we’ve got to find ways for us as humans to stay relevant. If we don’t, we’re not going to earn. It’s as simple as that. And if we don’t earn, we’re not going to spend. So it’s a very simplistic view, but I think it’s critical. It’s critical for us to keep humans relevant. And I think people—humans—are relevant to a process. So we’ve just got to find a mechanism for them to keep that relevance. And if you’re relevant, you’re going to earn. I don’t see a world where you’re going to be fed pizzas under the door, and you’re going to be able to order things because everything’s taken care of for you. That just doesn’t stack up for me. So I think that’s a challenge. I think the moment that we’ve arrived at now—which is an important moment—is the moment of human-in-the-loop. How do we keep people in the loop? And human-in-the-loop is the guardrail for the agentic AI, for the LLMs, the Gen AIs of the world. That’s a very, very important position we need to reinforce. And when one reinforces human-in-the-loop, you also bring relevance back to people. And then you also allow things like empathy, fairness of trade, ethics—to start to propagate through technology. So I think the future for me—you know, I get out of bed, and sometimes I’m really excited about what the technology landscape holds. And then I’m worried. So I think it’s going to work out when people realize what we are racing towards here. So again, concepts like human-in-the-loop—the guardrails—are starting to become more practical. So today, I’m excited, Ross. And let’s see what the future holds. Ross: Yes. And I think it’s ours to shape, because if we approach this with human-first attitudes, I think we’ll get there. So where can people go to find out more about your work? Carl: So you can go to merlynn-ai.com—so it’s M-E-R-L-Y-N-N dash A-I dot com. You can also mail me at Carl@merlynn-ai.com if you want to have a discussion.
And, you know, good old Google—there’s a lot of information about us on the web. So, yeah.  Ross: Fantastic. Thank you for your time and your insights, Carl. It’s a fascinating journey you’re on. Carl: Thanks, Ross. Thanks very much. The post Carl Wocke on cloning human expertise, the ethics of digital twins, AI employment agencies, and communities of AI experts (AC Ep3) appeared first on Humans + AI.
May 7, 2025 • 33min

Nisha Talagala on the four Cs of AI literacy, vibe coding, critical thinking about AI, and teaching AI fundamentals (AC Ep2)

“The floor is rising really fast. So if you’re not ready to raise the ceiling, you’re going to have a problem.” – Nisha Talagala About Nisha Talagala Nisha Talagala is the CEO and Co-Founder of AIClub, which drives AI literacy for people of all ages. Previously, she co-founded ParallelM, where she shaped the field of MLOps, with other roles including Lead Architect at Fusion-io and CTO at Gear6. She is the co-author of Fundamentals of Artificial Intelligence – the first AI textbook for Middle School and High School students. Website: Nisha Talagala Nisha Talagala LinkedIn Profile: Nisha Talagala What you will learn Understanding the four C’s of AI literacy How AI moved from winter to wildfire Teaching kids to build their own AI from scratch Why professionals must raise their ceiling The role of curiosity in using generative tools Navigating context and motivation behind AI models Embracing creativity as a key to future readiness Episode Resources People Andrej Karpathy Organizations & Companies AIClub AIClubPro Technical Terms AI Artificial General Intelligence ChatGPT GPT-1 GPT-2 GPT Neural network Loss function Foundation models AI life cycle Crowdsourced data Training data Iteration Chatbot Dark patterns Transcript Ross Dawson: Nisha, it’s a delight to have you on the show. Nisha Talagala: Thank you. Happy to be here. Thanks for having me. Ross: So you’ve been delving deep, deep, deep into AI for a very long time now, and I would love to hear, just to start, your reflections on where AI is today, and particularly in relation to humans. Nisha: Okay, absolutely. So I think that AI has been around for a very long time. And there was a long time which was actually called AI winter, which effectively meant that very few people were working on AI—only the true believers, really. And then a few things kind of happened. One of them was that the power of computers became so much greater, which was really needed for AI. And then, with the internet and our ability to store and track all of this stuff, the data also became really plentiful. So when the compute met the data, and then people started developing software and sharing it, that created kind of like a perfect storm, if you will. That enabled people to really see that AI could do things. Previously, AI experiments were very small, and now suddenly companies like Google could run really big AI experiments. And often what happened is that they saw that it worked before they truly knew why it worked. So this entire field of AI kind of evolved, which is, “Hey, it works. We don’t actually know why. Let’s try it again and see if it works some more,” kind of thing. So that has been going on now for about a decade. And so, AI has been all around you for quite a long time. And then came ChatGPT. And not everyone knows, but ChatGPT is actually not the first version of GPT. GPT-1 and GPT-2 were pretty good. They were just very hard to use for someone who wasn’t very technical. And so, for those who are technical—one thing is, you had to—actually, it was a little bit like Jeopardy. You had to ask your question in the form of an incomplete sentence, which is kind of fun in the Jeopardy sort of way. But normally, we don’t talk to people with incomplete sentences hoping that they’ll finish that sentence and give us something we want to know. So ChatGPT just made it so much easier to use, and then suddenly, I think it just kind of burst onto the mainstream.
And that, again, fed on itself: more data, more compute, more excitement—to the point that the last few years have really seen a level of advancement that is truly unprecedented, even in the past history of AI, which was already pretty unprecedented. So where is it going? I mean, I think that the level—so it’s kind of like—so people talk a lot about AGI and generalized intelligence and surpassing humans and stuff like that. I think that’s a difficult question, and I’m not sure if we’ll ever know whether it’s been reached. Or I don’t know that we would agree on what the definition is there, to therefore agree whether it’s been reached or not reached. There are other milestones, though. For example, standardized testing has already been taken over by AI. AIs outperform on just about every level of standardized test, whether it’s a college test or a professional test, like the US medical licensing exam. It’s already outperforming most US doctors in those fields. And it’s scoring well on tests of knowledge as well. It’s also making headway in areas that have traditionally been considered challenging—areas like mathematics and reasoning have been real issues. So I think you’re dealing with a place where, what I can tell you is that the AIs that I see right now in the public sphere rival the ability of PhD students I’ve worked with. So it’s serious. And I think it’s a really interesting question of—I think the future that I see is that we have to really be prepared for tools that are as capable, if not in some areas more capable than we are. And then figure out: What is the problem that we are trying to solve in that space? And how do we work collaboratively with the tools? I think picking a fight with the tools is unwise. Ross: Yeah, yeah. And I guess my broader view is that the intent of creating AI with humans as the reference point was always misguided. I mean, to say, all right, we want to create intelligence. Well, the only intelligence we know is human, so let’s try to mimic that and to replicate what it does as much as possible. But this goes to the point, as you mentioned, of augmentation, where on one level, we can say, all right, we can compare humans versus AI on particular tests or so on. But there are, of course, a multitude of ways in which AIs can augment humans in their capabilities—cognitive and intellectual and otherwise. So where are you seeing the biggest potentials in augmenting intelligence or cognition or thinking or positive intent? Nisha: Absolutely. So I think, honestly, the examples sort of—I feel like if you look for them, they’re kind of everywhere. So, for example, just yesterday—or the day before yesterday—I wrote an article about vibe coding. Vibe coding is a term coined by Andrej Karpathy, which is essentially the way he codes now. And he’s a very famous person who, obviously, is a master coder. So he has alternatives—lots of ways that he could choose to write code. And his basic point is that now he talks to the machine, and he basically tells it what he wants. Then it presents him with something. And then he says, “I like it. Change this, change that, keep going,” right? And I definitely use that model in my own programming, and it works really well. So really, it comes down to: you have something to offer. You know what to build. You know when you don’t like something, right? You have ideas. This is the machine that helps you express them, and so on and so forth. So if you do that, that’s a very good way of doing augmentation.
So you’re creating something, and sometimes, when you see a lot of options presented to you, you’re able to create something better just because you can see it. Like, “Oh, it didn’t take me three weeks to create one. Suddenly I have fifteen, and now I know I have more cycles to think about which one I like and why.” So that’s one example—just of creation collaboratively. Examples in medicine just abound. The ability to explore molecules, explore fits, find new candidates for drugs—it’s just unbelievable. I think in the next decade, we will see advancements in medicine that we cannot even imagine right now, just because of that ability to really formulate a problem, give a machine a task, have it come back, and then you iterate on it. And so I think if we can just tap humans into that cycle and make that transition—so that we can kind of see a bigger problem—then I think there’s a lot of opportunity. Ross: So, which—that leads us to the next thing. So the core of your work is around AI literacy and learning. And so it goes to the question of: AI is extraordinarily competent in many domains. It can augment us. So what is—what are the foundational skills or knowledge that we require in this world? Do we need to understand the underlying architectures of AI? What do we need to understand—how to engage with generative AI tools? What are the layers of AI literacy that really are going to be important in coming years? Nisha: Very good question. So I can tell you that kind of early on in our work, we defined AI literacy as what we call the four C’s. We call them concepts, context, capability, and creativity. Ross: Sorry, could you repeat this? Nisha: Yes—concepts, context, capability, and creativity. Ross: Awesome. Nisha: So, concept is—you really should know something about the way these tools are created. Because as delightful as they are, they are not perfect. And a good user who’s going to use it for their own—who’s going to have a good experience with it—is going to be able to pick where and how to interact with it in ways that are positive and productive, and also be able to pick out issues, and so forth. And so what I mean by concept is: the reliance of AI on data and being able to ask critical questions. “Okay, I’m dealing with an AI. Where did it get its data? Who built it? What was their motivation?” Like these days, AIs are so complex that what I tell my students is: you don’t know what it’s trying to do. What is its goal? It’s sitting there talking to you. You didn’t pay for it—so what is it trying to accomplish? And the easiest way to find out is: figure out who paid for it and figure out what it is they want. And that is what the AI is trying to accomplish. Sometimes it’s to engage you. Sometimes it’s to get information from you. Sometimes it’s to provide you with a service so that you will pay, in which case the quality of its service to you will matter, and such like that. But it’s really important, when you’re dealing with a computer or any kind of service, that you understand the motivations for it. What is it being optimized for? What is it being measured on? And so forth. So there’s kind of concepts like that—about how these tools are created. That does not mean everyone has to understand the nuances of how a neural network gets trained, or what it means to have a loss function, or all these things. That’s suitable for some people, but not necessarily for everyone. But everyone should have some conceptual understanding. Then context. 
Ross: I was just going to say: there’s interesting work on dark patterns. A paper on dark patterns in AI came out last week, I think, and one of the patterns was sycophancy, where, essentially, as you suggest, AI can say, “You’re wonderful” in all sorts of guises, which, amongst other things, makes you like it more and use it more. Nisha: Oh yes, they definitely do. They definitely want you to keep coming back, right? You suddenly see that. And it’s funny, because I was having some sort of an interaction with—I’m not gonna name which company wrote the model—and it said something like, “Yeah, we have to deal with this.” And I’m like, there’s no we here. It’s just me. When did we become we? You’re trying just a little too hard to get on my good side here. So I just kind of noticed that. I’m like, not so good. But so concepts, to me, effectively means understanding the fundamental ways that these programs are built—how they rely on data, what it means for an AI to have a brain—and then the depth depends entirely on the domain. Context, for me, is really the fact that these things are all around us, and therefore you truly do want to know that they are behind some of the tooling that you use, and understand how your information is shared, and so forth. Because there are a lot of personal decisions to be made here, and there are no right answers. But you should feel like you have the knowledge and the agency to make your own choices about how to handle tools. So that’s what I mean by context. It’s particularly important for young people to appreciate—context. Ross: And I think for professionals as well, because their context is, you know, making decisions in complex situations. And if they don’t really appreciate the context—and the context of the AI—then that’s not a good thing. Nisha: Absolutely. And then capability—really, it varies very much by domain. But capability is really about: are you going to be able to function, right? Are you going to be able to do a project using these tools? Or do you need to build a tool? Do you need to merge the tools? Do you need to create your own tools? So in our case, for young people, for example—because they don’t have a domain yet—we actually teach them how to build AI from scratch. So one of the very common things that we do is: almost in every class, starting from third grade, they build an AI in their first class completely from scratch. And they train it with their own data, and they see for themselves how its opinions change with the information they give it. And that’s a very powerful exercise because—so what I typically ask students after that exercise is, I ask them two questions. First question is: did it ever ask you if what you were teaching it was true? And the answer is always no. You can teach it anything, and it will believe you. Because they keep teaching it information, and children being children will find all sorts of hilarious things to teach a machine, right? And then—but then—they realize, oh, truth is not actually a part of this. And then the next question, which is really important, is: so what is your responsibility in this whole thing? Your responsibility is to guide the machine to do the right thing, because you already figured out it will do anything you ask. Ross: That’s really powerful. Can you tell me a little bit more about precisely how that works, when you say you’re getting them to build their own AI? Nisha: So we have built a tool.
It’s called Navigator, and it’s effectively a web-based front end to industry-standard tools like TensorFlow and scikit-learn. And it runs on the cloud. Then we give each of our students accounts on it—depending on how we do it, these can be anonymized accounts, whatever we need to protect their privacy. At large-scale installations with schools, for example, it’s always anonymous. Then what happens is they go in, and they’re taken through the steps of building an AI. We give them a few datasets that are kid-friendly. So one other thing to remember when you’re teaching young people is a lot of the data that’s out there is not friendly to young people, so we maintain a massive repository of kid-friendly datasets. A very common case that they run uses a dataset that we crowdsourced from children, which consists of sentences about happiness and sadness. So a child’s view—like chocolate might be happy, broccoli might be sad, things like that. But nothing too sad—things children can relate to. So they start teaching it about happy and sad. And one of the first things that they notice is—those of them that have written programs before—this is kind of hard to write a program for. What word would you be looking for? There are so many words. Like, I can’t use just the word happy. I might say, “I feel great.” I didn’t use the word happy, but I’m clearly happy. So they’re like, “Oh, so there’s something here—more than just looking for words. You have to find a pattern somehow.” And if you give it enough examples, a pattern kind of emerges. So then they train the AI—it takes about five minutes. They actually load up the data, they train an AI, they deploy it in the cloud, and it presents itself as a little chatbot, if you will, where they can type in some sentences and ask it whether it thinks they’re happy or sad. And when it’s wrong, they’re like, “Oh, it’s wrong now.” Then there’s a button they can press that says, “I don’t think you’re right.” And then it basically says, “Oh, interesting. I will learn some more.” They can even teach it new emotions. So they teach it things like, “I’m hungry,” “I’m sleepy,” “I’m angry,” whatever it is. And it will basically pick up new categories and learn new stuff. So within about 15 minutes of interacting with it after that first five minutes, every child has their own entire, unique AI that reflects whatever emotions they chose to teach and whatever perspective. So if you want to teach the AI that your little brother is the source of all evil, then it will do that. And stuff like that. And then after a while, they’re like, “Oh, I know how this was created. I can see its brain change.” And now you can ask it questions about what it even means when we have these programs. Ross: That is so good. Nisha: So that’s what I mean. And it has a wonderful effect in that it makes it tangible. It takes away a lot of the fear that this is some strange thing. “I don’t know how it was made.” “I made it. I converted it into what it is. Now I understand my agency and my responsibility in this situation.” So that’s capability—and there’s also an element of creativity—because in every single one of our projects, even at third grade, we encourage a creative use of their own choosing. So when the children are very young, they might teach an AI to learn all about an animal that they care about, like a rabbit. In middle school, they might be looking more at weather and pricing and stuff like that.
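As a rough sketch of the kind of exercise Nisha describes, here is a tiny happy/sad text classifier built with scikit-learn, one of the tools she says Navigator fronts. The sentences, labels, and the retrain-on-correction step are invented for illustration; this is not AIClub's actual curriculum code.

```python
# A tiny "teach the AI happy vs. sad" sketch using scikit-learn;
# the example sentences and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "I love chocolate", "we are going to the park", "I feel great",
    "I have to eat broccoli", "I lost my toy", "my ice cream fell",
]
labels = ["happy", "happy", "happy", "sad", "sad", "sad"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, labels)
print(model.predict(["I feel wonderful today"])[0])  # likely "happy"

# The "I don't think you're right" button amounts to appending the
# corrected example and retraining, so the model's "brain" visibly
# changes with whatever the child teaches it, true or not.
sentences.append("broccoli is the best")
labels.append("happy")  # the child decides; the model just learns it
model.fit(sentences, labels)
print(model.predict(["broccoli is the best"])[0])  # now predicts "happy"
```

New emotions work the same way: appending examples with a third label such as "angry" and refitting gives the model a new category to learn.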
In high school, they’re doing essentially state-of-the-art research. At this point, we have a massive number of high school students who are professionally published. They go to conferences and they speak next to PhDs and professors and others, and their work is every bit as good, and it was peer-reviewed and got in entirely on merit. And that, I think, tells me what is possible, right? Because part of it is that when the tools get more powerful, then the human brain can do more things. And the beautiful thing about teaching K–12 is they are almost fearless. They have a tremendous amount of imagination. They start getting a little scared around ninth grade—that’s when it kicks in: “Oh, maybe I can’t do this. Maybe this isn’t cool. I’m going to be embarrassed in front of my friends.” But before that, they’re almost entirely fearless. They have fierce imagination, and they don’t really think anything cannot be done. So you get a tool in front of them, and they do all sorts of nifty things. So then I assume these kids, I’m hoping, will grow up to be adults who really can be looking at larger problems, because they know the tools can handle the simpler things. Ross: That is wonderful. So this is a good time just to pull back to the big picture of your initiatives and what you’re doing—how are all of these programs being put into the world? Nisha: Yeah, absolutely. So we do it in a number of different ways. Of course, we offer a lot of programs on our own. We engage directly with families and students. We also provide curriculums and content for schools and organizations, including nonprofits. We provide teacher training for people who want to launch their own programs. We have a professional training program, where essentially we work with both companies and individuals. With companies, it’s basically that they run a series of programs of their choosing through us. We work both individually with the people in the company—sometimes in a more consultative manner—as well as providing training for various employees, whether they’re product managers, engineers, executives. We kind of do different things. And then individuals—there are many individuals who are trying to chart a path from where they are to where—first of all, where should they be, and then, how can they get there? So we have those as well. So we actually do it kind of in all forms, but we also have a massive content base that we provide to people who want to teach as well. Ross: And so what’s your geographical scope, primarily? Nisha: So we’re actually worldwide. The company—we started out in California. We went remote due to COVID, and we also then started up an office in Asia around that time. So now we’re entirely remote—everywhere in the world. We have employees primarily in the US and India and in Sri Lanka, and we have a couple of scattered employees in Europe and elsewhere. And then most of our clients come from either the US or Asia. And then it’s a very small amount in Europe. So that’s kind of where our sweet spots are. Ross: Well, I do hope your geographical scope continues to increase. These are wonderful initiatives. Nisha: Thank you.  Ross: So just taking that a step further—I mean, this is obviously just this wonderful platform for understanding AI and its role in developing capabilities.
But now looking forward to the next five or ten years—for people who have not yet exposed themselves to that, what are the fundamental capability sets in relation to work? So, I mean, part of this is, of course, people may be applying their capabilities directly in the AI space or technology. But across the broader domain of life, work—across everything—what are the fundamental capabilities we need, building on this understanding of the layers of AI, as you’ve laid out? Nisha: Yeah, so I think that, you know, if we follow this four C’s model, a general, high-level understanding of how AI works is helpful for everyone. And I mean things like, for example, the relationship between AI and data, right? How do AI models get created? One of the things I’ve learned in my career is that there’s such a thing as an AI life cycle—you know, how does an AI get built? And even though there are literally thousands of different kinds of AI, the life cycle isn’t that different. There’s this relationship between data, the models, the testing, the iteration. It’s really helpful to know that, because that way you understand—when new versions come out—what happened, what you can expect, and how information and learning filter through. You know, context is very critical—just being aware. And these days, context is honestly not that complicated. Just assume everything that you interact with has an AI in it. Doesn’t matter how small it is, because it’s mostly, unfortunately, true. The capability one is interesting. What I would suggest for the most broad-based audience is—really, it is a good idea to start learning how to use these foundation models. So I’m talking about the—you know—these models that are technically supposed to be good at everything. And the one thing I’ve kind of noticed, dealing with particularly professionals, is—sometimes they don’t realize the tool can do something because it never occurred to them to ask, right? It’s one of those, like—if somebody showed you how to use the tool to, you know, improve your emails, right? You know the tool can do that. But then you come along and you’re looking for, I don’t know, a recipe to make cookies. Never occurs to you that maybe the tool has an opinion on recipes for cookies. Or it might be something more interesting like, “Well, I just burned a cookie. Now, what can I do? What are my options? I’ve got burnt cookies. Should I throw out the burnt cookies? Should I, you know, make a pie out of them?” Whatever it is, you know. But you can always drop the thing in and say, “Hey, I burned some cookies.” And then it will probably come back and say, “Okay, what kind of cookies did you burn? How bad did you burn them?” You know, and this and that. “And here are 10 things you can do with them.” So I think the simplest thing is: just ask. The worst thing it’ll do is, you know, come back with a bad answer. And you will know it’s a bad answer because it will be dumb. So some of it is just kind of getting used to this idea that it really might actually take a shot at doing anything. And it may have kind of a B grade in almost anything—any task you give it. So that’s a real mental shift that I think people need to get used to making. And then after that, I think whatever they need to know will sort of naturally evolve itself.
Then from a professional standpoint, I think—I kind of call it surfing the wave. So sometimes people would come to me and say, “Hey, you know, I’m so behind. I don’t even know where to begin.” And what I tell them is: the good news is, whatever it is that you forgot to look up is already obsolete. Don’t worry about it. It’s totally gone. You know, it doesn’t matter. Whatever’s there today is the only thing that matters. Whatever you missed in the last year—nobody remembers it anymore anyway. So just go out there. Like, one simple thing that I do is—if you use, like, social media and such—you can tailor your social media feed to give you AI inputs, like news alerts, right, or stuff that’s relevant to you. And it’s a good idea to have a feel for: what are the tools that are appropriate in your domain? What are other people thinking about the tools? Then just, you know, pick and choose your poison. If you’re a professional working for a company—definitely understand the privacy concerns, the legal implications. Do not bring a tool into your domain without checking what your company’s opinions are. If the company has no opinions—be extra careful, because they don’t know what they don’t know. So just—there’s a concern about that. But, you know, just be normal. Like, just think of the tool like a stranger. If you’re going to bring them into the house, then, you know, use your common sense. Ross: Well, which goes to the point of attitude. And part of it is: how do we inculcate that attitude of curiosity and exploration and trying things, as opposed to having to take a class or go into a classroom before you know what to do? You have to find your own path—learn by doing. But that takes us to that fourth step of creativity, where—I mean, obviously—you need to be creative in how you try to use the tools and see what you learn from that. But also, it goes back to this idea of augmenting creativity. And so, we need to be creative in how we use the tools, but also there are ways where we can hopefully create this feedback loop, where the AI can help us augment or expand our creativity without us outsourcing it to the AI. Nisha: Absolutely. And I think part of this is also recognizing—here’s the problem, particularly if you’re a professional. This is less an issue for students because their world is not defined yet. But if you’re a professional, there is a ceiling of some kind in your mind, like “this is what I’m supposed to do,” right? And the floor is wherever you’re standing right now. And your value is in the middle. The floor is rising really fast. So if you’re not ready to raise the ceiling, you’re going to have a problem. So it’s kind of one of those things that is not just about the AI. You have to really have a mental shift—that I have to be looking for bigger things to do. Because if you’re not looking for bigger things to do, unfortunately, AI will catch up to whatever you’re doing. It’s only a matter of time. So if you don’t look for bigger things—that’s why fields like medicine are flourishing: because there are so many bigger problems out there. And so, some of it is also looking at your job and saying, “Okay, is this an organization where I can grow? If I learn how to use the AI, and I’m suddenly 10x more efficient at my job, and I have nothing left to do—will they give me more stuff to do?” If they don’t, then I think you might have a problem. And so forth. So it’s one of those—you have to find—there’s always a gap.
Because, look, we’re a tiny little planet in the middle of a massive universe that we don’t know the first thing about. And as far as we know, we haven’t seen anyone else. There are bigger problems. There are way, way bigger problems. It’s a question of whether we’ve mapped them. Ross: Yeah, we always need perspective. So looking forward—I mean, you’re already, of course, having a massive positive impact through what you are doing—but if you’re thinking about, let’s say, the next five years, since that’s already pretty much beyond what we can predict, what are the things that we need to be doing to shape a better future for humans in a world where AI exists, has extraordinary capabilities, and is progressing fast? Nisha: I think really, this is why I focus so much on AI literacy. I think AI literacy is critical for every single human on the planet, regardless of their age or their focus area in life. Because it’s the beginning. It’s going away from the fear and really being able to just understand just enough. And also understanding that this is not a case where everyone in the world is supposed to become a PhD in mathematics. That’s not what I mean at all. I mean being able to realize that the tool is here to stay. It’s going to get better really fast. And you need to find a way to adapt your life into it, or adapt it into you, or whichever way you want to do it. And so if you don’t do that, then it really is not a good situation. So I think that’s where I put a lot of my focus—on creating AI literacy programs across as many different dimensions as I can, and providing— Ross: With an emphasis on schools? Nisha: So we have a lot of emphasis on schools and professionals. And recently, we are now expanding also to essentially college students, who are right in the middle tier. Because college students have a very interesting situation—the job market is changing very, very rapidly because of AI. So they will probably be the first ones who see the bleeding edge. Because in some ways, professionals already have jobs—yes—whereas students, prior to graduating from college, have time to digest. It’s this year’s and next year’s college graduates who will really feel the onslaught of the change, because they will be going out into the job market for the first time with a set of skills that were planned for them before this happened. So we do focus very much on helping that group figure out how to become useful to the corporate world. Ross: So how can people find out more about your work and these programs and initiatives? Nisha: Yeah, so we have two websites. Our website for K–12 education is aiclub.world. Our website for professionals and college students—and very much all adults—is aiclubpro.world. So you can look there and you can see the different kinds of things we offer. Ross: Sorry, could you repeat the second URL? Nisha: It’s aiclubpro.world. Ross: aiclubpro.world. Got it. That’s fantastic. So thank you so much for your time today, but also for the wonderful initiative. This is so important, and you’re doing a marvelous job at it. So thank you.  Nisha: Really appreciate it. Thank you for having me. The post Nisha Talagala on the four Cs of AI literacy, vibe coding, critical thinking about AI, and teaching AI fundamentals (AC Ep2) appeared first on Humans + AI.
Apr 30, 2025 • 13min

HAI Launch episode

“This is about how we need to grow and develop our individual cognition as a complement to AI.” – Ross Dawson About Ross Dawson Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload. Website: Ross Dawson Advanced Human Technologies LinkedIn Profile: Ross Dawson Books Thriving on Overload Living Networks 20th Anniversary Edition Living Networks Implementing Enterprise 2.0 Developing Knowledge-Based Client Relationships: Leadership in Professional Services Developing Knowledge-Based Client Relationships, The Future of Professional Services Developing Knowledge-Based Client Relationships What you will learn Tracing the evolution of the podcast name and vision How ChatGPT shifted the AI conversation overnight Why humans plus AI is more than just a rebrand The mission to amplify human cognition through AI Exploring collective intelligence and team dynamics Rethinking work, strategy, and value creation with AI Envisioning a co-evolved future for humans and machines Episode Resources Books  Thriving on Overload Technologies & Technical Terms AI agents Artificial intelligence Intelligence amplification Cognitive evolution Collective intelligence Strategic thinking Strategic decision-making Value creation Organizational structures Transhumanism AI governance Existential risk Critical thinking Attention Awareness Skill development Transcript Ross Dawson: This is the launch episode of the Humans Plus AI podcast, formerly the Amplifying Cognition podcast, and before that, the Thriving on Overload podcast. So in this brief episode, I will cover a bit of the backstory and a bit of how we got to where we are today, calling this Humans Plus AI now—why I think it is so important, what it is we are going to cover, and framing a little bit this idea of Humans Plus AI. So the backstory is that the podcast started off as Thriving on Overload. It was the interviews I did for my book Thriving on Overload. The book came out in September 2022. After that, I was still continuing with the Thriving on Overload podcast, continuing to explore this idea of how we can amplify our thinking in a world of unlimited information. Essentially, our brains are finite, but in a world of infinite information, we need to learn the skills and the capabilities to be as effective as possible. And overload—we’ll come back to that—is a fundamental issue today, which is the reason I wrote the book. Just three months after the book came out was what I call the ChatGPT moment, when there was crystallizing progress in AI where I think just about every single researcher and person who’d been in the AI space was surprised or even amazed by the leap in capabilities that we achieved with that model—and of course, so much more since then. So I quickly wanted to consolidate my thinking, and immediately came upon this phrase Humans Plus AI, which reflects a lot of my work over the years. I have been literally writing about AI, the role of AI agents, and particularly AI and work—for, well, in some ways, a couple of decades. But this was a moment where I felt I had to bring all of my work together. So fairly soon, I decided I needed to rebrand the podcast to be not just Thriving on Overload.
But I was still tied to that theme. So I decided, let’s make this Amplifying Cognition, trying to find the middle ground that integrates the ideas of Humans Plus AI—how could humans and AI together be as wonderful as possible?—with the idea of Thriving on Overload—this individual cognition, how do we amplify our possibilities? There was a long list of different names I was playing with, and one of the other front runners was, in fact, Amplifying Humanity. And in a way, that’s really what my mission is all about, and what this podcast, under its various names, is about: how do we amplify who we are, our capabilities, our potential? Of course, the name Amplifying Humanity sounds a bit diffuse. It’s not very clear. So it wasn’t the right name. Or rather, there was certainly no perfect title at the time. But now, when I take this and say, well, we’re going to call this Humans Plus AI, I think the Thriving on Overload piece of that is still as relevant—or even more relevant. That is part of the picture as we bring humans and AI together. This is about how we need to grow and develop our individual cognition as a complement to AI. So in fact, when I say Humans Plus AI, Thriving on Overload and Amplifying Cognition are really baked into that idea. So the broad frame of Humans Plus AI is simply: we have humans. We are inventors. We have created extraordinary technologies for many years, and the culmination of that at this point is something that is analogous to our own intelligence and cognitive capabilities. This could be seen as challenging, and there are, of course, many things we have to navigate here. But it is also very much about: what could we do together? The originator, the creator—which is us—and that which we have created. We need to find how these together can be integrated, can be complementary, can create more possibilities than ever before. There are many earlier thinkers—prominently Doug Engelbart—who talked about intelligence amplification. And again, that’s really what AI should be about: amplifying our capabilities and possibilities. There are, of course, many, many risks and challenges with AI, including in governance—conceivably existential risk—and in terms of all sorts of ethical issues that we need to address. And I think it’s wonderful there are many people focusing on that. My particular mission is to be as positive as possible: not to focus on the negatives—whilst acknowledging and understanding those—but to look at what could be possible, who we could become in terms of our capabilities as well as our humanity, and to provide some kind of beacon or light to look to in this positive vision for what is possible from humans and AI together. So this starts with the individual, where we can use AI to develop our skills and our capabilities. We need skills to be able to use it well. We want to cover the attitudes, what education is required, and what tools we can use, but also look at other ways to augment ourselves which aren’t necessarily tied to technology—still coming back to issues such as awareness, attention, and critical thinking. These are all the things that will keep us the best possible complements to the technologies. In organizations, there are many possibilities for organizations to reshape, to reform, and to bring together humans and AI.
Looking at how teams form, looking at ideas of collective intelligence—which, of course, the podcast has looked at for a long time. Looking at the impact of AI, particularly in professional services, on business models, value creation, and new organizational structures. And while many people talk about the one-person billion-dollar company, that’s interesting—but what’s more interesting is how you get a group of people, small or large, complemented by AI, to create more value than ever before. This will also look at strategic thinking. I’ve been focusing very much on AI and strategic decision-making—AI for strategy—and also on AI and investment processes. How do we use AI to allocate capital better than ever before, making sure that we are making the right decisions? So one of the core themes of the podcast will be AI for strategy, strategic thinking, and investment—the bigger-picture thinking—and being quite specific around that: the approaches, the tactics, the strategies, the techniques whereby everyone from individual entrepreneurs to boards to organizations can be more effective. We will certainly be delving into work and how work evolves with both humans and AI involved—what the structures are for how that can happen effectively, what capabilities are required, how we will see that evolution, and what some of the structures are for sharing value amongst people. And looking at the bigger, broader level of society—this cognitive evolution. How will our cognition evolve? What is the co-evolution of humans and AI? How can we build effective collective intelligence at a species level? How can we indeed build collective wisdom? How can AI support us in being wiser and in shaping better pathways for ourselves, for communities, for nations, for societies, for humanity? And also looking at the future—what is the future of intelligence? What is the future of humanity? What is the future of what comes beyond this? And just the reality that, of course, we are moving closer to a transhuman world, where we are going beyond what we have been as humans to who we will be, not least through being complemented by AI. So those are some of the many themes we’ll be exploring. All of them are fascinating and deeply important, and all of them are frontiers—there are no guidelines, no established practices or books we can look to. This is being created as we go. So this is a forum where we will try as much as possible to uncover and share the best of the thinking and ideas happening in the world in creating the best positive potential from humans and AI together. So if you want to keep on listening to some of these wonderful conversations I’m having, then please make sure to subscribe to the podcast. I’d love to hear any feedback you have. One way is LinkedIn, where I spend most of my online time—my own personal profile, or our LinkedIn page, which we’re just renaming from Amplifying Cognition to Humans Plus AI. If you really want to engage, then please join the community. There will always be free sections of the community—in fact, all of it is still free for now—and you’ll find like-minded people. If you have any interest at all in these topics, you’ll find lots of other people who are delving deep, with lots to share. So thank you for listening. Thank you for being part of this journey.
I think this is a very, very exciting time to be alive, and if we focus on the positive potential, we have a chance of creating it. Catch you somewhere along the way. The post HAI Launch episode appeared first on Humans + AI.
Apr 23, 2025 • 34min

Kunal Gupta on the impact of AI on everything and its potential for overcoming barriers, health, learning, and far more (AC Ep86)

“Maybe the goal isn’t to eliminate the task or the human—but to reduce the frustration, the cognitive load, the overhead. That’s where AI shines.” – Kunal Gupta About Kunal Gupta Kunal Gupta is an entrepreneur, investor, and author. He founded and scaled global digital advertising AI company Nova as Chief Everything Officer for 15 years, with teams and clients across 30+ countries. He is the author of four books, most recently 2034: How AI Changed Humanity Forever. Website: Kunal Gupta Kunal Gupta LinkedIn Profile: Kunal Gupta Book: 2034: How AI Changed Humanity Forever What you will learn Hosting secret AI dinners to spark human insight Using personal data to take control of health Why cognitive load is the real bottleneck When AI becomes a verb, not just a tool Reducing frustration through everyday AI The widening gap between AI capabilities and adoption Empowering curiosity in an AI-shaped world Episode Resources Books 2034: How AI Changed Humanity Forever Technical Terms & Concepts AI AI literacy Agentic AI Cognitive load LLMs (Large Language Models) Reference ranges Automation Browser agents Voice agents Data normalization Longevity-based testing Health data Cloud computing Social media adoption Generative AI Transcript Ross Dawson: Kunal, it is awesome to have you on the show. Kunal Gupta: Thanks, Ross. Nice to see you. Ross: So you came out with a book called 2034: How AI Changed Humanity Forever, and I’d love to hear the backstory. How did this book come about? Kunal: Yeah, I’ve written a few books, but this was definitely the most fun to write and to read and reread, and at some points, to rewrite. So back in November 2022, ChatGPT launches. There’s this view—okay, this is going to change our world, not sure how. So in the ensuing months, I had a number of conversations with friends and colleagues asking, “Hey, like, how does this change everything?” I asked people very open-ended questions, and the responses were all over the place. What I realized was we actually just don’t know, and that’s the best place to be—when we don’t know but are curious. So I started to host dinners, six to ten people at a time in my apartment. I was in Portugal at the time, and London as well. Over the course of 2023, I hosted over 250 people across a couple dozen dinners. The setup was really unique in that nobody knew who else was coming. Nobody was allowed to talk about work, nobody was allowed to share what they did, and no phones were allowed either. So that meant everybody was really present. They didn’t need to be anybody, they didn’t need to be anywhere, and they could really open up. All of the conversations were recorded. All the questions were very open-ended, along the lines of—really the subtitle of the book—how does AI change humanity? And we got into all sorts of different places. So over the course of the dinners that year, we recorded everything, had it transcribed, and, working with an editor, manually went through the transcripts and identified about 100 individual ideas that came out of those humans—usually some idea or inspiration, or some fear or insecurity. And we turned that into a book with 100 different ideas, ten years into the future, of how AI might shape how we live, how we work, how we date, how we eat, how we walk, how we learn, how we earn—absolutely everything about humanity.
Ross: So, I mean, there’s obviously far more in the book than we can cover in a short podcast, but what are some of the high-level perspectives? It’s been a bit of time since it came out; people have had a chance to read it and give feedback, and you’ve reflected further on it. So what is some of your emergent thinking since the book has come out? Kunal: Yeah, I probably hear from a reader or two daily now, sharing lots of feedback. But the most common feedback I hear is that the book has helped change the way they think about AI, and that it’s helped them just think more openly about it and more openly about the possibilities. And that’s where introducing over 100 ideas across different aspects of society and humanity and industries and age groups and demographics is really meant to help open up the mind. I think in the face of AI, a lot of parts of society were closed or resistant to its potential impacts, or even fearful. And the book is really designed to open up the mind, drop some of the fear, and really be curious about what might happen. Ross: So taking this—taking sort of my perennial “humans plus AI” frame—what are some of the things that come to mind for you in terms of the potential of humans plus AI? What springs to mind first? Kunal: For those that say yes and are open and curious about it—I really think it’s an accelerant in so many different parts of life. I’ll give an example of AI being used in government. I gave the fictitious example of Tokyo electing the first AI mayor, and how that went and what the implications of that were. I gave examples in Europe of AI being used to reduce bureaucracy and streamline all the processes. Government is an example of something that touches all of our lives in a very impactful way, and AI being used to help make better decisions—more objective decisions, decisions that aren’t tied to ego or a four-year cycle—I think could lead to better outcomes for the aggregate of any given society or country or city. That’s one example. Education is another clear example, in terms of how young people learn, but then also how older people learn. There are a couple of ideas around AI—this idea of AI literacy for not just young people, but also older people—and some interesting ways that comes to life. So those are a few examples covering a spectrum of how AI and humans can come together. Ross: So coming back to the present, here and now. In what ways are you using AI to amplify what you’re doing? Or where is your curiosity taking you? Kunal: Absolutely everything. My fiancée gets annoyed that some days I’m talking to ChatGPT more than I am to her. And we live together. We call ChatGPT my friend, because it gets embarrassing to say ChatGPT so much within a single day. So, “as I was talking to my friend,” “I was asking my friend,” etc. There are a few areas of my life that I’m very focused on these days. I’d say health is a big one: optimizing my health, understanding my health, testing. So making sense of my health data beyond the basic blood tests. I’ve done lots of longevity-based testing and take lots of supplements. Going deeper and geeking out on that has been a lot of fun. Ross: So just digging into that. Do you collect data which you then analyze, or is this text-based, or is this using data to feed into the systems? Kunal: My interest in health started probably four years ago. I had some minor health issues that triggered me to start doing a bunch of testing.
And then, being a tech guy, I got fascinated by the data that I was starting to collect on my body. So that’s how it happened—four years of very consistent blood work, gut health, and sleep data, with all the fitness and sleep trackers, a smart scale, and lots, lots more. So I’d say one part is that I now have years’ worth of data. The second part I’ve found interesting, because I have so much data, is using my own data as the baseline rather than some population average drawn from a different gene pool and a different geographic location. So seeing the changes in my data over time, and then using reference ranges as one comparison point, has been helpful. And then, I see lots of specialists for different health issues that I’ve dealt with over the years. And I have found AI, prompted the right way with the right data, to be as effective, if not more effective, than the human specialists. So I do walk into my specialist appointments now with a bunch of printouts, and I essentially fact-check what they tell me, oftentimes in real time, with ChatGPT and other AI tools. And that gives me a lot more confidence in the things I’m putting into my body and doing to my body. Ross: How do the doctors respond to that? Kunal: I’m definitely unique in that sense—at least the specialists I see aren’t used to it. I would say probably three out of five doctors lean in, ask me how I collected it, and want copies of the printouts. And two out of five are a little dismissive. And that’s not surprising, I guess. Ross: There’s recent data comparing patient-perceived outcomes, where patients rate the quality of the advice from the AI a little bit better than from doctors, and the empathy way, way better than doctors. Kunal: Yeah, yeah—that matches my experience as well. Ross: So are you uploading spreadsheets or other raw data to the LLMs? Kunal: Spreadsheets and PDF reports. And that’s the annoying part, actually. I’ve done a couple dozen different tests on different parts of my body and get reports in all these different formats. It’s all in PDFs from all these providers, and they give their own explanations using their own reference data. So it’s hard to make sense of it. And I live between Australia and Portugal, so even a blood test in Europe versus a blood test in Australia—different metrics, different measurement systems, different reference ranges. So AI has helped me normalize the different formats of data. Ross: Yeah, but of course, you have to have the awareness to put it all in, ask it to normalize, and then get your baseline out of that. Kunal: I’d say the theme here—for the listeners or viewers—is feeling empowered. Health is a very sensitive topic, one where oftentimes, when we have issues, we feel helpless. And this support has helped me feel more empowered and, frankly, more motivated to improve my health. Ross: Yeah, well, just as a tiny, tiny example—my father went in for some tests a little while ago, and we got back the report. It was going to be interpreted by the specialist when he visited them a week or two later. So I was actually able to get some sense of what this cryptic report meant before waiting for the specialist to interpret it for us. Kunal: Yeah, there’s so much anxiety that can exist in waiting, and in the unknown. So whether the known is good or bad, just the known is helpful versus the unknown.
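As an aside for technically minded listeners, the normalization Kunal describes can be sketched in a few lines. This is a minimal illustration under assumptions: the conversion factors, field names, and values are hypothetical (real lab reports vary by provider), and it is not a description of his actual workflow.

```python
from dataclasses import dataclass

# Hypothetical conversion factors to a common unit (mmol/L) for one analyte,
# glucose. Factors are illustrative, not taken from any specific provider.
TO_MMOL_L = {
    "mg/dL": 1 / 18.016,  # glucose: mg/dL -> mmol/L
    "mmol/L": 1.0,
}

@dataclass
class LabResult:
    analyte: str
    value: float
    unit: str

def normalize(result: LabResult) -> float:
    """Convert a result to a common unit so reports from different labs line up."""
    return result.value * TO_MMOL_L[result.unit]

def vs_personal_baseline(history: list[float], latest: float) -> float:
    """Compare the latest value to your own mean rather than a population range."""
    mean = sum(history) / len(history)
    return (latest - mean) / mean  # fractional change vs. personal baseline

# Usage: one report in mmol/L, another in mg/dL, compared on a common scale.
history = [normalize(LabResult("glucose", 5.1, "mmol/L")),
           normalize(LabResult("glucose", 94.0, "mg/dL"))]
latest = normalize(LabResult("glucose", 101.0, "mg/dL"))
print(f"Change vs. personal baseline: {vs_personal_baseline(history, latest):+.1%}")
```

The point is simply that once values share a unit, your own history can serve as the reference range, which is exactly the shift Kunal describes.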
Ross: So in terms of cognition, or thinking, or creating, or ideation—a lot of the essence of what you do as an entrepreneur and thinker and author—let’s get tactical here. What are some of the lessons learned, the tools you use, how you use them, or approaches which you’ve found particularly useful? Kunal: I’ll give a very simple example that hopefully is relatable for many people. But it triggered a much deeper reflection for me—realizing I need to think differently. And as an adult, it’s harder to change the way we think. So for my partner’s father, who turned 70 earlier this year, we threw a big party on a boat on Sydney Harbour. And three days before the party, I went to my partner and said, “We should have a photo booth on the boat.” And she dismissed it, saying, “It’s three days away. We don’t have time. There’s already too much work to do for the party.” She was feeling stressed. And the creative and entrepreneur in me—I heard it, but I didn’t listen to it. So then I went to GPT and I said, “Is it actually allowed to have a photo booth on a boat?” And it’s like, “Yes.” “Okay, can I get a photo booth vendor in three days, in Sydney?” And the answer was yes. I’m like, “Okay, who are 10 photo booth vendors in Sydney?” And it gave me 10 vendors. And then I was about to click into the first website, and I just had this reaction: “This is too much work.” So then I said, “How can I contact all of these vendors?” And it gave me their phone numbers and email addresses. Then I was about to click the email address—and again, I was like, “Still too much work.” I was feeling quite impatient. So then I paused for a minute, and then I said, “Give me the email addresses, separated by commas.” And then I opened up Gmail, put the email addresses in BCC, and wrote a three-line email saying, “This is the date, this is the location, need a photo booth. Give me a proposal.” Within three hours, I had four proposals back, showed them to my partner, she picked one that she liked, and it was done. The old way of doing that would have taken so many phone calls and missed calls and conversations—just noise and headache. And this new way took probably less than seven minutes of my time, and we got to a solution. So that’s an example. To abstract it out—there are so many perceived barriers in the old way of doing things. And I think in simple daily life tasks, I’m still learning and challenging myself to think differently about how to approach things. Ross: So what you describe is obviously what many people present as the image of agentic AI. You should have an agent where you can just give them the brief, and it will go and do everything you described. But at the same time, speaking in early April 2025, agents are still not quite there—as in, we don’t have any agent right now which could do precisely what you’ve said. So where do you see that pathway in terms of agents being able to do these kinds of tasks? And how do we use them? Where does that lead us? Kunal: This is such an interesting moment because we don’t know—that’s the fun part. We may end up with browser agents—agents that open up a browser, click in the browser, and use it on the user’s behalf. And that might be with like 70% accuracy, and then 80%, and then 90%, and then it gets to “good enough” to schedule and manage things.
We might end up with agents that make phone calls—and there are lots of demos flying around the internet—that make bookings and coordinate details and appointments on our behalf. Or it may be just a little simpler than that, which may be more realistic—kind of like the photo booth example I gave—which is an agent that just helps us think through how to get the task done. And maybe it’s not eliminating the task, but reducing the task. And I think we have a role to play there as the human user, and the AI has a role to play—understanding how to get the best of both versus the worst of both. The worst of both is impatience from the human and incompetence from the AI—and then throwing the whole thing out. I do think there’s a world where it’s the best of both. And that probably means reframing the goal, which is not to eliminate the human, not to eliminate the task for the human, but to reduce the frustration, reduce the cognitive load, reduce the overhead—the time it takes to get something done. And software development—we can get into it, if you’d like—is, I think, an example where that’s starting to show itself. It’s not eliminating the human, but it’s reducing the cognitive load and the time and the headache involved. Ross: So this goes—it’s a very, very big and broad question—but to this idea of reducing cognitive load, freeing up time. The various ways we can put that are that it allows us to move to higher-order, more complex tasks and thinking and creativity, or it gives us time to do other things. And there may be other frames around what that does, but if we are freeing up cognitive load, what do you see as the opportunities from that space? Kunal: Yeah, I see cognitive load as the critical path right now. I mean, there are so many ideas to explore and technologies to try, but there’s a cognitive load to learning them. And I think it will be a long while before we run out of interesting, creative, or productive uses for our freed-up cognitive capacity. In fact, there won’t be an excess, because even as AI frees us up, there’s going to be more. There’s still such a big backlog of things we’re interested in, curious about, that we want to apply our cognition to—whether it’s productive in an economic sense, or in a health sense, or a friendship sense, or a learning sense. So maybe that’s the way to frame it: it’ll become multidimensional. It won’t be purely an economic motivation of work. There may be other motivations that we have but that are often suppressed or not expressed, because the economic one takes the place of the others. Ross: Yeah, no. I mean, that goes, I think, to one of the greatest fallacies in this—people predicting future techno-unemployment—which is the idea that there’s a fixed amount of work, and if machines take away work, then there’s not going to be much left for humans to do. Well, there’s always more to do, and more to create and spend our time on. So there’s no fixed amount of work or ideation or thinking or whatever. But I like this idea that humans are curious. We are inventors, we are thinkers. And I think this curiosity is key—if AI can help us or guide us or support us in being more curious, because we are able to, amongst other things, learn things quickly which would previously have required taking a degree, or whatever it may be—then that is a massive bonus for humanity. Kunal: Yeah, yeah, completely. I am curious—your take.
Something I am worried about is whether that curiosity becomes passive rather than active. Passive meaning Netflix and Instagram and TikTok, with consumption on these more passive platforms growing. And we saw that in the pandemic. We had a bunch of people who were not working, maybe getting some small paychecks from the government, and the response on aggregate was to consume rather than create. So I do worry—what if the curiosity just turns into more scrolling and browsing, versus something more active? Ross: This goes to the last chapter of Thriving on Overload, where I essentially talk about cognitive evolution—or devolution, in the sense that the default path for our brain is to just continue to get easy stimulus. And so, essentially, there are plenty of people who start spending all their day scrolling on TikTok, or whatever equivalent they have. Whereas, obviously, there are some who say, “Well, all of this information abundance means that I can do whatever I want, and I will go and explore and learn and be more than I ever could be before.” And so you get this divergence. I think there’s a very, very similar path here with AI. A lot of recent research is pointing to reduced cognitive functioning because we are offloading. And I often say the greatest risk with AI is overreliance—where we just say, “Oh, that’s good enough. I don’t need to do anything anymore.” And I think that’s a very real thing. And of course, many other people are using these as tools to augment themselves, achieve far more, be more productive, learn faster. But I think one of the differences between the simple information space in which we’ve been living and the AI space we’re now living in is that AI is interactive. We can ask questions back. With TikTok or a TV screen and so on—well, you can create your own TikToks, and sure, that’s great if you do that. But the AI is inherently interactive. That doesn’t mean we use it in a useful way. The recent Anthropic Economic Index picked out “directive” use as one form of what it called “automation,” where the user says, “Do this,” and the AI just does it—as opposed to a whole array of other uses, which are more around learning, or iterating, and having conversations, and so on, which are more the augmenting style. And there is still this balance, where quite a few people are just getting AI to do things. But now we have far more opportunity than with the old tools to be participatory. Kunal: Yeah. Yesterday, I was using an AI web app, and I got stuck, and I had my first AI voice agent customer support call. I just hit “Call,” and was immediately connected—no wait time. And then I described my problem, and it guided me through a few steps. I wasn’t able to resolve it—which I assumed was going to be the case—but at the end, it gave me the email address for the startup behind the product, where I couldn’t find the email address anywhere on the website. They probably do that on purpose. But it was probably a two-minute interaction, and it was a very pleasant, friendly, instant conversation. And I didn’t mind it. After that, I realized—okay, this is the future. My customer service and support requests are going to be with AI voice agents, and they’ll be instant, and the barriers will come down. Some people will be less shy to ask for help. Where today, the idea of calling for customer support feels so daunting, this actually felt quite effortless.
And it’ll only become more interactive. Ross: Yeah. Well, it is effort to type, and whatever format people prefer—whether it’s typing or speaking or having a video persona to interact with—these are all ways we can get through problems or get to resolution faster and faster. And I think this goes to the idea of the personalized tutor—I mean, since way before generative AI, I’ve always believed that potentially the single biggest opportunity from AI was personalized education. Because we are all different. We all learn differently, we all have different interests, and we all get stuck. In classrooms—for those who go to school—it’s the same for everyone, with, if you’re lucky, a fraction of a teacher’s time for personalized interaction. So again, that takes the willingness and the desire to learn. But now we have access to what will be, very soon, some of the best, nicest, most interactive tutoring—well, not human. And I think that is critically different. But that then requires simply the desire. Kunal: Yeah, on the desire—I’m curious for your take on this. I’ve noticed the capabilities of AI are growing at a very fast rate, and it feels like it’s faster than the adoption of AI—the capabilities are growing at a faster rate than the adoption of those capabilities, and the gap is getting bigger. I was part of the smartphone revolution—2007, 2008—and built my business at that moment. And that was an example where the capabilities were ahead of the adoption, but we quickly caught up. And then social media—same thing. Capabilities were ahead of the consumer, but the consumer caught up. Cloud computing—same again. Capabilities grew, and then enterprises caught up pretty fast. So in previous tech waves, in my lifetime at least, there’s been an initial gap between capabilities and adoption, but it’s narrowed. And here, it feels like the reverse—the gap between the capabilities and the adoption is getting bigger. And I’m curious if you agree with that. And, I guess more importantly, what are the implications of that? And, I guess, the opportunities. Ross: Well, I think there’s always been this spectrum of uptake—from the internet through to every other technology—from the early adopters through to the laggards. And now that is becoming far more accentuated, in that there are plenty of people who have never tried an AI tool at all, and there are plenty of people who spend their days, like you, interacting with the systems and learning how to use them better. And this is an amplifier, in that those who are on the edge are more able to learn more and keep closer to the edge, and those who are not involved are literally falling further behind. And this is one of the very concerning potentials for augmenting the divides that we have in society—between wealth and income and access to opportunity. So I think it is real, and I think it is in the nature of the gap to increase over time. Kunal: Yeah, yeah. In the book, I talk about this moment when AI goes from being a noun to a verb. Like, we’ve learned to speak, to walk, to write, to read—and then to AI—introducing this idea of AI literacy. And it boggles my mind that in a lot of parts of the world, schools are banning AI for kids. That horrifies me, knowing that this is going to be as important as reading and writing. Ross: Yeah, no, I think that’s absolutely true.
In our recent episode with Nisha Talagala, who runs AI literacy programs across schools around the world, she’s doing some extraordinary work there. It’s really inspiring, and she’s obviously doing a very good job of bringing those principles to life. But yeah, I think that’s really true, and that’s a great conclusion—bringing that journey from the book and what we’ve looked at to, I suppose, these next steps of how we use these tools, as you say, as a verb, not a noun. So where can people go to find out more about your work? Kunal: Yeah. So there’s my book 2034, and my other books—find them all on Amazon, Audible, and free on Spotify, including an AI-narrated version in my voice reading them to you. And then my website, kunalgupta.live, and I have an AI newsletter called pivot5.ai—the number five—a daily newsletter that goes to a few hundred thousand people, with top-line summaries for a business leadership audience. Ross: Awesome. Thanks so much. Really appreciate your time and your insights. Kunal: Thank you. The post Kunal Gupta on the impact of AI on everything and its potential for overcoming barriers, health, learning, and far more (AC Ep86) appeared first on Humans + AI.
Apr 16, 2025 • 40min

Lee Rainie on being human in 2035, expert predictions, the impact of AI on cognition and social skills, and insights from generalists (AC Ep85)

In this engaging discussion, Lee Rainie, Director of the Imagining the Digital Future Center, dives into the implications of AI on work and identity. He raises critical points about human traits at risk of obsolescence and the potential for overreliance on machines. Rainie emphasizes the importance of creativity and emotional intelligence amidst technological advancements, urging listeners to reflect on future societal norms. He shares insights on developing a more comprehensive understanding of expert predictions while reminding us that humans inherently seek value and connection.
Apr 9, 2025

Kieran Gilmurray on agentic AI, software labor, restructuring roles, and AI native intelligence businesses (AC Ep84)

“Let technology do the bits that technology is really good at. Offload to it. Then over-index and over-amplify the human skills we should have developed over the last 10, 15, or 20 years.” – Kieran Gilmurray About Kieran Gilmurray Kieran Gilmurray is CEO of Kieran Gilmurray and Company and Chief AI Innovator of Technology Transformation Group. He works as a keynote speaker and fractional CTO, delivering transformation programs for global businesses. He is the author of three books, most recently Agentic AI. He has been named a top thought leader on generative AI, agentic AI, and many other domains. Website: Kieran Gilmurray X Profile: Kieran Gilmurray LinkedIn Profile: Kieran Gilmurray Book: Free chapters from Agentic AI by Kieran Gilmurray Chapter 1: The Rise of Self-Driving AI Chapter 2: The Third Wave of AI Chapter 3: Agentic AI – Mapping the Road to Autonomy Chapter 4: Effective AI Agents What you will learn Understanding the leap from generative to agentic AI Redefining work with autonomous digital labor The disappearing need for traditional junior roles Augmenting human cognition, not replacing it Building emotionally intelligent, tech-savvy teams Rethinking leadership in AI-powered organizations Designing adaptive, intelligent businesses for the future Episode Resources People John Hagel Peter Senge Ethan Mollick Technical & Industry Terms Agentic AI Generative AI Artificial intelligence Digital labor Robotic process automation (RPA) Large language models (LLMs) Autonomous systems Cognitive offload Human-in-the-loop Cognitive augmentation Digital transformation Emotional intelligence Recommendation engine AI-native Exponential technology Intelligent workflows Transcript Ross Dawson: Hey, it’s fantastic to have you on the show. Kieran Gilmurray: Absolutely delighted, Ross. Brilliant to be here. And thank you so much for the invitation, by the way. Ross: So agentic AI is hot, hot, hot—these new levels of autonomous or semi-autonomous AI. You’ve got a new book out on agentic AI, particularly looking at the future of work, and I particularly want to look at work and amplifying cognition. So I want to start off by asking: what is different about agentic AI from the generative AI we’ve had for the last two or three years, in terms of our ability to think better, to perform our work better, to make better decisions? What is distinctive about this layer of agentic AI? Kieran: I was going to say, Ross, comically: nothing, if we don’t actually use it. Because it’s like all the technologies that have come over the last 10–15 years. We’ve had every technology we have ever needed to make work more efficient, more creative, more innovative, and to get teams working together a lot more effectively. But let’s be honest, technology’s dirty little secret is that we as humans very often resist it. So I’m hoping we don’t resist this technology like the others we have resisted in the past, though we’ve eventually come around to working with all of them. But this one is subtly different. Agentic AI is another artificial intelligence system. To see the difference, take what I describe as the digital workforce, or digital labor: go back eight years to robotic process automation, which was very much about helping people perform what were meant to be end-to-end tasks.
So in other words, the robots took the bulky work, the horrible work, the repetitive work, the mundane work and so on—all vital stuff to do, but not where you really want to put your teams, not where you really want to spend your time. And usually, all of that mundaneness sucked creativity out of the room. You ended up doing it most of the day, got bored, and then never did the innovative, interesting stuff. Agentic is still digital labor sitting on top of large language models. The difference here is that it is meant to be able to act autonomously. In other words, you give it a goal and off it goes, with minimal or no human intervention—you can design it either way. And the systems are meant to be more proactive than reactive. They plan, they adapt, they operate in more dynamic environments. They don’t really need human input. You give them a goal, and they try to make some of the decisions. And the interesting bit is, there is—or should be—a human in the loop in this. A little bit of intervention. But the piece here, unlike RPA—that was RPA 1, I should say, not the later versions, because it’s changed—is its ability to adapt, to reshape itself, and to relearn with every interaction. Take it at the most basic level: look at a robot under the sea trying to navigate, to build pipelines. In the past, it would get stuck, and a human would need to intervene to fix it. Now it’s starting to work things out and determine what to do itself. If you take that into business, for example, you can now get a group of agentic agents to go out and do an analysis of your competitors. You can get another agent to do deep research—on McKinsey, BCG, or something else. You can get another agent to bring that information back, distill it, and assemble it; get an agent to turn that into an article; get another agent to proofread it; get another agent to pop it up onto your social media channels and distribute it; and get another agent to SEO-optimize it and check and reply to any comments that anyone’s making. You’re sort of going, “Hang on, that feels quite human.” Well, that’s the idea of this. Now, we’ve got generative AI, which creates. The problem with generative AI is that it didn’t do. In other words, after you created something, the next step was: well, what am I going to do with my creation? Agentic AI is that layer on top where you’re now starting to go, “Okay, not only can I create—I can decide, I can do and act.” And I can now make up for some of the fragility that exists in existing processes where RPA would have broken. Now I can go from A to B to D to F to C, and if suddenly G appears, I’ll work out what G is. If I can’t work it out, I’ll come and ask a person. Now I understand G, and I’ll keep going forever and a day. Why is this exciting—or interesting, I should say? Well used, this can now make up for all the fragility of past automation systems, where they always got stuck and we needed lots of people and lots of teams to build them. Whereas now we can let them get on with things. Where it’s scary is that now we’re talking about potentially human-level cognition. So, therefore: what are teams going to look like in the future? Will I need as many people? Will I, as a leader, be managing agentic agents plus people? Agentic agents can work 24/7—so am I, as a manager, now going to be expected to do that?
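For readers who want to see the shape of the agent chain Kieran describes, here is a minimal sketch. It is illustrative only: the `call_llm` helper is a hypothetical stand-in for a real model API, the roles and prompts are invented rather than taken from his book, and tool use (web search, posting to social channels) is omitted.

```python
# Each "agent" here is just a role-specific prompt around a single LLM call.

def call_llm(system: str, user: str) -> str:
    # Placeholder so the pipeline structure can be exercised offline;
    # swap in a real model API here.
    return f"[{system}] draft based on: {user[:60]}..."

def agent(role: str):
    """Wrap a role description into a single-step agent."""
    def run(task: str) -> str:
        return call_llm(system=f"You are a {role}.", user=task)
    return run

research  = agent("competitor research analyst")
writer    = agent("article writer")
proofread = agent("proofreader and fact-checker")
seo       = agent("SEO optimizer")

def content_pipeline(topic: str) -> str:
    """Chain the agents: research -> article -> proofread -> SEO."""
    notes = research(f"Summarize the competitive landscape for: {topic}")
    draft = writer(f"Turn these research notes into an article:\n{notes}")
    clean = proofread(f"Proofread and tighten this draft:\n{draft}")
    return seo(f"Optimize the headline and keywords for:\n{clean}")

print(content_pipeline("mid-market CRM vendors"))
```

The autonomy Kieran points to comes from letting such a chain plan, retry, and branch on its own; a fixed sequence like this is only the simplest possible case.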
And then there is its impact on the skills we need—not just leadership, but digital and data and technical skills and everything else. There’s a whole host of questions here—as many as there is new technology, Ross. Ross Dawson: Yeah, yeah, absolutely. And those are some of the questions I want to ask you for the best possible answers we have today. In your book, you do emphasize that this is about augmenting humans—how we can work with the machines, how they can support us, with human creativity and oversight at the center. But in what you’ve just laid out, there’s a lot of overlap with what is human work. So as a first step, let’s think about individuals—professionals, knowledge workers. There have been a few layers. You’ve had your tools, your Excels. You’ve had your assistants, which can go and do tasks when you ask them. And now you have agents which can go through sequences and flows of work in knowledge processes. So what does that mean today for a knowledge worker, as the enterprise starts to bring these in and says, “Well, this is going to support your work”? What are the sorts of things that are manifest now for an individual professional as this agentic workforce comes into play? What are the examples? What are ways to see how this is changing work? Kieran Gilmurray: Yeah, well, let’s dig into that a little bit, because there are a couple of layers to this. If you look at what AI can potentially do through generative AI, all of a sudden the question becomes: why would I actually hire new trainees, new labor? If you look at any of the studies produced recently, there are two setups. Let me do the first, which is: actually, we don’t need junior labor, because junior labor takes a long time to learn something. Whereas now we’ve got generative AI and other technologies, I can ask it any question that I want, and it’s going to give me a pretty darned good answer. And therefore, rather than taking three, four, or five years to train someone to get them to a level of competency, why don’t I just put in agentic labor instead? It can do all that lower-level work, and I don’t need to spend five years on learning—I immediately have an answer. Now, that’s still contestable, because the technology isn’t good enough yet. It’s like the first scientific calculators—they didn’t quite work. Now we don’t even think about them. So there is a risk that, all of a sudden, agentic AI or generative AI can get me an answer that previously would have taken six or eight weeks. Let me give you an example. I was talking to a professor from Chicago Business School the other day, and he went to one of his global clients. Normally the global client will ask about a strategy item, and he would go away—he and a team of juniors and peers would research the topic over six or twelve weeks. Then they would come back with a detailed answer, where the juniors would have gone around and done all the grunt work, all the searching and everything else, and the seniors would have distilled it. This time, he—and he’s actually written a version of a GPT—fed it past strategy documents and fed in the client details. He did this in a private GPT, so it was clean and clear, and in two and a half hours, he had an answer.
Literally—his words, not mine—he went back to the client and said, “There you go. What do you think? By the way, I did that with generative AI and agentics.” And they went, “No, you didn’t. That work’s too good. You must have had a team on this.” And he said, “Literally not.” And he’s being genuine, because I know the guy—he’d put his reputation on it. So all of a sudden, all of those roles that might have existed could be impacted. But then where do we get the next generation of labor to come through in five, six, and ten years’ time? So there are going to be a lot of decisions that need to be made. Look: we’ve got Gen AI, we’ve potentially got agentic AI. We normally bring in juniors; over a period of time they gain knowledge, and as a result of gaining knowledge they gain expertise, and as a result of gaining expertise we get better answers, and they get more and more money. But now Gen AI is resulting in knowledge costing almost nothing. Where you and I would have gone to university—let’s say we did a finance degree—that would have lasted us 30 years. Career done. Tick. Now, Gen AI can pretty much understand, or will understand, everything we can learn in a finance degree, plus a politics degree, plus an economics degree, plus, plus, plus—all out of the box for $20 a month. And that’s kind of scary. So when it comes to who we hire, that opens up the question: we have Gen AI and agentic labor, so do we actually need as many juniors? Now, someone’s going to have to press the buttons for the next couple of years, and any foresighted firm is going to go, “This is great, but people plus technology actually makes a better answer. I just might not need as many.” So when it comes to the actual hiring and decision-making—how am I going to construct my labor force inside an organization—that’s quite a tricky question, if and when this technology, Gen AI and agentics, really ramps through the roof. Ross Dawson: I think these are fundamentally strategic choices to be made. Crudely, it’s automate or augment. You could say, “Okay, how do we automate as many of the current roles as we have?” Or you can say, “I want to augment all of the current roles we have, junior through to senior.” And there are a lot more subtleties around those strategic decisions. In reality, most organizations will be somewhere between those two extremes. Kieran Gilmurray: 100%. And that’s the question. Or potentially, at the moment, it’s actually, “Why don’t we augment for now?” Because the technology isn’t good enough to replace. And it isn’t—it still isn’t. And I’m a fan of people, by the way—don’t get me wrong. Anyone listening to this should hear that. I believe great people plus great technology equals an even greater result. The research coming out at the moment—from Harvard, Ethan Mollick, HBR, Microsoft, you name it—says that if you give people Gen AI technology, of which agentic AI is one component: “I’m more creative. More productive. And, oddly enough, I’m actually happier.” It’s breaking down silos. It’s allowing me to produce more output—between 10 and 40% more—but also higher-quality output, and so on. So at the moment, it’s an augmentation tool. But we’re training, to a degree, our own replacements.
Every time we click a thumbs up or a thumbs down, every time we redirect the agentics or the Gen AI to teach it to do better things—or the machine learning, or whatever else it is—then technically, we’re making it smarter. And every time we make it smarter, we have to decide, “Oh my goodness, what are we now going to do?” Because previously, we did all of that work. Now, that for me has never been a problem, because with all of the technologies over the decades, everybody has panicked that technology is going to replace us—and we’ve grown the number of jobs. We’ve changed jobs. Now, this one—will it be any different? The reason I say potentially is that you and I never worried, and our audience never worried too much, when an EA was potentially automated, when the taxi driver was augmented and automated out of a job, when the factory worker was augmented out of a job. Now we’ve got a decision, particularly when it comes to so-called knowledge work. Because remember, that’s the expensive bit inside of a business—the $200,000 salaries, the $1 million salaries. Now, as an organization, I’m looking at my cost base, going, “Well, I might actually bring in juniors and make them really efficient, because I can get a junior to be as productive as a two-year-qualified person within six months, and I don’t need to pay them that amount of money.” And/or, “Why don’t I get rid of my seniors over a period of time? Because I just don’t need as many.” Ross Dawson: Those are things that some leaders will do. But it comes back to the theme of amplifying cognition. The real nub of the question is: yes, you can say, “All right, we are training the machine, and the machine gets better because it’s interacting—we’re giving it more work.” But it’s really about finding the ways in which how we interact also increases the skills of the humans. John Hagel talks about scalable learning. In fact, Peter Senge used to talk about organizational learning—and that’s no different today. We have to be learning. So, as we engage with the AI—and, as you rightly point out, we are teaching and helping the AI to learn—we need to build the processes, systems, structures, and workflows where the humans in them are not static and stagnant as they use AI more, but become more competent and more capable. Kieran Gilmurray: Well, that’s the thing we need to do, Ross. Otherwise, what we end up with is something called cognitive offload—where, all of a sudden, I get lazy, I let AI make all of the decisions, and over time, I forget and am no longer valuable. For me, this is a question of great potential with technology. But the real question comes down to: okay, how do we employ that technology? And, to your point a second ago—what do we do as human beings to learn the skills that we need to learn to be highly employable? To create, to be more innovative, more creative using technology? Ross Dawson: You’ve just asked the question I was about to ask you. Kieran Gilmurray: 100%, and this is—this is literally the piece here, so— Ross: That’s the question. So do you have any answers to that? Kieran: Of course. Of course. Well, here’s mine. AI is massive. And let me explain that, because everybody thinks it’s only been around for the last couple of years of generative AI—but AI has been around for 80-plus years. It’s what I call an 80-year-old overnight success story.
Everybody’s getting excited about it. Remember, the excitement is down to the fact that I—or you—can now interact with technology in a very natural sense and get answers that I previously couldn’t. So now, all of a sudden, we’re experts in everything across the world. And if you use it on a daily basis, all of a sudden our writing is better, our output’s better, our social media is better. So the first bit is: just learn how to use and interact with the technology. Now, we mentioned a moment ago—but hold on a second here—what happens when everybody uses it all the time, the AI has been trained, and there’s a whole host of new skills? What will I do then? Well, this for me has always been the case. Technology has always come. There are a lot fewer saddlers than there are software engineers. There might be a lot fewer software engineers in the future. So what do we do? My answer is this, and it has always been the same, regardless of the technology: let technology do the bits that technology is really good at. Offload to it. You still need to understand and develop your digital, your AI, your automation, your data literacy skills—without a doubt. You might do a little bit of offloading, because now we don’t actually think about scientific calculators; we just get on with it. We don’t go into Amazon and manually work out all of our product choices, because it’s got a recommendation engine. So let it keep doing all its stuff. Whereas, as humans, I want to develop greater curiosity. I want to develop what I would describe as greater cognitive flexibility. I want to use the technology—now that I’ve got it—to produce even better, greater outputs and outcomes, better-quality work, more innovative work. And part of that is going, “Okay, let the technology do all of its stuff. Free up tons of hours,” because what used to take me weeks takes me days. Now I can do other stuff, like wider reading. I can partner with more organizations. I can attempt to do more things in the day—whereas in the past, I was just too busy trying to get the day job done. The other bit I would say: companies need to develop emotional intelligence in people. Because now, if I can get the technology to do the stuff, I need to engage with tech—but more importantly, I’m now freed up to work across silos, to work across businesses, to bring in different partner organizations. And statistically, only 36% of us are actually emotionally intelligent. Now, AI is an answer for that as well—but emotional intelligence should be something I would be developing inside an organization. A continuous innovation mindset. And I’d be teaching people how to communicate even better. Notice I’m letting the tech do all the stuff that tech should do regardless. Now I’m just over-indexing and over-amplifying the human skills that we should have developed over the last 10, 15, or 20 years. Ross Dawson: Yeah. And so your point—this comes back to people working together. I think that was certainly one of the interesting parts of your book: team dynamics. We have agentic systems, and this starts to change the nature of workflows. Workflows involve multiple people, and they involve AI agents as well. So as we think about teams—as in multiple humans assisted by technology—what are the things we need to put in place for effective team dynamics and teamwork?
Kieran Gilmurray: Yeah, so—look, what you will potentially see moving forward is that mixture of agentic labor working with human labor. And therefore, from a leadership perspective, we need to teach people to lead in new ways. How do I apply agentic labor and human labor, and in what proportion? What bits do I get agentic labor to do? What bits do I get human labor to do? Again, we can’t hand everything over to technology. When is it that I step in? Where do I apply humans in the loop? Agentic labor is going to be able to do things 24/7, but as people, we physically and humanly can’t. So when am I going to work? What is the task that I’m going to perform? As a leader or as a business—what are the KPIs that I’m going to measure myself on, and my team on? Because now, all of a sudden, my outputs could potentially be greater, or I’m asking people to do different roles than they’ve done in the past, because we can get agentic labor to do it. So there’s a whole host of what I would describe as current management considerations. Because, let’s be honest—like when we introduced ERP, CRM, factory automation, or something else—it just changed the nature of the tasks that we perform. So this is thinking through: where is the technology going to be used? Where should we not use it? Where should we put people? How am I going to manage it? How am I going to lead it? How am I going to measure it? These are just the latest questions that we need to answer inside of work. And again, from a skillset perspective—both for leadership and for getting my human labor team to do particular work—how do I onboard them? How do I develop them? What are the skills that I’m now looking for when I’m recruiting? What are the career paths that I’m going to put in place, now that we’ve got human plus agentic labor working together? Those are all conversations that managers, leaders, team leaders, and strategists need to have inside of businesses. But it shouldn’t worry businesses, because we’ve had this same conversation for the last five decades. It’s just been different technology at different times, where we had to suddenly reinvent what we do, how we do it, how we measure it, and how we manage it. Ross Dawson: So what are the specifics of how team dynamics might work using agentic AI in a particular industry or situation? Any examples? Let’s ground this. Kieran Gilmurray: Yeah, so let me ground it in physical robots before I come to software robots—because that is what this is: software labor, not anything else. Look at how factories have evolved over the years. Take Cadbury’s factory in the UK. At one stage, Cadbury’s had thousands and thousands of workers, and everybody engaged on a very human level—managing people, conversations every day, orchestration, organization. All of the division-of-labor stuff happened. Now, when you go into Cadbury’s factory, it’s hugely automated—like other factories around the world. So now we’re having to teach people almost to mind the robots, and we have far fewer people inside our organizations. Hopefully—to God—this won’t happen in the same way in the knowledge worker world, but we are going to teach people how to build logical, organized, sequential things. Because breaking something down into a process to build a machine—it’s the same thing when it comes to software labor.
How am I going to break a process down and deconstruct it into something else? The mindset needed to actually put software labor into place differs from anything else that we’ve done. Humans were messy. Robots can’t be. They have to be very logical pieces. In the past, we were used to dealing with each other. Now I’m going to have to communicate with a robot. That’s a very different conversation. It’s non-human. It’s silicon — not carbon. So how do I engage with a robot? Am I going to be very polite? And I see a lot of people saying, “Please, would you mind doing the following?” No — it’s a damn robot. Just tell it what to do. My mindset needs to change. In the past, when I was asking someone to do something, I might say, “Give me three things” or “Can you give me three ideas?” Now, I’ve got an exponential technology where my expectations and requests of agentic labor are going to vary. But I need to remember — I’m asking a human one thing and a bot another. Let me give you an example. I might say to you, “Ross, give me three examples of…” Well, that’s not the mindset we need to adopt when it comes to generative AI. I should be going, “Give me 15, 50, 5,000,” because it’s a limitless vat of knowledge that we’re asking for. And then I need to practice and build human judgment — to say, “Actually, I’m not going to cognitively offload and let it think for me and just accept all the answers.” I’m now going to have to work with this technology and other people to develop that curiosity, develop that challenging mindset, to teach people how to do deeper research, to fact-check everything that I’m being told. To understand when I should use a particular piece of information that’s been given to me — and hope to God it’s not biased, hallucinated, or anything else — but is actually a valuable knowledge item that I should be putting into a workflow or a project or a particular document or something else. So again, it’s just working through: what is the technology in front of me? What’s it really good at? Where can I apply it? And understanding that — where should I put my people, and how should I manage both? What are the skills that I need to teach my people — and myself — to allow me to deal with all of this potentially fantastic, infinite amount of knowledge and activity that will hopefully autonomously deliver all the outcomes that I’ve ever wanted? But not unfettered. And not left to its own devices — ever. Otherwise, we have handed over human agency and team agency — and that’s not somewhere we should ever go. The day we hand everything to the robots, we might as well just go to the care home and give up. Ross Dawson: We’ll be doing that soon. So now, let’s think about leadership. You’ve alluded to it in quite a few ways — a lot of this has really been about the questions and challenges that leaders at all levels need to engage with. But this changes, in a way, the nature of leadership. As you say, you’ve got digital labor as well as human labor. The organization has a different structure. It impacts the boundaries of organizations and the flows of information and processes across organizational boundaries. So what is the shift for leaders? And in particular, what are the things that leaders can do to develop their capabilities for a somewhat different world? Kieran Gilmurray: Yeah, it’s interesting. So I think there’ll be a couple of different worlds here.
Number one is, we will do what we’ve always done, which is: we’ll put in a bit of agentic labor, and we’ll put in a bit of generative AI, and we’ll basically tweak how we actually operate. We’ll just make ourselves marginally more efficient. Because anything else could involve the redesign and the restructure of the organization, which could involve the restructure and the redesign of our roles. And as humans, we are very often very change-resistant. Therefore, I don’t mind technology that I understand, and I don’t mind technology that makes me more productive, more creative. But I do mind technology that could actually disrupt how I lead and where I actually fit inside of the organization. So for those leaders, there’s going to be a minimal amount of change — and there’s nothing wrong with that. That’s what I call the “taker philosophy,” because you go: taker, maker, shaper — and I’ll walk through those in a second — which is, I’ll just take another great technology and I’ll be more productive, more creative, more innovative. And I recommend every business does that at this moment in time. Who wouldn’t want technology doing greater things for you? So go — box number one. And therefore, the skills I’m going to have to learn — not a lot of difference. Just new skills around AI. In other words, understanding bias, hallucinations, understanding cognitive offloading, understanding where to apply the technology and where not to. And by “not,” I mean: very often people point technology at something that has no economic value — wasting time, wasting money, wasting energy, frustrating staff. So those are just skills people have to learn. As I’ve said, it could be any technology. The other method of doing this is almost what I describe as the COVID method. I need to explain that statement. When COVID came about, we all worked seamlessly. It didn’t matter. There were no boundaries inside of organizations. Our mission was to keep our customers happy. And therefore, it didn’t matter about the usual politics, the usual silos, or anything else. We made things work, and we made things work fast. What I would love to see organizations doing — and very few do it — is redesign and re-disrupt how they actually work. And I’m sitting there going: it’s not that I’m doing what I’m doing and I’ve now got a technology — “Where do I add it on?” — as in two plus one equals three. What I’m sitting there saying is: how can I fundamentally reshape how I deliver value as an organization? Working back from the customer — who will pay a premium for this — how do I reconstruct my entire business in terms of leadership, in terms of people, in terms of agentic and human labor, in terms of open ecosystems and partnerships and everything else — to deliver in a way that excites and delights? If we take the difference between a bookstore and Amazon — I never, or rarely, go into a bookstore anymore. I now buy on Amazon almost every time, without even thinking about it. If I look at AI-native labor — they’re what I describe as Uber’s children. Their experiences of the world and how they consume are very different from what you and I have constructed. Therefore, how do I create what you might call AI-native intelligent businesses that deliver in a way that is frictionless and intelligent?
And that means: intelligent processes, intelligent people, using intelligent technology, intelligent leadership — forgetting about the silos and breakdowns and everything else that exists politically inside of organizations — and applying the best technology. Be it agentics, be it automation, be it digital, be it CRM, ERP — it doesn’t really matter what it is. Having worked back from the customer, design an organization to deliver on its promise to customers — to gain a competitive advantage. And those competitive advantages will last for less and less time. Technology can be copied ever more quickly. Therefore, my business strategy won’t be 10 years. It possibly won’t be five. It might be three — or even less. But my winning as a business will be my ability to construct great teams. And those great teams will be great people plus great technology — to allow me to deliver something digitally and intelligently to consumers who want to pay a premium, for as long as that advantage lasts. And it might be six months. It might be twelve months. It might be eighteen months. So now we’re getting to a phase of almost fast technology — just like we have fast fashion. But the one thing we don’t want to do is play fast and loose with our teams. Because ultimately, I still come back to the core of the argument — that great people who are emotionally intelligent, who’ve been trained to question everything that they’ve got, who are curious, who enjoy working as part of a team in a culture — that piece needs to be taken care of as well. Because if you just throw robots at everything and leave very few people, then what culture are you actually trying to deliver for your staff and for your customers? How do I get all of this to deliver in a way that is effective, affordable, operationally efficient, and profitable — but with great people at the core, who want to continue being curious, creating new and better ways of delivering in a better organization? Not just in the short term — because we’re very short-termist — but how do I create a great organization that endures over the next five or ten years? By creating flexible labor and flexible mindsets, with flexible leaders organizing and orchestrating all of this — to allow me to be a successful business. Change is happening too quickly these days. Change is going to get quicker. Therefore, how do I develop an adaptive mindset, an adaptive labor force, and an adaptive organization that’s going to survive six months, twelve months — and maybe, hopefully to God, eighteen months plus? Ross Dawson: Fantastic. That’s a great way to round out. So where can people find out more about your work? Kieran Gilmurray: Yeah, look, I’m on LinkedIn all the time — probably too much. I should get an agentic labor force to sort that out for me, but I’d much prefer authentic relationships to anything else. Find me on LinkedIn — Kieran Gilmurray. I think there are only two of me: one’s in Scotland, who is related some way back, and then there’s me, the Irish one. Or www.kierangilmurray.com is where I publish far too much and give far too much away for free. But I have a philosophy that a rising tide lifts all boats. So the more we share, the more we give away, the more we benefit each other. So that’s going to continue for quite some time. I have a book out on agentic AI. Again, it’s being given away for free. Ross, if you want to share it, please go for it, sir.
As I said, let’s continue this conversation — but let’s continue this conversation in a way that isn’t about replacing people. It’s about great leadership, great people, and great businesses that have people at their core, with technology serving us — not us serving the technology. Ross: Fabulous. Thanks so much, Kieran. Kieran: My pleasure. Thanks for the invite.
Apr 2, 2025

Jennifer Haase on human-AI co-creativity, uncommon ideas, creative synergy, and humans outperforming (AC Ep83)

“We humans often tend to be very restricted—even when we are world champions in a game. And I’m very optimistic that AI will surprise us, with very different ways of solving complex problems—and we can make use of that.” – Jennifer Haase About Jennifer Haase Dr. Jennifer Haase is a researcher at the Weizenbaum Institute, and a lecturer at Humboldt University and the University of the Arts Berlin. Her work focuses on the intersection of creativity, Artificial Intelligence, and automation, including AI for enhancing creative processes. She was named as one of the 100 most important minds in Berlin science. Website: Jennifer Haase Jennifer Haase LinkedIn Profile: Jennifer Haase What you will learn Stumbling into creativity through psychology and tech Redefining creativity in the age of AI The rise of co-creation between humans and machines How divergent and reverse thinking fuel innovation Designing AI tools that adapt to human thought Balancing human motivation with machine efficiency Challenging assumptions with AI’s unconventional solutions Episode Resources Websites & Platforms jenniferhaase.com ChatGPT Concepts & Technical Terms Artificial Intelligence (AI) Human-AI Co-Creativity Generative AI Large Language Models (LLMs) ChatGPT GPT-4 GPT-3.5 GPT-4.5 Business Informatics Psychology Creativity Divergent Thinking Convergent Thinking Mental Flexibility Iterative Process Everyday Creativity Alternative Uses Test Creativity Measures Creative Performance Transcript Ross Dawson: Jennifer, it’s a delight to have you on the show. Jennifer Haase: Thanks for inviting me. Ross: So you are diving deep, deep, deep into AI and human co-creativity. So just to step back a little bit—how did you embark on this journey? We can fill in more about what you’re doing now, but how did you come to be on this path? Jennifer: I would say overall, it was me stumbling into tech more and more and more. So I started with creativity. My background is in psychology, and I learned about the concept of creativity in my Bachelor studies, and I got so confused, because what I was taught was nothing like what I thought creativity was—or how it felt to me. It took me years to understand that there are a bunch of different theories, and it was just one that we were taught. But that was the spark of curiosity for me to try to understand this concept of creativity. And I did it for years. Then, by pure luck, I started a PhD in Business Informatics, which is somewhat technical. The lens through which I looked at creativity shifted from the psychological perspective more into the technical realm, and I looked at business processes and how they are advanced by general technology—basic software, basically. Then I morphed—also by sheer luck—into computer science from a research perspective. And that coincided with ChatGPT coming around, and this huge LLM boom happened two, three years ago. And since then, I’ve been deeply in there. I just fell into this rabbit hole. Ross: Yeah, well, it’s one of the most marvelous things. So the very first use case for most people, when they first use ChatGPT, is: write a poem in the style of whatever—essentially creative tasks. And it does those pretty decently to start off—until you start to see the limitations. Jennifer: Yeah, and I think it did so much, from so many different perspectives. As I said, I studied creativity for quite a while—but it was never as big of a deal, let’s say. It was just one concept of many.
But since AI came around, I think it really threatened, to some extent, what we understood about creativity, because it was always thought of as this pinnacle of humanness—right next to ethics. And I think intelligence had its bumps two or three decades ago, but for creativity, it was rather new. So the debate started over what it really means to be creative. I think a lot of people also try to make it even bigger than it is. But I think it can be as simple as this: a lot of creativity—in poetry, for example—is language understanding, right? And so LLMs are really good at it. And that’s just the case. It’s fine. I think we can still live happy lives as humans, even as technology takes a lot over. Ross: Yes. So humans are creative in all sorts of dimensions. AI has complementary—let’s say, also different—capabilities in creativity. And in some of your research, you have pointed to different levels of how AI is supporting us in various guises—from being a tool and assistant, through to what you described as co-creation. So what does that look like? What are some of the manifestations of human-AI co-creativity, which implies peers with different, complementary capabilities? Jennifer: Yeah, I think the easiest way to look at it is to imagine working creatively with another person who is really competent—but the person is a technical version of that, and usually we call that AI, right? Or generative AI these days. So the idea is that you can work with a technical tool on an eye-to-eye level. Really, the tool would have a—well, now we’re getting into the realm of using psychological terms, right—but the tool would have a decent enough understanding that it would appear competent in the field in which you want to create. I think the biggest difference we see with most common tools that we have right now—which I would argue are not on this level yet—is that tools like ChatGPT and others follow your lead, right? If you type in something, they will answer, sometimes more or less creatively. But you can take that as inspiration for your own creativity and your own creative process. And that really holds big potential. It’s great. But what we are envisioning—and seeing in some parts already happening in research—and I think this is the direction we’re going in and really want to achieve more of: tools that can also come up with ideas, or important input for the creative problem. When I say on their own, I don’t mean that they are, I don’t know, entities that just act. But they contribute a significant—a really significant—part of the creative process. Ross: So, I mean, we’ll come back a little bit to the distinctions between how AI creativity contrasts with human creativity. But just thinking about this co-creative process—from your research or other research that you’re aware of—what are the success factors? What are the things which mean that the co-creation process is more likely to be fruitful than not? Jennifer: I think it starts really with competence. And I think this is something, in general, that generative AI just became extremely good at, right? They know, so to speak, a lot and hold a lot of knowledge, and that is very, very helpful—because we need broad associations, coming mostly from different fields, and to connect those to come up with something we consider new enough to call creative. That is a benefit that is beyond human capabilities, right? That is one part of what we see those tools doing right now. But that is not all.
What you also need is the spark of: why would something need to be connected? And I think that is especially where raising the creative questions—coming up with the goal that you want to achieve—is still the human part. But—it doesn’t need to be. That’s all I’m saying. But still, it is. Ross: So, I mean, there are some very crude workflows—as in, you get AI to ideate, then humans select from those and add other ideas, or you start with humans and then AI combines and recombines. Are there any particular sequences or flows that seem to be more effective? Jennifer: It’s interesting. I think this is also an interesting question for human creative work alone, even without technology—like, how do you achieve the good stuff, right? And I think what you just described, for me, would be kind of like a traditional way: oh, I have a need, or I have a want—like, I want to create something, or I want to solve something, or I need a solution for a certain problem. And I describe that, and I iterate toward a best solution, right? This is part of what we call the divergent thinking process. And then, at a certain point, you choose a specific solution—so you converge. But I think where we mostly get the more interesting creative output—for humans, and now also especially with AI—is when you kind of reverse the process. So let’s assume you have a solution and you need to find issues for it. For example, you have an invention. I think—yeah, there’s this story told about the Post-its, you know, the yellow Post-its. They were kind of invented because someone came up with glue that does not stick at all—like, really bad glue. And they had this as the final product. Now it’s like, “Okay, where can you make use of it?” And then they came up with, “Oh, maybe, if you put it on paper, you can come up with these sticky notes that glue just enough.” So they hold on surfaces, but they don’t stick forever, so you can easily remove them. They’re very practical in our brainstorming work, for example. And this kind of reverse thinking process—it’s much more random. And for many people, it’s much more difficult to open up to all the possibilities that could be. What I’ve seen is that if you try to poke LLMs with such very diverse, open questions, what comes out can be very interesting. Ross: Though, to your point, I mean, this is the way—the human frames, the AI can respond. But the human needs to frame—as in, “Here is a solution. What are ways to apply it?” Jennifer: And all the examples—like, what I’m thinking of right now—are what is working with the tools that we have with LLMs. And I think what you were asking me before about the fourth level that we described with this co-creation—these are tools that work a bit differently. These are tools that, for now, mostly exist in research, because you still need a high level of computational knowledge. So, in the work that I did, the colleagues that I work with are computer scientists or mathematicians who program tools that know some rules of the game, or some—let’s call them—boundary conditions of the creative problem that we are dealing with. And then the magic—or the black box magic—of AI happens. And something comes out. And sometimes we don’t really understand what was going on there. We just see the results. And then, with such results, we can iterate. Or maybe something goes in the direction that we assume could be part of the solution.
So it becomes this iterative process between an LLM or AI tool doing something, us seeing the results, saying yes or no, nudging it in different directions, and so, overall, coming up with a potentially proper solution. This is—at least in the examples that we see. And if you have such a process and look over it—like, what was happening—often what we see is that LLMs or AI tools in general, with their, let’s call it, broad knowledge, or the very intense, broad computational capacities that they have, do stuff differently than we as humans tend to do. And this is where it becomes interesting, right? Because now we are not bounded by this common way of thinking and finding associations, or iterating smaller solutions. Now we have this interesting artificial entity that finds very different ways of solving complex problems—and we can make use of that. Of course, we can learn from that. Ross: Absolutely. And I think you’ve pointed to some examples in your papers. I mean—we’ve been quite conceptual so far—so what examples can you give, of either what people have done, or projects you’ve been involved with, or just types of challenges? Jennifer: I think—to explain the mechanism that I’m talking about—I think the first creative, artificial example, like the real, properly creative example, was when AlphaGo, the program developed to play Go—the game somewhat similar to chess, but not chess—was able to come up with play moves which were very uncommon. Still within the realm of possibilities, but very, very uncommon compared to how humans used to play. And so, I think this was new back in 2016, right? When this happened—when DeepMind, from Google, built this tool and kind of revolutionized AI research—what it showed us is exactly this mechanism of these tools. Although they are still within the realm of possibilities—still within what we consider the rules, right, of the game—it showed some moves which were totally uncommon and surprising. And I think this shows us that we humans often tend to be very restricted. Even when we are world champions in a game, we are still restricted to what we commonly do—what is considered a good rule of thumb for success. And I’m very optimistic that AI will surprise us, like in this direction—with this mechanism—quite a lot in the future. Ross: Yeah, and certainly, related to what you’re describing, some similar algorithms have been applied to drug discovery and so on. Part of it is the number-crunching, machine learning piece, but part of it is also being able to find novel ways of folding proteins or other combinations which humans might not have envisaged. Jennifer: Yeah, exactly. And it’s in part because these machines are just so much more advanced in how much information they can hold and combine. This is, in part, purely computational. It’s a bit unfair to compare that to our limited brains. But it’s not just that. It’s not just pure information, right? It’s also how this information is worked upon, or the processes—how information is combined, etc. So I think there are different levels at which these machines can advance our thinking. Ross: So one of the themes you’ve written about is designing for synergies—how we can design so that we are able to be complementary, as opposed to just delegating or substituting with AI. So what are those design factors, or design patterns, or mentalities we need?
Jennifer: Well, I will propose, first up—I think it’s extremely complicated. Not complicated, but it will become a huge issue. Because, let’s say, if technology becomes so good—and we see that right now already with LLMs like ChatGPT—it’s so easy for us. And I mean that in a very neutral way. But lazy humans as we are—I think we are inherently lazy—it’s really tough for us to stay motivated to think on our own, to some degree at least, and not have all the processes taken over by AI. So, saying that, I think the most essential, most important part whenever we are working with LLMs is: we have to keep our motivation in the loop—and our thinking, to some degree, in the loop—within the process. And so, we need a design which engages us as humans. I think it’s easily seen right now with LLMs. You need to take the first step—typing some kind of prompt, or even in a conversation—you have to initiate it, right? You have to come up with, maybe even, your creative task at first. And I think this will always be true, because we humans control technology by developing it, right? But even when you’re more on the user end—forcing us to be in the loop, and thinking it through, and controlling the output, etc.—is one part. But I think what it also needs, especially for the synergy, is for the technology to adapt to us—to serve us, so to speak. And I think this is an aspect that is a little bit underdeveloped right now. What do I mean by that? I want a tool that serves me in my thinking. It should be competent enough that I perceive it as a buddy—eye to eye. That is the vision that I have. But I still always want the control. And I want it to adapt to me, so that I don’t have to adapt too much to the tool. Right now, we’re mostly just provided with tools that we need to learn how to deal with. We need to understand how prompting works, etc., etc. And I want that reversed. I want tools which are competent enough to understand, “Okay, this is Jenny. She is socialized in this way. She usually speaks German”—whatever kind of information would be important to get me involved and understand me better. I think this is the vision for synergy that I’m thinking of. Ross: No, I really like that—the idea of designing for engagement. What is going to make us want to be engaged, continue the process, and stay involved, as opposed to just doing the hard work of telling the AI, over and over, to do stuff? Jennifer: Yes, and also sometimes—I mean, I work a lot with ChatGPT and other similar tools—and sometimes, I hope I don’t spoil too much, but sometimes I find myself copy-pasting too much because there’s nothing left for me to do. And to some degree, it can happen that the tools are too good, right? Because they are meant to create the output as the output, but they are not meant to be part of this iterative thinking process. I think you can design it much better and easier, to go hand in hand with what I’m thinking and what I want to advance. Maybe. Ross: Yeah, yes—otherwise the onus is on the human to do it all. So in one of your papers, you used a number of the different models, and I believe you found that GPT-4 was the best for a variety of ideation tasks. But you’ve also done some more recent research. I’d love to hear about strengths, weaknesses, or different domains in which the different models are good. Jennifer: Yeah, that’s quite interesting, right?
Because—okay, going back to the start of the big—let’s call it the big boom of LLMs, right? I think it was early ’23, right, when ChatGPT came around. End of ’22. Okay, so it took a while until it reached Germany—for us, at least. No, just joking. But okay, around this time, what we found were intense debates arguing that, although these tools are generative, they cannot be creative. And that was the stance held most tightly—maybe especially by creativity researchers, mostly psychologists, right? As I mentioned before, there’s a little bit of this fear that too much is being taken over by technology. I think that is a strong contributor—even among researchers. So what we went out to do is—we basically wanted to give LLMs the same creativity measures as we would give humans. Like, when you want to know if a person holds potential for creative thinking, you ask them creative questions, and they have to perform—if they want to. And that’s exactly what we did with LLMs. Back in the day, we did it with the LLMs that were easily reachable and free on the market—like ChatGPT. And now, we redid it with the current LLMs, with the current versions. And—I don’t know if you’ve seen that—but most LLMs are advertised, when the new versions come out, as more competent and more creative. And so we questioned that. Is that really true? Is ChatGPT 4.5, for example—the current version—more creative than 3.5 back in the day? And what we find is—it’s so messy, actually. Because for some tools, yes, they are a bit more creative than they used to be two years ago. But the picture is really not clear. You cannot really argue that the current versions we have are more creative than two years ago—or even more creative than humans. It’s been interesting. We’re not really sure why. But all we can say is that, on average, these tools are good at coming up with everyday-like uses or everyday-like ideas for everyday problems. They are, on average, as good as humans—random humans picked from surveys. And I think that is good news, right? Because LLMs are easier to ask than random humans most of the time. But the promise that they become more and more creative with every new release, from our perspective, does not hold up. So that is the bigger, bigger picture. Let’s start there. Ross: So that’s very interesting. So this is using some of the classic psychological creativity tests. And so you’re applying what has long been used for assessing creativity in humans, and simply applying exactly the same tests to LLMs? Jennifer: And to be fair, within the creativity research community, we agree that those tests are not good. Okay, they’re really pragmatic. We totally agree on that, so we do not have to fight over this point. But it’s commonly what we use to assess human potential for creative thinking—or, more precisely, for divergent thinking—which is an important, but just one, aspect of the whole creative journey, let’s say. And it basically just asks how good you are, on the spot, at coming up with alternative uses for everyday products like a shoe or a toothbrush or a newspaper. And of course, you can come up with obvious uses. But then there are the creative ones, which are not so easy to think of, right? And LLMs are good at that. They will deliver a lot of ideas, and quite a few of those are considered original compared to human answers.
We also now used another test, which is even a little bit more arbitrary, but it has proved to be somewhat of a good predictor of overall creative performance. And that is: you are asked to come up with 10 words which are as different from each other as possible. So, very pragmatic again. And these LLMs—as they, you know, know one thing, and that is language—are, again, quite good at that on average. But it’s not that you see that they are above average, or that a specific LLM would be above average. We see some variety, but the picture, I would say, is not too clear. And also, to mention—which was a little bit surprising to us, actually—we asked those LLMs several times, like, a lot of times, and the variance in terms of originality is quite huge. So if you ask an LLM like ChatGPT for creative ideas, sometimes you can get quite a creative output, and sometimes it’s just average. Ross: So you did say that you’re comparing them to random humans. So does that mean that generally perceived-to-be-creative humans are significantly outperforming the LLMs on these tasks? Jennifer: Yeah, yeah. But the thing is, there is usually no creative human per se—there’s nothing about a human that makes them per se creative. We tend to differ a little bit in how well we perform on such tasks. Yes, we do differ in our mental flexibility, let’s say. But a creative individual is usually an individual who has found a very good fit between their thinking, their experience, and the kind of creative task they’re doing. And just think about it—creativity can be found in all sorts of domains, right? And people can be good or less good in those domains, and that correlates highly with creativity. So when we ask about the general—like, ideas for everyday tasks—there is not really the creative individual, right? There are motivated individuals, which makes a huge difference for creativity measures. If you’re motivated and engaged—that is something we take for granted. For LLMs, I guess, if you compare them, the motivation is there. But what we see in terms of the best answers—the most original answers in our data sets—most of the time, not all, but most of the time, they come from humans. Ross: Very interesting. So, this is the Amplifying Cognition podcast, so I want to round up by asking: all right, what’s the state of the nation, or the state of the world, in terms of where we are moving in being able to amplify and augment human cognition and human creativity? I suppose that could be either just improving human creativity, or collaborating, or, you know, this co-creativity. Jennifer: I think the potential for significant improvements and amplification has never been better. But at the same time as I’m saying that, I think the risks have never been higher. And that is because, as I said, we are lazy people. That’s just part of what being human means—and that is fine—but it also means that we run a great risk of not using these technologies for us, but being used by them, basically, right? So we can use ChatGPT and other tools to do the task for us, or we can use them to do the task more efficiently and better with them. I think this difference can be very gradual, very minor, but it makes the whole difference between success and big dependencies—and potentially failure. Ross: Yeah, and I think you make a point—which I often also do—that over-reliance is potentially the biggest risk of all.
Where, if we start to just say, “This is good, I’ll let the AI do the task, or the creativity, or whatever,” it’s dangerous on so many levels. Jennifer: Because it does a good enough job most of the time, right? Technology has become so good at many tasks—not all, but many—that it does them well enough. And I think that is exactly where we have the potential to become so much better, right? Because if we now take the time and effort that we would usually put into the task itself, we could improve on all levels. And that is the potential I’m talking about. I think a lot is to be advanced, and a lot is to be gained—if we play it right. Ross: And so, what’s on your personal research agenda now? Jennifer: Oh, I fell into this agentic LLM hole. Yeah, no, no—it’s not just looking at individual LLMs, but chaining and combining them into bigger, more complex systems to work on bigger and more complex issues—mostly creative problems—and seeing where the thinking of me and the tool excels, basically, right? And where do I, as a human, have to step in to fine-tune specific bits and pieces, and really find the limits of this technology if you scale it up? That’s my agenda right now. Ross: I’m very much looking forward to reading the research as you publish it. Jennifer: Thank you. Ross: Is there anywhere people can go to find out more about your work? Jennifer: Yeah, I collect everything on jenniferhaase.com. That’s my web page. It’s kept up to date, and you can find talks and papers there. Ross: Fabulous. Love the work you’re doing. Jennifer, thanks so much for being on the show and sharing. Jennifer: Thank you very much. It was—yeah, I love to talk about this, so thanks for inviting me.
Mar 26, 2025

Pat Pataranutaporn on human flourishing with AI, augmenting reasoning, enhancing motivation, and benchmarking human-AI interaction (AC Ep82)

“We should not make technology so that we can be stupid. We should make technology so we can be even smarter… not just make the machine more intelligent, but enhance the overall intelligence—especially human intelligence.” –Pat Pataranutaporn About Pat Pataranutaporn Pat Pataranutaporn is Co-Director of MIT Media Lab’s new Advancing Humans with AI (AHA) research program, alongside Pattie Maes. In addition to extensive academic publications, his research has been featured in Scientific American, MIT Tech Review, Washington Post, Wall Street Journal, and other leading publications. His work has been named in TIME’s “Best Inventions” lists and Fast Company’s “World Changing Ideas.” Websites: MIT Media Lab Advancing Humans with AI (AHA) LinkedIn Profile: Pat Pataranutaporn What you will learn Reimagining AI as a tool for human flourishing Exploring the Future You project and long-term thinking Boosting motivation through personalized AI learning Enhancing critical thinking with question-based AI prompts Designing agents that collaborate, not dominate Preventing collective intelligence from becoming uniform Launching AHA to measure AI’s real impact on people Episode Resources People Hal Hershfield Pattie Maes Elon Musk Organizations & Institutions MIT Media Lab KBTG ACM SIGCHI Center for Collective Intelligence Technical Terms & Concepts Human flourishing Human-AI interaction Digital twin Augmented reasoning Multi-agent systems Collective intelligence AI bias Socratic questioning Cognitive load Human general intelligence (HGI) Artificial general intelligence (AGI) Transcript Ross Dawson: Pat, it is wonderful to have you on the show. Pat Pataranutaporn: Thank you so much. It’s awesome to be here. Thanks for having me. Ross: There’s so much to dive into, but as a starting point: you focus on human flourishing with AI. So what does that mean? Paint the big picture of AI and how it can help us to flourish as who we are, in our humanity. Pat: Yeah, that’s a great question. So I’m a researcher at MIT Media Lab. I’ve been working on human-AI interaction since before it was cool—before ChatGPT took off, right? So we have been asking this question for a long time: when we focus on artificial intelligence, what does it mean for people? What does it mean for humanity? I think today, a lot of the conversation is about how we can make models better, how we can make technology smarter and smarter. But does that mean that we can be stupid? Does it mean that we can just let the machine be the smart one and let it take over? That is not the vision that we have at MIT. We believe that technology should make humans better. So I think the idea of human flourishing is an umbrella term that we use to describe different areas where we think AI could enhance the human experience. For me in particular, I focus on three areas: how AI can enhance human wisdom, wonder, and well-being. So: 3 W’s—wisdom, wonder, and well-being. We work on many projects that look into these areas. For example, how AI could allow a person to talk to their future self, so that they can think in the longer term and see that future more vividly. That’s about enhancing wonder and wisdom. We think a lot about how AI can help people think more critically and analyze the information they encounter on a daily basis in a more comprehensive way. And for well-being, we have many projects that look at how AI can improve human mental health, positive thinking, and things like that.
But in the end, we also focus on AI that doesn’t lead to human flourishing, to balance it out. We study in what contexts human-AI interaction leads to negative outcomes—like people becoming lonelier, or experiencing negative outcomes such as false memories, misinformation, and things like that. As scientists, we’re not overly optimistic or pessimistic. We’re trying to understand what’s going on and how we can design a better future for everyone. That’s what we’re trying to focus on. Yeah? Ross: Fabulous. And as you say, there are many, many different projects and domains of research which you’re delving into. So I’d like to start to dive into some of those. One that you mentioned was the Future You project. So I’d love to hear about what that is, how you created it, and what the impact was of people being able to interact with their future selves. Pat: Totally. So, I mean, as I said, right, the idea of human flourishing is really exciting for us. And in order to flourish, like, you cannot think short term. You need to think long term and be able to imagine: how would you get there, right? So as a kid, I was interested in the idea of a time machine. Like, I loved dinosaurs. I wanted to go back into the past and also go into the future—see what would happen in the future, like the exciting future we might have. So I really love this idea of having a time machine. And of course, we cannot build a real time machine yet, but we can make a simulation of a time machine that uses a person’s personal data and can extrapolate that, and use other data, to kind of see: okay, if the person has this current behavior, these things that they care about, what would happen down the road? So we built an AI simulation that is a digital twin of a person. And we first ask people to provide us with some basic information: their aspirations, things that they want to achieve in the future. And then we use their current behavior to create what we call a synthetic memory—a memory that that person might have in the future, right? So normally, memory is something that you already experienced. But in this case, because we want to simulate the future self, we need to build memory that you have not experienced yet but might actually experience in the future. So we use a language model, combined with the information that the person gives us, to create this sort of intermediary representation of the person’s experience, and then feed that into a model that allows us to create human-like conversation. And then we also age the image of the person. So when the person uploads an image, we use a visual model that can create an older representation of that person. Combining these together, we create an AI-simulated future self that people can have a conversation with. So we have been working with psychologists—Professor Hal Hershfield from UCLA—who looks at the concept of future self-continuity, which is a psychological concept that measures how well a person can vividly imagine their future self. And he has shown that if you can increase this future self-continuity, people tend to have better mental health, better financial savings, better decisions, because they can think for the long term, right? So we did this experiment where we created this future self system and then tested it with people, and compared it with a regular chatbot and with having no intervention at all.
And we have shown that this future self intervention can increase future self-continuity and also reduce people’s anxiety. So they become much more future thinkers—not only thinking about today’s situation, but able to see the possibility of the future, with better mental health overall. So I think this is really exciting for us, because we built a new type of system, but also really showed that it had a positive impact in the real world. Ross: What were the ranges of ages of people who were involved in this research? Pat: Yeah, so right now, the prototype that we developed is for a younger population—people that just finished college or just finished high school, people that still need to think about what their future might look like, people that would still benefit from having the ability to think in the longer term. And right now, we actually have a public demo that everyone can use. So people can go to our website and start to use it. You can also volunteer your data for research as well. So this is sort of an in-the-wild, or real-world, study. That’s what we are doing right now. So if people would like to volunteer their data, then we can also use it to do future research on this topic. Right now, the system has been used by people in over 190 countries, and we are really excited for this research to be out in the real world and have people using it. Ross: Fabulous. We’ll have the link in the show notes. So, one of the other interesting aspects raised across your research is the potential positive impact of AI on motivation. I think that’s a really interesting point. Because, classically, if you think about the future of education, AI can provide custom learning pathways and so on. But the role of the human teachers, of course, is to inspire and to motivate and to engage and so on. So I’d love to hear about how you’re using AI to develop people’s positive motivation. Pat: Yeah, that’s a really great question. And I totally agree with you that the role of the teacher is to inspire and create this sort of positive reinforcement or positive encouragement for the student, right? We are not trying to replace that. Our research is trying to see what kind of tools the teacher can use to improve student motivation, right? And I think today, a lot of people have been asking, like, well, we have AI that can do so many things—why do we need to learn, right? And we believe at MIT that learning is not just for the benefit of getting a job or having a good life; it’s good for personal growth, and it’s also a fun process, right? Learning something allows you to feel excited about your life—like, oh, you can now do this, even though AI can do it too. I mean, a car can also go from one place to another, but that doesn’t mean we should stop walking, right? Or you can go to a restaurant and a professional chef can cook for you, but it’s also a very fun thing to cook at home, right? With your loved ones or with your family, right? So I think learning is a really important process of being human, and AI could make that process even more interesting and even more personal, right? We place a lot of emphasis on the idea of personalized learning, which means that learning can be tailored to each individual. People are very different, right? We learn in different ways. We care about different things. And learning is also about connecting the dots—things that we already know and new things that we haven’t learned before.
How do we connect those dots better? So we have built many AI systems that try to address this. The first project we looked at asked what happens if we create virtual characters that can work with teachers to help students learn new materials. They can be a guest lecturer, or a virtual tutor that students can interact with in addition to their real teacher, right? And we showed that by creating characters based on people that students like and admire—like, at that time, I think people liked Elon Musk a lot (I don’t know about now; I think we would have a different story)—but at that time, Elon Musk was a hero to many people. So we showed that if you learn from a virtual Elon Musk, you have a higher level of learning motivation, and you want to learn more advanced material, compared to learning from a generic AI. So personalization, in this case, really helped with enhancing the feeling of a personalized experience, and also learning motivation and a positive learning experience. We have shown this across different educational measures. Another project we did looked at examples, right? When you learn things, you want examples to help you understand the concept, right? Sometimes concepts can be very abstract, but when you have examples, that’s when you can start to connect them with the real world. Here we showed that if we use AI to create examples that resonate with the student’s interests—like if they love Harry Potter, or, I don’t know, Kim Kardashian, or Minecraft, or whatever things people like these days, right? Well, I feel like an old person now, but yeah, things that people care about. If you create an example using elements that people care about, we can also make the lesson more accessible and exciting for people, right? So this is a way that AI could make learning more positive and more fun and engaging for students. Yeah. Ross: So one of the domains you’ve looked at is augmented reasoning. And I think it’s a particularly interesting point now. In the last six months or so, we’ve all talked about reasoning models with large language models—or perhaps “reasoning” in quotation marks. And there are also studies that have shown, in various guises, that people do seem to be reducing their cognitive engagement sometimes, whether they’re overusing LLMs or using them in the wrong ways. So I’d love to hear about your research into how we can use AI to augment reasoning as well as critical thinking capabilities. Pat: That’s a great question. I mean, that goes back to what I said, right? Like, what does it mean for humans to have smart models around us? Does it mean we can be stupid? I think that’s a degradation of humans, right? We should not make technology so that we can be stupid. We should make technology so we can be even smarter, right? So I think if the end goal is to have a machine or models that do the reasoning for us, rather than enhance our reasoning capability—I think that’s the wrong goal, right? And again, if you have the wrong outcome or the wrong measurement, you’re gonna get the wrong thing. So first of all, you need to align the goal in the right direction. That’s why, in my PhD research, I really wanted to focus on things that ultimately have a positive impact on people. AI models continue to advance, but sometimes humans don’t advance with the AI models, right? So in this case, reasoning is something that’s very, very critical. You can trace it back to the ancient Greeks.
Socrates talked a lot about the importance of questioning, of asking the right question, and of always using this critical thinking process—not trusting things at face value, right? We have been working on systems—again, the outcome of human-AI interaction can be influenced by both human behavior and AI behavior, right? So we can design AI systems that engage people in critical thinking rather than doing the critical thinking for them. That could be very dangerous, right? These systems right now don’t really have real reasoning capability. They’re doing simulated reasoning. And sometimes they get it right because, on the internet, people have already expressed reasoning and thinking processes. If you repeat that, you can get to the right answer. I mean, the internet is bigger than we imagined. I think that’s what the language models show us—that there’s always something on the internet that allows you to get to the right answer, and you have powerful models that can learn those patterns, right? So these models are doing simulated reasoning, which means they don’t have real understanding. Many people have shown that—even though these systems perform very well on benchmarks, in the real world they still fail, especially with things that are very unique and very critical, right? So in that case, the model, instead of doing the reasoning for us, could help us reason better by teaching us the critical thinking process. And there are many processes for that. Many schools of thought. We have looked at two processes. One of them is in a project called Wearable Reasoner. We made a wearable device—wearable smart glasses—with an AI agent that runs the process of verifying the statements that people listen to, and identifies and flags when a statement has no evidence to support it, right? This is really, really important—especially if you love political speeches, or you love watching advertisements or TikTok. Because right now, social media is filled with statements that sound so convincing but have no evidence whatsoever. So this type of system can help flag that. Because, as humans, we tend to follow along—if things sound reasonable, sound correct, sound persuasive, we tend to go with them. But just because something sounds persuasive or correct doesn’t mean it is correct, right? It can use all sorts of heuristics and fallacies to get you to fall into that trap. So our system—the AI—can follow along and flag that for us. We have shown that when people wear these glasses, when the AI helps them think through the statements they listen to, people tend to agree more with statements that are well-reasoned and have evidence to support them, right? So we can show that we can nudge people to pay more attention to the evidence in the information they encounter. That’s one project. In another project, we borrowed a technique from Socrates, the ancient Greek philosopher. We showed that if the AI doesn’t give the answer to people right away but rather asks a question back—it’s kind of counterintuitive, like, well, people need to arrive at that information for themselves—we showed that when the AI asked questions, it improved people’s ability to discern true information from false information better than the AI giving the correct answer. Some people might ask: why is that the case? And I think it’s because people already have the ability. Many of us already have the ability to discern information.
We are just being distracted by other things. So when the AI asks a question, it can help us focus on the things that matter—especially if the AI frames the information in a way that makes us think, right? For example, if there is a statement like, “Video games lead to people becoming more violent,” and the evidence is “a gamer slapped another last week”—if the AI starts to frame that into: “If one gamer slaps another person, does that mean that every gamer will become violent after playing video games?”—then you start to realize that, oh, there’s an overgeneralization here. You’re using the example of one to overgeneralize to everyone, right? If the AI frames the statement into a question like this, some people will be able to come up with the answer and discern it for themselves. And this not only allows them to reach the correct answer but also strengthens their reasoning process, right? It’s kind of like the AI scaffolding our critical thinking so that our critical thinking muscle can be strengthened, right? So I think this is a really important area of research. And there is much more research coming out that shows how we can design AI systems that enhance critical thinking rather than doing the critical thinking for us. Ross: So in a number of other domains, there’s been research which has shown that while in some contexts AI can produce superior cognition or better thinking abilities, when the AI is withdrawn, people revert back. So one of the things is not only using AI in the enhancement process, but also afterwards—so that when you don’t have the AI, you’re still able to apply that enhanced critical thinking. Has that been demonstrated, or is that something you would look at? Pat: Yeah, that’s a really important question. We haven’t looked at a study in that sort of domain—what happens when people stop using the AI, or when the AI is removed from people—but that’s something that is part of the research roadmap that we are developing. At MIT right now, there’s a new research effort called AHA. We want to create aha moments, but AHA also stands for Advancing Humans with AI. And the emphasis is on advancing humans, right? AI is the part that’s supposed to help humans advance. So the focus is on the humans. We have looked at different research areas. We’ve already been doing a lot of work in this, but we are creating this roadmap for what future AI researchers need to focus on—and this is part of it. This is the point that you just mentioned: looking at what happens when the AI is removed from the equation, or when people no longer have access to the technology. What happens to their cognitive processes and their skills? That is a really important part of our roadmap. And so, for the audience out there—this April 10 is when we are launching this AHA research program at MIT. We have a symposium that everyone can watch. It’s going to be streamed online on the MIT Media Lab website. You can go to aha.media.mit.edu and see this symposium. The theme of this symposium is: can we design AI for human flourishing? And we have great speakers from OpenAI and Microsoft. We have great thinkers like Geraldine, Tristan Harris, Sherry Turkle, Arianna Huffington, and many amazing people who are joining us to really ask this question.
We hope this kind of conversation will inspire the larger community of AI researchers and people in industry to ask the important question of AI for human flourishing—not just AI for AI’s sake, or for technological advancement’s sake.

Ross: Yeah, I’ve just looked at the agenda and the speakers—it’s mind-boggling. It looks like an extraordinary conference, and I’m very much looking forward to seeing the impact it has. So one of the other things I’m very interested in is the intersection of agents—AI agents, multi-agent systems—and collective intelligence. As I often say, and as you very much manifest in your work, this is not about multi-agent systems as just a stack of different AI agents. It’s saying: there are human agents and there are AI agents—so how can you pull these together to get a collective intelligence that manifests the best of both? A group of people and AI working together. I’d love to hear about your directions and research in that space.

Pat: Yeah, there is a lot of work we are doing there. In fact, my PhD advisor, Professor Pattie Maes, is credited as one of the pioneers of software agents. She is actually receiving the Lifetime Achievement Award from ACM SIGCHI, the special interest group on human-computer interaction, in a couple of months. So it’s awesome and amazing that she’s being recognized as a pioneer of this field.

But the question of agents, I think, is really interesting, because right now the terminology is very broad. AI is a broad term. AGI is an even broader term. And “agent”—I don’t know what the definition is, right? Some people argue it’s a type of system that can take action on behalf of the user, so the user doesn’t need to supervise it—meaning it does things autonomously. But there are different degrees of autonomy: things that may require human approval, and things that can act entirely on their own. And it can be in the physical world, the digital world, or in between. So the definition of agent is pretty broad. But again, it comes back to the question of the human experience of interacting with this agent: are we losing our agency or our sense of ownership? We have many projects that investigate that.

For example, in one project we designed new form factors, or new interaction paradigms, for interacting with agents. This is a project we worked on with KBTG, one of the largest banks in Asia, where we’re trying to help people with financial decisions. If you ask a chatbot, you need to pass a lot of information back and forth—a bank statement, your savings, all these accounts. A chatbot is not the right modality. Instead, you could have an AI agent that interacts with people inside the task—say, planning your financial spending or investments. The AI could be another hand, another pointer on screen. You have your pointer, but the AI can have another pointer, and you can talk to that pointer and feel like there are two agents interacting with one another. And we showed that—using the same exact model—just changing the way information flows and is visualized to the user, and the way the user interacts with the agent, makes a real difference. Rather than going from one screen to the chatbot, typing something, and then going back, the agent now has access to what the user is doing in real time.
And because it’s another pointer, it can point to and highlight things that are important at the moment, to help steer the user toward things that are critical or that they should pay attention to. We showed that this type of interaction reduces cognitive load and makes people actually enjoy the process more. So the idea of an agent is not just a system by itself—it’s also the interaction between human and agent, and how we can design it so that it feels like a positive collaboration rather than a delegation in which people feel they are losing agency and autonomy. So I think this is a really, really important question that we need to investigate.
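To make the "second pointer" idea concrete: one way to picture the interaction pattern is as an event stream, where the agent observes the user's on-screen actions in real time and responds with pointing or highlighting commands instead of waiting for text pasted into a chat box. This is a minimal sketch under assumed type and element names; the actual KBTG system's architecture is not specified in this conversation.

```python
# Sketch of the "second pointer" pattern: the agent sees the user's
# on-screen events as they happen and replies with highlight commands.
# All types, names, and the toy policy are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class UIEvent:
    element_id: str   # e.g. "fee_row_3" or "savings_chart"
    action: str       # e.g. "hover", "edit", "scroll"

@dataclass
class HighlightCommand:
    element_id: str
    note: str         # short explanation shown or spoken to the user

@dataclass
class PointerAgent:
    context: list = field(default_factory=list)

    def observe(self, event: UIEvent) -> HighlightCommand | None:
        """Accumulate shared context and decide whether to point at something."""
        self.context.append(event)
        # Placeholder policy: a real system would query an LLM with the
        # accumulated events plus the financial data visible on screen.
        if event.element_id == "fee_row_3" and event.action == "hover":
            return HighlightCommand("fee_row_3", "This fee compounds monthly.")
        return None

agent = PointerAgent()
command = agent.observe(UIEvent("fee_row_3", "hover"))
if command:
    print(f"Agent points at {command.element_id}: {command.note}")
```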
Ross: Well, the thing is, it is essentially a relationship of trust between you and it. There’s the nature of the interface between the human and the agent: the human is trusting the agent to act on their behalf, to do things well, to represent them well, to make sure nothing is missed. And that requires a rich—in a way, emotional—interface between the two. I think that’s a key part as we move into multi-agent systems, where you have multiple agents, each with defined roles or capabilities, interacting. And of course, MIT also has a Center for Collective Intelligence. I’d love to hear what the intersections between your work and the Center for Collective Intelligence might be.

Pat: Well, one thing both of our research groups focus on is the idea of intelligence not as something that lives inside technologies, but as something that happens collectively—at the societal level, the group level. I think that should be the ultimate goal of whatever we do, right? The point is not just to make the machine more intelligent, but to enhance the overall intelligence. And the question is also: how do we diversify human intelligence? You can be intelligent in a narrow area, but real-world problems are very complex, and you don’t want everyone to think in the same way. There are studies showing that, at the individual level, AI can make people’s essays better. But if you look across different essays written by people assisted by AI, they start to look the same—an individual gain, but a collective loss. And I think that’s a big problem, because now everyone is thinking in the same way. Maybe everyone is a little bit better, but if they’re all the same, we have no diverse solutions to the bigger problems.

So one project we looked into is how to use an AI that has the opposite values to the person, to help people think more diversely. If you like one thing, the AI could like the other thing, and the idea lands somewhere in between. Or, if you are very deep into one thing, the AI could represent a broader type of intelligence that pulls you out of that depth. Or, if you are very broad, maybe the AI goes deep in one direction—complementing your intelligence, in a way. We have shown that this type of AI system can drive a collaboration in a direction that is very diverse—very different from the user. At the same time, if you have an AI that is similar to the person—same values, same type of intelligence—it can make them go even deeper, in the sense that if you have a bias toward a certain topic, and the AI has the same bias, it can push that even further. So again, it’s really about the interaction: what type of intelligence do we want people to interact with, and what are the outcomes we care about, whether individual or collective? These are design choices that need to be studied and evaluated empirically.
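A minimal sketch of how an "opposite values" collaborator could be prompted, reusing the hypothetical `llm` helper from the earlier sketch; the actual study's prompting and experimental conditions are not given in the conversation, so treat this purely as an illustration of the idea.

```python
# Sketch of a complementary brainstorming agent: the model is prompted
# to pull in whichever direction the user is not going. llm() is a
# hypothetical stand-in for a chat-completion call, as before.

def llm(prompt: str) -> str:
    raise NotImplementedError("wire up a real chat-completion client here")

def complementary_reply(user_ideas: list[str]) -> str:
    """Ask the model to diversify or deepen, rather than agree."""
    joined = "\n".join(f"- {idea}" for idea in user_ideas)
    return llm(
        "Here are a user's ideas so far:\n"
        f"{joined}\n\n"
        "If these ideas cluster around one theme, propose an idea from a "
        "different domain. If they are scattered, pick one and go deeper. "
        "Do not simply agree with or rephrase the user."
    )
```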
Ross: That’s fantastic. I have a very deep belief in human uniqueness—I think we’re all far more unique than almost anybody realizes, and society basically makes us more the same. AI is perhaps an even stronger force than society in pulling us toward sameness. But to your point: I may have a unique way of thinking, or unique perspectives—and you’re talking about things where we can actually draw out, amplify, and augment what is most unique and individual about each of us.

Pat: Right, totally. The former CEO of Google said at one point: why would a person want to talk to another person when you can talk to an AI that is like a hundred thousand or a million people at the same time? But I feel like that’s boring, because the AI could take on any direction—it doesn’t have an opinion of its own. A human is limited to their own life experience up to that point, and that is what gives us a unique perspective. When something is everything, everywhere, all at once, it’s generic and has no perspective of its own. Each individual person—the things they’re living through, the things that influenced their life, the things they grew up with—has a story that makes them unique. To me, that is more interesting, and it’s what we should preserve, rather than averaging everything out. So for me, this is the thing we should amplify. And again, I talk a lot about human-AI interaction because I feel the interaction is the key—not just the model’s capability, but how it interacts with people: what features, what modality it uses to communicate. This question of interaction is so interdisciplinary—you need to learn a lot about human behavior, psychology, AI engineering, system design, and all of that. So I think it’s the most exciting field to be in.

Ross: Yeah, it’s fantastic. So in the years to come, what do you find most exciting about what the Advancing Humans with AI group could do?

Pat: Well, there are many big ideas, or aha moments, that we want to create—definitely. We actually have an exciting project being announced tomorrow with one of the largest AI organizations in the world, so please watch out for that: new, exciting research in that direction, happening at scale. The big project launches tomorrow, March 21, so by the time you hear this it should be out. One thing we are working on is collaborating with many organizations to make them think not just about AGI, but about HGI: Human General Intelligence. What will happen to human general intelligence? We want people to flourish—not machines. We want to steer many of the organizations, many of the AI companies, into thinking this way. And in order to do that, we first need a new type of benchmark.

We have a lot of benchmarks on AI capabilities, but we don’t have benchmarks on what happens to people after they use the AI. So we need new benchmarks that can really show whether the AI makes people depressed, or whether it empowers them and enhances these human qualities, these human experiences. We need to design new ways to measure that while people are using the AI.

Second, we need to create an observatory that allows us to observe how people are evolving—or co-evolving—with AI around the world, because AI affects different groups of people differently. We had a study showing—and this is kind of funny—that people talk about AI bias toward certain genders, ethnicities, and so on, but if you remove all those factors, the AI will still show a bias based just on a person’s name—or even just the last name. If you have a famous last name, like Trump or Musk, the AI tends to favor you more than people with a generic or common last name. That is kind of crazy to me, because you can strip out all the demographic information we say causes bias, and a person’s name alone can still lead to bias. So we know AI affects people differently. We need to design this kind of observatory and deploy it around the world to measure the impact of AI on people over time—and whether it leads to human flourishing or makes things worse. We don’t have empirical evidence for that right now. People are in two camps: an optimistic camp saying AI is going to bring prosperity, so we don’t need to worry or regulate; and another camp saying AI is going to be the worst thing—existential risk, human extinction—so we need to regulate it and stop it. But we don’t have real scientific, empirical evidence on humans at scale. So that’s another thing the Advancing Humans with AI effort at MIT is going to do: we’re going to try to establish this observatory so that we can inform people with scientific evidence.

And finally, what I think is the most exciting thing: right now there are so many papers published on AI—more than any human can read, maybe more than any AI can be trained on, because every minute a new paper is published—and people don’t know what is going on. Maybe they know a little about their own area, or a few papers become very famous. So we want to design an Atlas of Human-AI Interaction—a new type of AI for science that pieces together the research papers as they come out, so that we have a comprehensive view of what is being researched, and of what we are over-researching right now. We have a preliminary version of this Atlas, and it showed that people currently do a lot of research on trust and explanation, but much less on other aspects, like loneliness—for example, very little research has gone into whether AI chatbots might make people lonely. We have this engine that’s always running: when new papers are published, the knowledge is placed into a knowledge tree, so we can see which areas are growing and which are not, every day, and watch this evolve as the research field evolves. Then I think we will have a better comprehension of when AI leads to human flourishing—and when it doesn’t—and see what is being researched and developed in real time. So these are the three moonshot ideas we care about right now at the MIT Media Lab.
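The name study Pat mentions suggests a simple probe design: hold everything in the prompt constant, vary only the surname, and compare how favorable the model's responses are. Here is a rough sketch, with `llm` and `score_favorability` as hypothetical stand-ins rather than the study's real method.

```python
# Sketch of a name-substitution bias probe. The template, surnames,
# llm(), and score_favorability() are all illustrative assumptions.

TEMPLATE = ("A loan applicant named Alex {surname} asks for advice. "
            "How promising is their application?")

SURNAMES = ["Musk", "Trump", "Smith", "Garcia", "Nguyen"]

def llm(prompt: str) -> str:
    raise NotImplementedError("wire up a real chat-completion client here")

def score_favorability(text: str) -> float:
    """Hypothetical: rate how favorable a response is, from 0.0 to 1.0."""
    raise NotImplementedError("e.g. a sentiment model or a human rubric")

def probe() -> None:
    # Everything except the surname is held constant, so any systematic
    # difference in scores is attributable to the name alone.
    for surname in SURNAMES:
        reply = llm(TEMPLATE.format(surname=surname))
        print(surname, score_favorability(reply))
```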
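And as one assumed approach to the Atlas idea: embed each new paper's abstract and cluster the embeddings, so that paper counts per cluster can be tracked over time. The library calls below are real (sentence-transformers and scikit-learn), but the pipeline itself is a guess at how such a knowledge tree might be built, not a description of the Media Lab's actual engine.

```python
# Sketch of a paper-clustering step for an "atlas" of research topics.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

abstracts = [
    "We study trust calibration in human-AI decision making.",
    "Explaining model outputs to end users: a field study.",
    "A chatbot companion and self-reported loneliness over six weeks.",
    # ...new papers appended as they are published
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(abstracts)

# Hierarchical clustering gives a tree-like grouping of research topics;
# counting papers per cluster by publication date shows which areas grow.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(embeddings)
for abstract, label in zip(abstracts, labels):
    print(label, abstract)
```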
Ross Dawson: Fantastic. I love your work—both yours and all of your colleagues’. This is so important. I’m very grateful for what you’re doing, and thanks so much for sharing your work on the Amplifying Cognition show.

Pat Pataranutaporn: Thank you so much. And I’m glad you are doing this show to help people think more about this idea of amplifying human cognition. I think that’s an important question and an important challenge for this century and the centuries to come. So thank you for having me. Bye.

The post Pat Pataranutaporn on human flourishing with AI, augmenting reasoning, enhancing motivation, and benchmarking human-AI interaction (AC Ep82) appeared first on Humans + AI.
