
Joel Pearson on putting human first, 5 rules for intuition, AI for mental imagery, and cognitive upsizing (AC Ep25)
Humans + AI
Applying change management to AI adoption
Joel explains change-management principles for organizations and the importance of national change plans.
“This is the first time, really, humanity’s had the possibility open up to create a new way of life, a new society—to create this utopia. And I really hope we get it right.”
–Joel Pearson
About Joel Pearson
Joel Pearson is Professor of Cognitive Neuroscience at the University of New South Wales, and founder and Director of Future Minds Lab, which conducts fundamental research and consulting in cognitive neuroscience. He is a frequent keynote speaker and the author of The Intuition Toolkit.
What you will learn
- How AI-driven change impacts society and the importance of preparing individuals and organizations for it
- Key principles from neuroscience and psychology for effective AI-specific change management
- The SMILE framework for when to trust intuition versus AI recommendations
- Why designing AI to augment, not replace, human skills is essential for a thriving future
- How visual mental imagery and AI-generated visuals can support cognition and personal development
- The risks and opportunities of outsourcing thinking to AI, and strategies for maintaining critical thinking
- The role of metacognition and emotional self-awareness in utilizing AI effectively and ethically
- Emerging therapeutic and creative potentials of AI in personal transformation and human flourishing
Transcript
Ross Dawson: Joel, it is awesome to have you on the show.
Joel Pearson: My pleasure, Ross. Good to be here with you.
Ross: So we live in a world of pretty fast change where AI is a significant component of that, and you’re a neuroscientist, and I think with a few other layers to that as well. So what’s your perspective on how it is we are responding and could respond to this change engendered by AI?
Joel: Yeah, so that’s the big question at the moment that I think a lot of us are facing. There’s a lot of change coming down the pipeline, and I think it’s going to filter out and change, over a long enough timeline, a lot of things in a lot of people’s lives—every strata of society. And I don’t think we’re ready for that, one, and two, historically, humans are not great at change. People resist it, particularly when they don’t have control over it or don’t initiate it. They get scared of it.
So I do worry that we’re going to need a lot of help through some of these changes as a society, and that’s sort of what we’ve been trying to focus on. So if you buy into the AI idea that, yes, first the digital AI itself is going to take jobs, it’s going to change the way we live, then you have the second wave of humanoid robots coming down the pipeline, perhaps further job losses. And just, you know, we can go through all the kinds of changes that I think we’re going to see—from changes in how the economy works, how education works, what becomes the role of a university. In ten years, it’s going to be very different to what it is now, and just the quality of our life, how we structure our lives, what we have in our homes. All these things are going to change in ways that are, one, hard to predict, and two, the delta—the change through that—is going to be uncomfortable for people.
Ross: So we need to help people through that. So what’s involved? How do we help organizations through this?
Joel: We know a lot about change through the long tradition of corporate change management, even though it’s a corporate way to say it. But we do know that most companies go through this. When they want to change something, they get change management experts in and go through one of the many models on how to change these things, and most of them have certain things in common. Often they start with an education piece, or getting everyone on the same page—why is this happening, so people understand. You help people through the resistance to the change. You try things out. You socialize these changes to make them very normal—normalizing it. And we know that if you have two companies, let’s say, and one has help with the change and one doesn’t, there’s about a 600% increase in the success of that change when you help the company out. So if you apply that to AI change in a company or a family or a whole nation like Australia, the same logic should hold, right? If we want to go through a big national change—not immediately, but over a ten, fifteen, twenty-year period—then we are going to need change plans to help everyone through this, to help understand what’s happening, what the choices might be. And so that’s kind of the lens I look at the whole thing through—a change, an AI-specific change management kind of piece. Easier said than done.
We probably need government to step up there and start thinking about that. There are so many different scenarios. One would be, what happens in ten or fifteen years if we are looking at, you know, 50% unemployment? Then that’s a radical change to the spaces we live in, the cities, our lifestyles, and we can unpack that further. A lot of people think of universal basic income a bit like retirement—that once AI does your job and you have some other sort of backstop income, then you get to do nothing. And that worries me a lot, because we know that retirement is really bad for your health—not just mental health, but physical health. There’s a higher likelihood that you’ll get sick and die after you retire. And so we see this strange thing where people say they want to do nothing, but when they do nothing, it’s actually really bad for their health.
Ross: Yeah. At Humans + AI, I believe very much that AI is a complement to humans, not a replacement, if we design it effectively. And it’s really about designing well—at the level of individual skills, how organizations function, and at a societal level—how can we make it so that AI is not designed or enacted as a replacement for humans, but as a complement that augments us?
Whether that’s in our work activities now, where we are rewarded, or in whatever else we are working on. There’s some chance that we end up with more people who need support because they’re not rewarded for work. But really, it’s about asking: how can we design, as much as possible, the implementation and use of AI so that it augments and complements us, so that we expand and express our abilities and are rewarded for that?
Joel: Yeah, I’m with you 100%. I mean, I guess the problem is that we are not designing it. We are not making it. You know, a handful of companies and just a handful of countries are doing the designing and making, and they are needing more and more capital and resources. And it just worries me that their end goal is to pull some of those human jobs out of the economy, because they’ll need to find a way to recoup some of their capital investment. But we’ll see, maybe things will go a different direction. You know, it is hard to tell. We are seeing the numbers in graduate jobs dropping in the US at the moment, and we are seeing layoffs that are apparently linked to AI usage. But it’s hard to know, right? It really comes down to—
Ross: It’s about agency—human agency—as in, what can we do as individuals, as leaders, to maximize the chances of realizing that vision? For example, I’ve created a framework around how we redesign entry-level jobs—not as what they used to be, where they can be very readily substituted by AI, but in ways that accelerate the time to develop judgment, to contribute actively, to bring perspectives. So this is about how organizations reframe it, because if we continue to use the old models, then yes, things will change. So it really is about how we re-envisage that. And as the neuroscientist here, I’m interested in your perspectives on how we can think about or design AI as a complement to human cognition.
Joel: Yeah. So let me throw something else out early on, because I tend to get—yeah, so pick me up if I get too dark and gloomy or too negative, because I do think of myself as an AI optimist. I do think we are on the way to utopia. I just think we’re going to have some speed bumps on the way to getting there. And so I feel like what I’m trying to do with my mission now is to help on the human side of what’s going on, rather than trying to influence the tech companies—trying to get people ready.
And so the immediate thing is the uncertainty and all the changes coming down the pipeline, like I just said. And so when it comes to absolutely redesigning the tech itself, there are lots of centers—Tristan Harris’s Center for Humane Technology is working on that and trying to influence through sometimes lawsuits, legal means, other times trying to get more of a human-centered design aspect into these companies. And I think most of the companies have a pretty—you know, that’s what they want as well. They are trying to make human-centered, human-focused products and services. I think it’s just sometimes they’re racing so quickly that that gets relegated to the back burner, a little bit behind other things. So yeah, we need to put humans first, both in the design of the products, but also we need to educate and help people on the people side—understand what’s happening and help them deal with the uncertainty that is around in the environment at the moment, and give them the psychological toolkits to help deal with this change, whatever level it’s on and whatever part of society it’s happening in. So yeah, starting at the tech side, then I think we need neuroscience and psychologists inside—as many as possible—inside all these tech companies, working closely with the engineers to plug in what we know already: all the deep psychological theories, the way the brain works. You know, how not to make these things addictive, even though that could be very tempting from a financial point of view. Long term, that’s not a good strategy. So these kinds of things, you know.
Ross: Looking at your work—you have a book on intuition, The Intuition Toolkit, and intuition is becoming particularly pointed. We have, of course, the wonderful work of Herbert Simon and many others over the years who have examined the nature of this: we know more than we can tell; we have accumulated experience that can be expressed in effective decisions, even when we can’t articulate why we believe something or why we think one choice will work better. That becomes particularly pointed now that we have AI, which has vastly more data—humans have a lot of data as well, but AI has extensive data and some effective ways of processing it to make decisions, make recommendations, or participate in the decision process. So now the question is, how do we know when human intuition is valid enough to override or complement the AI, as opposed to just deferring to the AI and saying, “Oh, it does better than we do”? How do we combine human intuition with AI capabilities?
Joel: Absolutely. The first thing is that, yeah, intuition is a real thing. So my definition is pretty technical: it’s a learned, productive use of unconscious information for better decisions or actions. And it’s not everyone’s cup of tea. You know, people have different definitions of intuition—sometimes spiritual, sometimes magical. But I set out with this definition about ten years ago in the lab to try and build a science around intuition in a different way than had been done before. And we developed a new way to create intuition, measure it in the lab, show it’s a real thing, show how we can learn unconsciously and then utilize unconscious information to improve decision making, improve confidence, improve reaction time, all these kinds of things. And over time, we’ve pulled out these five rules for when you should trust intuition or when you shouldn’t. And that’s what the second half of my book is about—these five rules. And I have the acronym SMILE so people can remember these rules. So very quickly, I’ll just touch on them. The first S is self-awareness around emotions—this idea that if you’re highly emotional, positive or negative, then you shouldn’t trust your intuition. You shouldn’t use it, because these subtle feelings we have in our gut, chest, or palms—that’s how we pick up on this intuitive feeling. If you’re emotional, anxious, or you just won the lottery, or are falling in love, those strong emotions will flood these more subtle, intuitive emotions, and you don’t want to confuse those two things or get mixed up. So it’s better just to wait for your physiology to calm back down again and then trust your intuition. Next is M for mastery, and that’s really this idea that your brain has to learn the links between things in the environment and positive or negative outcomes. So the idea of intuition is a learned thing. It’s not some innate thing we’re born with. We have to learn the relationship, so it’s dynamic. If you want to be an intuitive chess player, you can’t just sit down and use your intuition with the chess pieces. Your brain has to learn all the different pattern recognition things and what the probable outcomes are. So you need to let your brain learn that, and it can learn that unconsciously.
So you need to put in the time for learning. Next is I for instincts, and I also squeeze in a couple other things there about addictions. It’s really about not mixing up the feelings we have—the cravings around addictive things, social media, drugs, and alcohol. So it’s substance and behavioral addiction. Not to confuse the craving we have for those things with actual intuition. Then L for low probability, but it really applies to all probabilistic thinking. There are hundreds of thousands of psychology papers on this topic and how we get led astray if we try and rely on our intuition or heuristics for making decisions about numbers or low probability events. We just don’t experience them in the same way. So the rule there is not to use your intuition for these low probability events or any probabilistic thinking. If you’re in a casino or you’re swimming in the ocean and you start thinking about sharks, your emotions are going to take over, even though it’s a very rare event to even see a shark. Simply thinking about it is going to drive that strong emotional response. And the final one is E for environmental context, and that’s really back to that mastery learning piece. When we learn things, our brain imprints the environment around us. So if you’re in the office at work and you learn new things there, it literally is imprinting that location with that learning, and it gets attached in the brain. So when you change location, change context, that learning—that intuition in this case—won’t apply as well. So you just have to be careful when we’re changing locations, when we’re traveling, because our intuition won’t work in the same way and it won’t be as good. So those are, very briefly, the five rules. And so the idea is to practice intuition following these five rules. That’s the best way we can come up with for optimizing intuition so it is trustworthy and reliable.
Ross: So let’s say you’ve got an executive experienced in their industry, and they have some kind of bet-the-company decision—maybe a major acquisition, for example. The AI makes one recommendation and lays out its logic, and the executive has this feeling in his or her gut and literally says, “This doesn’t feel right.” So what should they do?
Joel: Well, first up, go through those five rules, right? If they’re stressed about something else, or if they’re—go through a checklist and make sure they’re not falling for one of these other things. Do they have experience in the topic? If it’s something brand new they have no experience with, then their intuition could be leading them astray. Are they in a familiar context, familiar topic, all these kinds of things? Have they slept the night before—all these more basic things. So I go through a checklist like that first. Then if those things are all met, then it really comes down to the track record of the AI—what the AI is using for its information. The interesting thing about intuition is that you’re going to be combining conscious information that you know you have, but also unconscious information that you don’t necessarily know you have, but you know if you’ve had exposure to those things—hence the learning and mastery in the context and stuff. So I would try and figure out whether to trust your intuition or trust the AI given those two things. Now, the other thing is, if it’s time-limited—you’ve got to make a decision in 15 or 20 minutes—that also changes things. I would say, for time-limited things, go more with the intuition or gut response. If you have plenty of time to unpack and rationally go through everything, you probably don’t need to use intuition as much. So the scenario is important as well.
Unpack what the AI has been trained on—is it all trained on things in the past? Interact with it, talk to it, get it to explain its logic. What information is it basing its decision on? Where does it come from? And just tell it, “Oh, my gut’s telling me this,” and see how it reacts. That’s the other thing with AI—you want to interact and go back and forth with it, not just get a single answer and leave it at that. So that’s kind of where I wouldn’t want to give too much more advice. Generally speaking, I think it would be case by case, but make sure those rules are met—the biological rules, the SMILE rules—try and understand where the AI is coming from, what it’s using to make this decision. Get it to tell you that, and then try and get those two things to meet and understand what the difference is, or what the discrepancy is. And if there’s plenty of time, then I would even say maybe lean towards the AI. If there’s no time, I would say probably lean more towards the intuition—biology.
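To make the checklist Joel describes a little more concrete, here is a minimal sketch in Python. The field names, the all-or-nothing logic, and the time-pressure heuristic are simplifying assumptions made for illustration; this is not Joel's published framework, just one way the SMILE rules and the time-limited scenario he mentions could be laid out.

```python
# A hypothetical sketch of the SMILE checklist plus the time-pressure heuristic
# described above. Field names and the simple pass/fail logic are assumptions.

from dataclasses import dataclass

@dataclass
class DecisionContext:
    calm: bool              # S: self-awareness — not flooded by strong emotion
    domain_mastery: bool    # M: mastery — real experience in this domain
    craving_free: bool      # I: instincts — not driven by an addictive craving
    probabilistic: bool     # L: low probability — is this a numbers/odds problem?
    familiar_context: bool  # E: environment — same context the learning happened in
    time_limited: bool      # e.g. a decision needed in the next 15–20 minutes

def weigh_intuition(ctx: DecisionContext) -> str:
    smile_ok = (ctx.calm and ctx.domain_mastery and ctx.craving_free
                and not ctx.probabilistic and ctx.familiar_context)
    if not smile_ok:
        return "Don't lean on gut feel here; interrogate the AI's reasoning instead."
    if ctx.time_limited:
        return "SMILE conditions met and time is short: lean on intuition."
    return "SMILE conditions met but time is available: unpack the AI's logic first."

# Example: experienced executive, calm, familiar domain, no hard deadline
print(weigh_intuition(DecisionContext(True, True, True, False, True, False)))
```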
Ross: Okay, fantastic. As a neuroscientist, a significant part of your work has been on visual mental imagery, and I think that’s really interesting in a number of ways. One is that large language models—and we too—are multimodal, but they are essentially language models. Humans think significantly in language, but many of us also think significantly in mental images—potentially in 3D space, in conceptual constructs. So in terms of how AI and humans can be complements, what are your thoughts on the role of visual mental imagery?
Joel: Yeah, so let’s say about 5%, give or take, of the population seem to have aphantasia, which is a sort of lack of capacity to visualize. So they try to imagine what an apple looks like—they don’t have a conscious experience of the apple. They just experience black on black. They do tend to have spatial locations, so they can imagine things behind them, the neighborhood layout, just no visual objects or no objects in those spatial locations.
So we’ve had many conversations and talked about using some kind of AI vision model or diffusion model as an augmented version of creating mental images on the fly. And a lot of people with aphantasia like this idea—the tech’s not quite there yet, but you can kind of see where it could go. There’s augmenting—if you’re reading a novel, or you want to imagine scenarios, or you’re trying to create a new product or something—you could utilize the new versions of AI, which could create these images on the fly for you, render them, make them interactive, and sort of augment your style of thinking. So if you can’t think in pictures, then you can outsource that to an AI. And people seem to like that idea. You could say that designers already do that to some degree with CAD systems and 3D models to try and understand how things can fit inside other things spatially. So it’s kind of an extension of that idea.
And it’s a nice sort of adjunct—you could add that on to an audiobook, for example, where you could have a system create images for the listener or the reader on the fly, which is another nice idea. So there’s some scope there. But then there are plenty of people with aphantasia who say they love the way they think. They don’t need images. They’re happy to go about their lives just thinking without pictures or sounds.
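As a rough illustration of the "images on the fly" idea Joel sketches here, the pipeline is essentially: take each passage of text, hand it to an image model, and show the rendered scene alongside the text. The snippet below is a hypothetical sketch only; render_scene() is a stand-in for whatever text-to-image or diffusion service would actually be used, not a real API.

```python
# Hypothetical sketch: render one image per passage so a reader with aphantasia
# gets generated scenes instead of relying on mental imagery.

from typing import List

def render_scene(passage: str) -> str:
    """Placeholder for a call to a text-to-image / diffusion model (assumed)."""
    return f"<image rendered for: {passage[:40]}...>"

def illustrate_chapter(paragraphs: List[str]) -> List[str]:
    # One image per paragraph keeps the visuals roughly in sync with the text.
    return [render_scene(p) for p in paragraphs]

chapter = [
    "The harbour was empty except for a single red fishing boat.",
    "By nightfall the storm had pushed the boat far out to sea.",
]
for image in illustrate_chapter(chapter):
    print(image)
```

The same loop could sit alongside an audiobook player, generating a scene per chapter or per paragraph as the narration plays, which is the adjunct Joel describes.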
Ross: One of the things people are using AI for is to generate images to storyboard their future—what they might be doing or how they might be living. There’s this wonderful book by Marty Seligman called Homo Prospectus, and he says, essentially, that what is most characteristic about humans is that we think about the future. And a lot of that thinking about the future is in mental images—this might happen, or this could be a disastrous conversation, or this is what I dream of, this is my fantasy. So what role do you see for AI in complementing, assisting, or affirming our mental images?
Joel: I think there’s something there, you know, and it’s interesting. I noticed this when I spent a solid few hours playing with Sora 2 when it came out, and creating all kinds of little films—little videos of me doing things I’d never done before: interviewing famous people, getting Academy Awards, playing in an orchestra, rock climbing El Cap near San Francisco. And a few things happened. After watching these things over and over and coming back to them, I did a double take. I was like, wait. And just for a moment, I thought, wait, did I do the thing? And I get these strange moments where I doubt my long-term memory, and just for a moment I thought, maybe that’s real. And then I go, no, what are you talking about? You didn’t interview Sam Altman. And so that’s interesting and a little bit scary in terms of long-term memory corruption and things like that. But what you’re getting at is the flip side—the therapeutic potential of that. If I’m getting over a phobia or wanting to achieve something, then seeing me do that over and over makes it feel very visceral and real. And I think there’s something in that. And I don’t know of anyone who’s exploring video self-generation like that in Sora 2 as a therapeutic means of either preparing for the future, preparing for giving a keynote or whatever, being in the Olympics, whatever it’s going to be, and getting used to that idea of seeing you win, or getting over a phobia. I mean, there’s lots of possible uses of this, because we’ve never had such a technology that could so easily make a video of yourself doing these things so quickly. So I do think there’s tremendous therapeutic potential with that technology.
And, yeah, I’ve been telling some of my colleagues who do clinical research, clinical therapy stuff, to start playing around and maybe design some studies using this, because I think there is something there.
Ross: Yeah, well, they’re certainly being used for phobias at the moment. But there was a great article in the New York Times a few months ago describing how people were using AI to storyboard their futures and so on, with various psychologists commenting on it.
Joel: Was it with videos, or just video or stills?
Ross: Videos, actually.
Joel: Videos are cool.
Ross: So that’s one thing. Another, related to that—you’ve also looked at visual mental imagery in the context of metacognition. And again, at Humans + AI we focus a lot on metacognition: how do we think about our own thinking? How do we think about AI’s thinking? How do we think about how they go together? So are there any ways in which we can use visual imagery to assist our metacognition?
Joel: Well, I mean, first up—when it comes to mental imagery, the metacognition does seem to be different from the actual image itself, and this is one of the issues. By far the most popular way of measuring mental imagery is a questionnaire called the Vividness of Visual Imagery Questionnaire, but the problem is that it assesses two things simultaneously: people’s metacognition and their actual imagery. Say you and I both imagine a sunset, and our mental images are exactly the same, but our metacognition differs—you decide to give it a four, and I decide to give it a one. Or our metacognition could be the same while the images differ. People can vary on those two different scales, which is a problem, so when you’re measuring mental imagery you need more objective, reliable ways to do it. We’ve spent well over a decade developing a range of ways of objectively measuring visual mental imagery in the lab. Does that tell us anything about AI?
Ross: Or assist us in our metacognition in the sense that we are interacting with AI? Enhancing our metacognition is valuable because it enables us to think better about our own thinking in conjunction with AI. So anything that can enhance metacognition is valuable in helping us use AI positively.
Joel: I mean, yeah. Certainly improving metacognition across the board, I think, is very valuable—whether it’s, you know, I talk to students about this because it’s a huge problem with students when they’re studying and learning. One student will study for five minutes and feel confident they’ve done enough, and another one will study for five days straight and still not feel confident they’ve done enough. And so it’s their metacognition of knowing how much they need to learn and knowing what they need to learn that’s very different, and that will also apply to AI and the skills around AI. Also, I mean, one area that comes to mind with this is anthropomorphizing. We have pretty poor metacognition—we can’t help but layer on these human characteristics onto anything that has any kind of behavior, really, but absolutely AI. I mean, these studies go back to the 50s, where you’d have an outline of a square and a triangle, and the square would bump into the triangle, and the triangle would move along. And almost everyone who watches that straight away goes, “Oh, the poor triangle, it’s being bullied by the square,” and it’s just a black and white outline. That’s it. You don’t even know what’s happening. And so anything that shows some behavior like that, we can’t help but add human characteristics and personalities on, and so absolutely it happens with AI. That’s one of the areas where I think having some metacognition and awareness of how much that happens and how quickly it happens could help people be more aware of how they interact with AI, how they treat AI around that. The other way I think that you could apply metacognition is around critical thinking and this idea. So, you know, I’m sure you’re aware of that MIT study—the outsourcing study—and it kind of kicked off this thing of AI is going to produce brain rot. And as people are outsourcing—and I often will talk about that—if you just outsource everything to a human or an AI and do nothing, treating it like the retirement piece, your brain will atrophy, right? Your brain will change. You’re going to lose the habit of digging in, thinking deeply, the cognitive effort that goes into critical thinking. And so you don’t just want to outsource. You want to fill that gap immediately. I tend to call this cognitive upsizing. So outsource as much as you can, but then fill that gap immediately. Don’t treat it like a holiday from work—just find different tasks, juicier, more emotional, more complex, more human things to do to fill that space.
Otherwise, you know, and I feel this as well—you spend a day outsourcing to AI, and then you find something you can’t outsource, and the effort feels much harder than usual. You can use the analogy of going to the gym—if I put an exoskeleton on and lift weights for a week and then take the exoskeleton off, then it’s going to feel really hard to lift those weights. And it’s similar with the brain—we lose the habit of the discomfort of having to think deeply. So understanding those dynamics, getting metacognition around those feelings, what it feels like, so you can recognize that.
The other one, I think, in terms of metacognition that applies to intuition, but also a lot of things around AI, is just the self-awareness of emotion. So that applies to when we should or shouldn’t use intuition, but it applies to a lot of things around AI and uncertainty and being triggered, and fear of job loss and all this. Some people are very sensitive and they know when they’re getting triggered, when they’re getting stressed or anxious. Other people really don’t have much idea until they’re bursting or shouting at someone. And so that self-awareness of emotion is a crucial part of emotional intelligence, which itself is a bigger construct.
And there are apps you can download to train that self-awareness, and a lot of them use this style of having to drill down and attach a very specific word to how you’re feeling at different moments of the day. Doing that over and over just makes you become more familiar with these sensations in your body, these feelings, and labeling them will improve this self-awareness of emotion. So there are a few things that come to mind.
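For what that "drill down to a specific word" style of training looks like in practice, here is a small illustrative sketch. The emotion taxonomy and the check-in flow are invented for illustration and are not taken from any particular app Joel has in mind.

```python
# Illustrative sketch of a drill-down emotion check-in (taxonomy is assumed).

EMOTIONS = {
    "anxious": ["uneasy", "worried", "overwhelmed", "dreading"],
    "angry":   ["irritated", "frustrated", "resentful", "furious"],
    "happy":   ["content", "excited", "proud", "grateful"],
    "sad":     ["disappointed", "lonely", "discouraged", "grieving"],
}

def check_in() -> str:
    """Ask for a broad feeling, then drill down to a more specific word."""
    broad = input(f"Broad feeling right now {list(EMOTIONS)}: ").strip().lower()
    options = EMOTIONS.get(broad, [])
    if not options:
        return broad  # accept a free-form label if it isn't in the taxonomy
    specific = input(f"More specifically {options}: ").strip().lower()
    return specific or broad

if __name__ == "__main__":
    # Repeated at different moments of the day, labeling like this is meant to
    # build familiarity with the bodily sensations behind each feeling.
    print(f"Logged: {check_in()}")
```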
Ross: Those are really useful. So to round out, on a positive note, what’s most exciting to you about the potential of AI?
Joel: Like I said, I’m a huge tech fan. I love this stuff. I go around giving these talks all over the place, and people think I’m this psychological doom-and-gloomer, but I’m so excited by it—whether it’s AI being creative, whether it’s the humanoids coming out of the pipeline. I mean, when you watch that Google documentary from when it beat Lee Sedol playing Go back in 2016, I think it was, and it’s—I’m going to say move 37. Can you remember, 32, 37?
Where you see the expression on his face, and he kind of just stares and freezes and smirks a little bit. This is the Korean player, Lee Sedol, and that moment that the Google AI just changes the game Go forever, and it just comes up with a different way of approaching the game, a different way of playing the game. And this speaks to metacognition as well—that just because, if you want to use the word “think,” AIs think or compute so differently to the human brain, that simply by that alone, they’re going to come up with radically different approaches to things, whether it be the game Go, curing rare diseases, climate change—you name it, there’s just so much potential there before you have to get sci-fi with superhuman intelligence. AI just has a different approach to the way our brains work—it doesn’t have the same biological constraints. It’s not primed in the same way. There are just so many differences that alone will mean that we’re going to get a lot of interesting discoveries from AI just due to that difference. So then you layer on top, as you crank up the superintelligence, I think we’re going to see a lot of amazing breakthroughs. So I’m hugely excited about that. I’m excited about the idea of what AI and sentience and possible AI consciousness can tell us about human consciousness, in the same way it’s making us think about what is intelligence. We’ve had these pretty narrow definitions of intelligence and IQ tests for a long time, and all of a sudden, AI is making us re-evaluate this idea of intelligence. Will it do the same for consciousness? I really hope so.
And then, yeah, having humanoid robots—seeing, you know, replicants in the Blade Runner sense that look and feel and sound human—is a little bit scary. But I think it’s really exciting just to see this almost different species come into our world, like aliens almost. I find that really exciting. I know that some people disagree, but that thrills me. So yeah, a lot about the AI revolution, the humanoid robotics revolution, does really excite me. And like I said, sure, there are going to be road bumps to get there, but this is the first time, really, humanity’s had the possibility open up to create a new way of life, a new society—to create this utopia. And I really hope we get it right. I think we can, if we do it consciously and effortfully.
I think we can. So all that excites me.
Ross: Fantastic. So where can people go to find out more about your work?
Joel: Look me up at profjoelpearson.com—that’s my main hub website. And through there, they can spin off and see the different things we’re working on, from the mental imagery, the how to get psychologically ready for AI disruption, agile science—that’s another project we work on—intuition, lots of different things.
Ross: Fantastic. Thanks so much for your time and your insights, Joel.
Joel: Pleasure. Thanks for having me.


