

Humans + AI
Ross Dawson
Exploring and unlocking the potential of AI for individuals, organizations, and humanity
Episodes
Mentioned books

Oct 25, 2023 • 38min
Minter Dial on organizational empathy, augmenting with AI, empathic curiosity, and connecting to reality (AC Ep16)
“The beauty of life is dealing with challenges, not pretending that it’s perfect.”
– Minter Dial
About Minter Dial
Minter Dial is a professional speaker on leadership and transformation and the award-winning author of four books, most recently Heartificial Empathy, now in its second edition. He hosts the Minter Dialogue podcast and is the author of the featured Substack Dialogos, Fostering More Meaningful Conversations. He previously held senior executive roles, including as CEO of Redken Worldwide.
Website: www.minterdial.com
White Paper: Making Empathy Count
Books: https://www.minterdial.com/books/
LinkedIn: Minter Dial
Facebook: Minter Dial
YouTube: @MinterDial
Twitter: @mdial
What you will learn
Developing empathy and emotional intelligence in leadership (03:16)
Distinguishing between sympathy and genuine affective empathy (04:21)
Understanding and practicing compassionate communication (06:44)
Addressing empathy burnout in the modern workplace (08:38)
Challenges in fostering organizational empathy (12:00)
Role of curiosity, humility, and self-awareness in empathy (13:37)
The impact of reading fiction on empathy development (14:20)
Emphasizing the influence of one’s perspective on AI utilization (20:27)
Exploring empathic AI solutions while maintaining authenticity and consistency in customer service (23:47)
Clarifying intentions and ambitions before implementing AI solutions (29:33)
Exploring the connection between societal disconnection and AI development and perception (32:26)
Episode Resources
Resources
Digital Genius
Replika
Character.ai
Books
Heartificial Empathy, 2nd Edition: Putting Heart into Business and Artificial Intelligence by Minter Dial
We by Yevgeny Zamyatin
Transcript
Ross Dawson: Minter, it’s a delight to be talking to you.
Minter Dial: Ross, it’s always fun to chat with you. I’ve enjoyed following your work, reading about it, and having you on my podcast. Thanks for having me on.
Ross: You have worked with leaders in all guises for many years now. Leadership encompasses cognition, the whole apparatus of making sense of the world so as to act effectively in it. It's a very big topic, but what are some of the ways in which we, as leaders, can enhance our cognition, or help leaders enhance the breadth and scope of their ability to think and act?
Minter: Ross, it’s an interesting way to go into this topic by referencing empathy, which is a strong or very important skill that leaders of today and tomorrow need to have. Typically, we divide empathy into two different types. One is cognitive and the other is affective or emotional. To be a little bit out of left field, one of the things that leaders could do to improve their cognition would be to have better self-awareness and a higher emotional quotient. In other words, better understanding of their emotions, better acceptance of them, and eventually, a better showing of them. That’s where I’d like to start.
Ross: Let’s dig into the theme of empathy. How do you define that? Let’s bring this idea of empathy to life.
Minter: Essentially, there are many different schools of thought as to what empathy is, but broadly speaking, it’s about being in someone else’s shoes. More specifically, it’s about understanding someone else’s feelings, thoughts, and experiences. If you break that down, that means that I can understand what you’re thinking, I can understand what you’re feeling, I can understand your context experiences.
There’s a second piece of it, which is affective empathy, which is actually I feel your feelings, which takes it to another level. If you’re sad, I feel sad, I don’t feel sad for you, which is sympathy, I feel your sadness. In the way I approach empathy, I believe, it’s much more reasonable to think that you can learn cognitive empathy, but much harder to imagine learning or improving your affective empathy because if you don’t feel stuff, I can’t make you do it. On the other hand, in the case of cognitive understanding, open questions, thoughtfulness, observation, and taking time, are things that you can control, if you wish.
Ross: A lot of this happens in the creation of a prosperous workplace, but just to push to an edge case, if you have to lay off a bunch of people, does that mean you have to cut off your empathy? Because if you’re feeling the pain of many people, that’s a massive burden. Maybe you should be feeling that burden, but how does one manage in this kind of an example, when there’s no other path for an organization to survive, to be able to cause that kind of pain?
Minter: Whether or not it's the only path, it is the one you've decided on. The reality is that empathy isn't about being nice, which is one of the big misconceptions people have. Empathy is about understanding someone else's thoughts, feelings, and experiences. I'm going to get back to the emotional side of it in a moment, but let's say that I have to deliver some bad news to you, Ross. For example, I might have to cut your salary, or maybe I have to demote you or move you somewhere you would rather not go. If I understand your context and the impact it will have on you, not just at work but maybe in society, with your family, then I might arrange the way I express it more suitably: "Hey, Ross, this is going to be hard. Please, do you have a moment? Take a seat. This is going to be some bad news for you. I know it's going to be bad news because of your situation."
By trying to do that, by showing that you’re considerate about the situation, it obviously doesn’t take away the pain of the final mandate, which is, “I’m firing you”, however, what I’m going to do is, because I know your situation, I’m also going to think about having thought about that, what impact it’s going to have on you. Considering your situation, Ross, and this bad news that I have to give you, here’s what I’m going to suggest, or here’s how we can position it. For example, with regard to your family, you will keep your title for another six months while you’re searching, or I don’t know. There are different ways to land the news in a way if you can be considerate about the person’s situation.
I just want to get back to the affective side. Because surely, feeling everybody’s pain is difficult. There is a pathology called being an empath, where you are constantly, totally sensitive to everyone’s feelings all the time. That is hugely draining, it’s a real problem. It can make you unable to make any decisions or act because you’re fretful about making someone unhappy. That said, in business, we tend to make a separation between professional status and this other area, which is personal, which includes emotional status. In today’s world, there is much evidence to show that a lot of people in human resources are suffering empathy burnout.
This comes from two things. One is a hugely difficult economic situation and business environment. Except for a few lucky ones, on balance there's a lot of uncertainty, whether it's war, economics, global climate, whatever. It makes everyone nervous. People are having difficulties and hunkering down, so it's a bad-news environment with fear factors, and bosses are saying, "You better batten down the hatches." The people who are the intermediary, generally relaying the information from the executive suite to the employee workforce, for example, letting go of people, are the HR team. They are also having to deal with, for example, the decision whether or not to have flexible work hours or work from home, and how all that is supposed to happen. They're trying to do that with humanity, all the while being whipped and pressurized because performance is difficult and the outlook is uncertain.
Ross: I want to get to the theme of your book “Heartificial Empathy” and some of the ways in which technology plays a part. But first, in the short time we have in a podcast conversation, how is it that people, leaders, and anybody can move to have more functional empathy? Empathy is enormously valuable for all sorts of reasons. You can’t sell anything to anybody unless you really understand the situation, you can’t engage, you can’t motivate. These are very powerful and pragmatic capabilities but also ones that give us a richer life. Are there any ways in which we can develop our empathy?
Minter: There most certainly are. If someone's listening to this and nodding their head already, "Oh, it's like Ross said, it's great for business, it helps management, it's going to allow you to sell, it's going to be great for customer interactions, and so on," then you're drinking the Kool-Aid. The challenge and reality is that a lot of businesses struggle to have, let's call it, organizational empathy. Part of that is the culture, but also the people in the C-suite. Are they modeling the behavior? Wherever you sit in the organization, is your boss, is the executive team, modeling empathy? Or are they struggling to deal with the pressures? Because there are two things that kill empathy in organizations. The first is stress, related in large part to performance issues. The second is lack of time: because I'm running from meeting to meeting, I don't have time to listen to you, so I park that for another time and ultimately never allow that time to happen.
If you’re interested in becoming empathic, I’ll get to the concrete methods in a moment. But first of all, understand why you want to be empathic, because empathy is just a tool, and it can be used for good and bad purposes. Ask a sociopath; that’s their primary tool. Why do you want to become more empathic? How truly aware are you of your and your organization’s empathic levels? Some say, “Oh, I’m already empathic.” In the studies I’ve done, year after year, between 72 and 80 percent of individuals will describe themselves as being above average in their level of empathy. Problem! This issue of self-awareness is genuinely important, especially in the higher ranks.
One of the key qualities of being empathic is being curious. One of the key elements of being curious is having the humility to absolutely wish to understand or learn from somebody else. Because if I know it all already, then I’m going to start cutting you off, I’m not going to listen, I’m going to be thinking about what I’m going to say next, and that doesn’t allow the other person to feel heard. Having that self-awareness is important. Then understanding, genuinely, where you are as an organization.
Finally, just to come back to your question, Ross, the things that can help you generate or be more empathic? Assuming you’ve got the self-awareness, one lovely idea is to start reading much more fiction. I don’t know about you, Ross, you and I write nonfiction for the most part. But fiction, when it’s well-written with great dialogues and the development of personalities and characters, allows you somehow to get into the minds of other people, people who are not like you. For example, it can be a woman, or it could be someone of another race, religion, or country. If it’s well-written, it allows you this nuanced, complex understanding of how other people are. That’s one very lovely, and easy thing to do. There are, of course, many others.
Ross: I’ve always been an inveterate reader of fiction. There are times when I’ve not read as much fiction, but I’m still reading as much fiction as non-fiction these days. It’s a delight in its own right. We are a part of the human race, and amazing writers make us see things, and tug on our heartstrings in wonderful ways.
Minter: One of the things I've been reading most recently is a couple of dystopian novels. They also have their place, all the more so in today's world, where there seems to be this huge divide. On one side of that divide, I see the do-good, positive-intention version of the world, and on the other side, the highly fearful, highly compartmentalized, worried about the future, maybe more tribal in thought. The book I encourage everyone to take a look at, which is about to hit the 100th anniversary of its publication in 2024, is Yevgeny Zamyatin's "We", which was first published in English in 1924 and finally came out in Russian in the 1980s.
It is a tremendously interesting read because it fundamentally looks at this idea of who are ‘we’. What is ‘we’? When we belong, we belong to what? Where’s the place of ‘I’? Are we allowed to have ‘I’? Is ‘I’ good? Is ego appropriate? Then you have the narcissism on the other side. It’s a really interesting discussion. Part of the biggest paradoxes that we have to resolve or just live with, in business, and life, is learning this paradox between the need to feel different and yet belong.
Ross: That’s fascinating. This echoes my own quest over life, the role of ego, and how we create a world together. But to your point about the value of dystopian fiction, Margaret Atwood explicitly says that she writes dystopian novels to help us avoid the future she describes. She already has played a role by helping people recognize things that are happening, which echo some of her themes which have led to people being able to express themselves more clearly about what it is they don’t want.
Minter: It brings up the notion, Ross, of history, Margaret Atwood being rather well-versed in history. I've had her nephew, Dan Snow, on my show a couple of times. The thing is, we've kind of lost the plot as far as studying history is concerned. If you don't study history, how are you going to avoid repetition? Frankly, when I've been talking to professors of history at universities here in England, as well as in the United States, their commentary is disheartening. We no longer wish to study history as facts and events that happened in a context; we only want to criticize it according to today's values, which is not going to give us a good understanding of what happened.
Ross: A very apt turn of phrase, "lost the plot," as in, that's the plot that we have, which we've lived through as a human race, and which can potentially inform our path forward.
Minter: Absolutely! Storytelling has a great value.
Ross: To switch to the themes of "Heartificial Empathy", your recent book, which you've revised with the rise of generative AI. Amongst other points, machines can engage us with emotion, evoke empathy, and express empathy in various guises. In a world where artificial intelligences can be empathic, or evoke empathy in us, what are the things we need to be thinking about the most?
Minter: I love your group about humanity and AI, by the way, Ross. I’m enjoying just the beginning of that. The first thought is that how you think of AI will inform how you use it. In other words, are you worried about everything, in which case, you’re going to be operating from a place of fear? Or do you have a positive bent? Then are you a little bit idealistic about what its potential is and putting your head in the sand as to what could go wrong? It’s important to have that as a beginning piece. My approach would be to think about what is strategically important for you and your business. Then, how can AI supplement and augment you and your human intelligence? That’s the general piece. It’s amazing how many things are out there.
Then you have to think about your ethical framework. How do you want to bring that in, in a way that's appropriate? Are you going to be a bit too goody-two-shoes about it, as in expecting it to have a higher standard of operation than we as human beings have? Or are you going to have a more realistic understanding of what you're trying to achieve? Are you prepared to experiment, fail, test, and try again? You're going to need a lot of that, with humility, because life is tricky as it moves along. Then, basically, consider that a lot of employees are probably going to be worried about the impact of AI, so position it in a way that they hopefully won't sabotage, so that they are willing to work with it and work with you. Think of it as a skill acquisition, and do it.
Several organizations are considering how to use empathically formatted and coded artificial intelligence to help certain functions in business concretely, such as marketing, communications, CRM, and customer service. But it doesn't mean removing the human being; it's about augmenting, facilitating, and taking out some of the nutty, silly tasks, making people better and eventually more effective, sometimes by being graded on how empathic they are in the way they approach their communications.
Ross: My framing around this is "humans plus AI," as in how can humans and AI together be better? How can AI amplify humans and humanity? Looking at a customer service context, of course, we can have a human interacting with the customer, we can have AI interacting with the customer, or we can have the AI supporting the human: some combination of them. A lot of customer service is now automated. I'd like you to address this: to what degree should the AI express empathy, whether that's really felt or not? How can humans and AI together be more effective in expressing or living empathy?
Minter: I hear this regularly. This is like a consultant's answer, but it depends. For example, are you in B2B or B2C, and how B2C are you? Are you dealing with millions upon millions of people? Then the need for some kind of scalable response system becomes all the more evident. What are you trying to achieve? How real are you as a human being? Then, how can you create a copacetic, consistent-with-your-culture type of AI service? The reality is we are very far from having empathic AI. What we're getting better at is trying to tag or identify more empathic responses. There's a very important distinction worth bringing up: within empathy, there is the giver of empathy, the one who's being empathic, and the one who's receiving it.
I like to make this distinction because, in essence, sometimes someone can be the giver and be empathic, but the other person doesn't feel it. That's not necessarily bad; it might just be that the other person isn't aware of it. For example, say I'm a product manager thinking of a new product for a person like Mr. Dawson. What would Mr. Dawson really like? I think he would really like this, this, and that. That would fit into his day and really be useful for him. If it's a pen, he'd like a nice click when it closes, because that's satisfying. There's a little user-experience element to it. But when you use that pen, you don't know that I was being empathic; you're not going to say, "Oh, that pen designer was really empathic with me." You might say, "Oh, this is a freaking great pen," but you're not going to associate it with the quality of empathy. That's one example.
But other times, you might try to be empathic, and the other person doesn't feel it; maybe that person is in a deeper or worse space. This notion of giving and receiving depends on what you're trying to measure. In the case of customer service, which you brought up first, when you are responding to somebody, the question is how much data you have on your customer base, and how much of the work you're requiring your customer service team to do can be improved. For example, if a call comes in: oh, I can identify the call; that's this customer. That's the profile of this customer; this customer likes to be treated really quickly, just short sentences, wants effectiveness, and doesn't do any niceties like "How are you doing, sir?" Go straight to the core and answer the question. Alright! That's great. I'm informed as to how I should operate with this customer.
If the customer comes in angry: oh, I didn't expect that. The software can help me: Minter, relax! This is how you're going to deal with it. Here are four options for how you can reply. The first one is highly empathic but not very good for business. The second one is less empathic and a little better for the business, and so on, so you can have different measurements. You're not necessarily always going to take the most empathic option, depending on the culture and what your objectives are. Then you have these four answers, all pre-typed, and you, as the customer service agent, have the agency (a keyword) to choose which of the four you think is best, based on the criteria and valuations that you as an organization want to set up.
This is something people are concretely doing at Digital Genius, which is one organization that does that: helping the agent be more informed about the incoming customer, giving some tips on how to be a little more empathic, just attitudinally, because when the other person is spitting fire at you, it's hard to be empathic at that moment, and then coming up with a pre-typed response, so you don't have to worry about typos or mistakes.
Ross: Pushing a bit further, one of the fascinating things is the degree to which AI can engage us emotionally. We have Replika, some of the characters on Character.ai, and many others. I've forgotten the name of it, but there's a Chinese service with hundreds of millions of virtual boyfriends or girlfriends on it. In a way, that goes beyond empathy. Perhaps one of the things that makes us emotionally engaged is that the other person is empathic; that's probably a very important part of it. But it is essentially one of the frontiers we need to explore and discover: what happens when we become emotionally engaged? People are already falling in love with AI chatbots in various guises, and that will certainly continue. Where are we? What are the opportunities and challenges of these deeply emotionally engaging AI conversationalists?
Minter: We are moving along, and my quip would be that we're in a very lonely society, and people are very willing and desirous of having emotions because, heck, we're not just lonely, we're sad. The levels of anxiety and depression in the world are huge. People are very quick to turn inward; no one has the time to hear anybody else; it's all about me. On top of that, not only is it all about me, but it's all about what I feel. Forget the facts. My feelings are the truth. My truth is better than Trump's and yours. That's a level playing field. But when it comes to organizing this thought, I have a three-part version. One is: what are you trying to achieve? What is your ambition? What is your intention? The second is: what is the ethical framework that supports that? The third, which is important, is: what is your business model?
You need to combine those three things as you look at what you're trying to do with AI, whatever business you're in. You can perhaps make your AI better than you are as an organization; you can make it perhaps more empathic than you are as an organization, because there is such a thing as organizational empathy. But is that going to make for a better experience overall for your customer? Or maybe you're just looking to make a quick dime and sell the company in 18 months, in which case the ethical framework is usually thrown out of the window. This takes into consideration what your intention is and what your business model is. I look at those three things as being important when you look at AI.
Ross: To round out here on this idea of artificial empathy, where machines, in many cases, will be effectively better at expressing empathy than many humans, or at least that’s a premise I would make, where does this go? In broader society, in terms of all of us, and how do we engage with that? What are some top-of-mind thoughts on what’s coming and how we should be thinking about this world where we do literally have artificial empathy in a very real way, as well as human empathy?
Minter: Here’s where I’m going to go with this, Ross. There’s a lot of artifice in general, not just AI or artificial empathy. I feel as a society, we tend to live through avatars. I use the word avatars as a metaphor for an alternative reality. We generally believe that my reality is the right reality, and we’ll promote everything to make me better and make me look good. In this idea of virtue signaling and looking good, I feel we’ve lost the plot again as to what is reality. The things we’re looking at, in certain cases, it’s all about making a quick dime, making a lot of money, flipping the business to some VCs or whatever. But worse than that, as a society, we are so grotesquely egotistical that we think we deserve to live forever, that we are the first generation that deserves to have immortality.
There are people in the transhumanist department who are thinking this, and they’ve completely detached themselves from reality, which is that we are mortal, highly fallible, imperfect beings. The beauty of life is dealing with challenges, not pretending that it’s perfect. As human beings, we’re disconnected from one another – loneliness. We’re disconnected from reality. As such, we’re making sense of things that are disconnected from reality. Have you ever heard of apophenia? This is a beautiful word, which means making sense of things that actually don’t have any rational sense underneath them. You invent sense out of the stars, “Oh, I see the stars; that must mean that tomorrow, I’m going to make money.”
We’re in a world where we lack true sense, with sense being the idea of rationality as well. We’re in this high-feeling mode, highly detached from reality, and desperate for sensing connection for true sense. We’re so desperate that we’re prepared to go for anything to have meaning. I would like us to focus on being a little bit more real, being a little bit more self-aware, not being so self-indulgent, and thinking more about community and thinking about actually what we mean by “we”, not in a naughty world where everybody is beautiful, and everybody deserves to belong because I think that’s a beautifully idealistic idea that has no base of reality. We have to learn how to find our limits, to say that good intentions can lead to bad outcomes, and be a little more realistic about the way we approach things, including, of course, the way we encode AI.
Ross: Yes, the nature of society is changing rapidly. There are some fundamentals to humanity, many humans with, as you say, human fallibilities brought together. Now we’re amplifying that in many ways with the technology we’ve created which, in a way, comes back to who we are in this world. Minter, what are the best places for people to find you and your work?
Minter: Generally speaking, it's on a paddle tennis court, because I'm a nut for paddle tennis. But if that's not the way you work, I also like to write. I get up pretty much every morning and write about 1,000 words a day. The hub for most of my writing is minterdial.com. There's that little company over at Amazon that carries a few of my books. I've just released a white paper called "Making Empathy Count," which looks at this notion of how you evaluate and measure empathy. That's also available on Amazon. Otherwise, I'm out there on social media, still drumming but also listening. If you talk about mental models: spend more time listening than ranting. I've been ranting on this podcast with you, Ross. Thank you for listening and indulging me. But we should spend a whole lot more time listening with curiosity and with genuine humility, and not necessarily thinking about how I'm going to make the world better, but at least putting effort into making your world, a little part of the world, a little better.
Ross: Fantastic. Thank you so much for your time and your insights today, Minter.
Minter: It’s been a pleasure over a glass of scotch in London, but, thank you very much for having me on, Ross.

Oct 18, 2023 • 37min
Regan Robinson on spending time in the future, using imagination, things that make you go hmmm, and hyper-awareness (AC Ep15)
Regan Robinson, a futurist, discusses spending time in the future, using imagination, and cultivating well-being. She emphasizes the transformative power of imagination and explores ways to overcome cognitive biases.

Oct 11, 2023 • 39min
Toby Walsh on the differences between human and artificial intelligence, our relationship to machines, amplifying capabilities, and making the right choices (AC Ep14)
Toby Walsh, Chief Scientist at UNSW.ai, talks about the differences between human and artificial intelligence, the complexity of defining intelligence, deceptive design choices in AI, the symbiotic relationship between human creativity and AI capabilities, the role of probabilities in large language models, and outsourcing human tasks with AI's evolving roles.

Oct 4, 2023 • 38min
John Hagel on moving from threat to opportunity, the passion of the explorer, learning platforms, and scalable learning in practice (AC Ep13)
In this podcast, guest John Hagel, a leading Silicon Valley entrepreneur, discusses the importance of scalable learning and redirecting individuals towards more valuable work. He introduces the concept of the 'Passion of the Explorer' and highlights the impact of collaboration, trust, and shared excitement in problem-solving. Hagel also explores the influence of the Bay Area's optimistic mindset and emphasizes the need to frame challenges as exciting opportunities for creating a flourishing world. Find more information about his work on his website and through his social media presence.

Sep 27, 2023 • 39min
David Berkowitz on AI in marketing, gaining superpowers, amplifying marketers, and the future of agencies (AC Ep12)
David Berkowitz, a veteran marketing agency and technology leader, discusses leveraging AI as a marketing superpower, streamlining repetitive tasks with AI, and the evolving landscape of AI in organizations. He highlights the immediate usability and versatility of AI in various applications and emphasizes the importance of adapting to industry changes and utilizing data effectively in marketing strategies.

Sep 20, 2023 • 0sec
Anuraj Gambhir on wise mirror, technology for spirituality, the state of neurotech, and bliss mode (AC Ep11)
Anuraj Gambhir, a futurist, consultant, and educator, discusses Wise Mirror, neurotech, and the integration of technology in daily life for improved health and performance. Topics include near-infrared scanning technologies, the mind-body connection, and the critical transition from information to wisdom. Ethical considerations in AI and lessons from the Blue Zones for a balanced life are also covered.

Sep 13, 2023 • 39min
Genevieve Bell on the history and relevance of Cybernetics, frameworks for the past, present and future, and decolonizing AI (AC Ep10)
Genevieve Bell, Distinguished Professor at Australian National University, discusses the history and relevance of cybernetics, frameworks for the past, present, and future, and decolonizing AI. The podcast explores the origins of AI and its connection to learning and intelligence, while also delving into the concept of decolonizing AI and unraveling historical articulations. It emphasizes the need to critically examine power dynamics and challenges the assumptions made about learning and intelligence in AI. Genevieve's work offers a broader perspective on technology and cybernetics.

Sep 5, 2023 • 0sec
Jeremiah Owyang on amplifying humanity, enterprise excellence, autonomous agents, and AI-business alignment (AC Ep9)
“The goodness of what humans desire, AI will do that; the bad players, these tools will also amplify that. It’s for us to determine the course of how these technologies will be used.”
– Jeremiah Owyang
About Jeremiah Owyang
Jeremiah is an industry analyst based in Silicon Valley and an advisor to Fortune 500 companies on digital business, as well as an entrepreneur, investor, and the host of tech events, including some of the current major AI events in the San Francisco Bay Area. He has a strong global profile and has appeared in publications including The Wall Street Journal, The New York Times, USA Today, and Fast Company.
Websites: web-strategist.com
LinkedIn: Jeremiah Owyang
Facebook: Jeremiah Owyang
Instagram: @jowyang
Twitter: @jowyang
What you will learn
How the local AI scene is thriving and offers valuable opportunities for enthusiasts (02:53)
AI’s potential to amplify humanity and reshape society (04:40)
Recognizing the fear of AI replacing humans and its underlying causes (06:36)
The potential for a mutually beneficial division of labor between AI and humans (08:20)
The Centaur concept, as a fusion of human and AI capabilities (08:44)
Critical role of organizational infrastructure in AI adoption (10:02)
Highlighting the current fervor and interest in AI across corporations (13:30)
The challenge of AI Integration in go-to-market (16:19)
The importance of embracing curiosity and staying informed about AI tools and concepts (18:08)
Real-world examples of AI utility (21:37)
Introducing the concept of foundational models and their evolving role in AI technology (22:37)
Addressing the potential future of AI that involves extensive data access (25:10)
The centralization of AI and the race for data (28:37)
The importance of business models in AI ethics (29:07)
The critical considerations for enterprises embarking on AI projects (31:23)
Episode Resources
OpenAI ChatGPT-4
Midjourney
Hugging Face
Salesforce
Adobe
Agent GPT
Book
Impromptu: Amplifying Our Humanity Through AI by Reid Hoffman
Movie
The Matrix
Transcript
Ross Dawson: Jeremiah, fantastic to have you on the show.
Jeremiah Owyang: Ross, I’m delighted to be here. Thank you.
Ross: You’re deep, deep into AI. I’d love to get the big-picture perspective on what you’re seeing happening and what the potential is, this year, next year, and beyond.
Jeremiah: Sure. I’ve been living in Silicon Valley since the dot-com era, so I’ve seen approximately five tech trends, and I haven’t seen a movement this big, perhaps, since the dot-com era. There’s notable excitement and energy all across Silicon Valley and San Francisco; you can touch it, you can feel it. I attend a minimum of three AI events per week so I can stay abreast of the rapid changes that are happening. Most of the AI startups and foundational models are in the Bay Area, so it’s happening here, plus the big tech giants, who are all moving into AI.
I also host an event series for AI startups called the Llama Lounge. It’s a clever name; hundreds sign up, and more than ten different startups demo at each one. Also, I have been an investor in AI startups since 2017, and I’m working with a VC firm. I’m doing other things for corporate executives as well. I’m definitely entrenched. Ross, in June, there were 84 AI events. In July, the “slow month”, there were 69 AI events. Those are just the public events that we know about. There are private events, and co-working mansions, and events with the tech CEOs. There is so much happening, and I’m excited to share that knowledge with you today.
Ross: Fantastic. We’re particularly interested in Humans plus AI. Humans are wonderful, AI has extraordinary capabilities. For the big picture frame, how should we be thinking about Humans plus AI, and how humans can amplify their capabilities with AI?
Jeremiah: I think that the verb “amplify” is correct. There is a book by Reid Hoffman, co-written with a friend of mine, called Impromptu, which talks about AI amplifying humanity. That is the right lens for this. All the tools and technologies that we’ve built throughout the course of human history have done that, from fire to splitting the atom to AI. I do believe AI is at that level; it is quite significantly going to change society in many ways. The goodness of what humans desire, this tool will do that; the bad players, these tools will also amplify that.
It’s for us to determine the course of how these technologies will be used. But there’s something different here, where the experts I know believe that we will see AGI (Artificial General Intelligence) equal to human intelligence within the decade. This is the first time, Ross, that we’ve actually created a new species in a way. I think that’s something quite amazing and shocking. These are tools that will amplify what we desire as humans, what we already do.
Ross: If we frame AI as a new species, as you put it, a novel type of intelligence, one of the key points is that it’s not replicating human intelligence. Some AI has been trying to model human intelligence and neural structures, and others have been taking other pathways. It becomes a different type of intelligence. I suppose if we are looking at how we can complement or collaborate, then a lot of it is around that interface between different types of intelligence. How can we best engineer that interface or collaboration between human intelligence and artificial intelligence?
Jeremiah: That’s a great question. I think that we can use artificial intelligence to do the chores and the repetitive tasks that we no longer desire to do. Let’s acknowledge that there’s a lot of fear that AI will replace humans. But when we dig deeper into what people are fearful of, they’re more fearful of the income loss from some of the repetitive roles. Those are not always the things they sought to do in their careers; it’s just where they’ve landed, doing tasks that are repeated over and over. If it’s just using your keyboard and repeating the same messages over and over, that is really not enriching to the human spirit. This is where AI can help complement us, so we can level up and do tasks that require more empathy or connection with humans, or unlock new creative outlets.
Ross: One approach is a division of labor: all right, the human does this, the AI does that, or robots do this.
What is more interesting is when we are collaborating on tasks. This could be from anything like, ‘I’m trying to build a new product.’ There are many elements within that where Humans and AI can collaborate. Another could be strategic thinking. In terms of how we build these together, rather than dividing, separating, and conquering, where is it that we can bring together to collaborate effectively on particularly higher-level thinking?
Jeremiah: Yes, those are great things. AI is great at finding patterns in unstructured data, which humans struggle with. Humans are often able to unlock new forms of thinking in creative ways that are not currently possible for machine learning or Gen-AI. Those are the opportunities where we segment the division of labor. I want to reference that I had the opportunity to interview Garry Kasparov, grandmaster champion of chess, at IBM of all places, and his thinking is that we want to look for the centaur. He believes the best chess player in the world will be a human, and she would also be using AI. He wants to create a league where humans with AI would be combating other humans with AI in a chess battle. He believes that would be the greatest chess player ever. It’s not just a human or just an AI; it’s that centaur, that mixture of the species coming together. And I think Garry is right.
Ross: Garry, specifically, in our context says that it’s not about how good the AI is or how good the human is, it is around the process. The quality of the process is what determines the ability of that centaur, and the human and AI working together to be more effective. This comes down to the idea of the process of bringing together humans plus AI. Thinking about it from a large company perspective, how is it that we can design processes that bring together humans and AI to create this centaur that can transcend either humans or AI individually?
Jeremiah: In August, I went to the largest AI business conference that’s independent from a tech vendor. There were 2500 business leaders who are leading AI at large companies and government organizations, most of them are American, I want to add. One of the biggest challenges right now is that the organization is not even set up correctly to prepare for AI. What I found is that there are about three different models in which I’m seeing AI being grouped. The first one is product innovation. The second one is a go-to-market, which is marketing, sales, and customer care partnerships. The third would be loosely called Enterprise, which is operations, finance, IT, legal, and security.
Those three groups are what I tend to see; there could be a fourth group, an overarching one that would run a center of excellence for AI and/or define ethics and purpose that would cascade across all three. Bucketing them at those high levels matches what I’m seeing, and I’ve confirmed that with other leaders. Note that they span multiple departments because AI is enterprise-wide. Now, this is just the context; let me set this up. In that room, one of the speakers, who was leading analytics at a large makeup company, a beauty company, polled the room and asked, ‘How many of you have a center of excellence?’ Out of the 2000 people, only 20 raised their hands. That tells us something quite interesting.
We saw this, by the way, in Web 2.0 when I was a Forrester analyst, the social media of the corporation will reflect the culture of the company internally. The way that the social media accounts were rolled out, you can tell how that company was organized. The same thing is starting to happen with AI. If a company is not organized correctly, and there is not a single source of truth from data, data modeling, cleaning the data, plus an ethics layer, and then making sure that the data is being fed back, this is all before it even touches any foundational model or machine learning, then you have multiple versions of AI, and it would be fragmented.
A fragmented organization results in a fragmented data set or a fragmented data strategy, which results in a fragmented AI experience across any of those three groups. That’s the biggest challenge that companies have right now; it’s not so much about machine learning skills or the ability to generate prompts, it’s that they’re not set up correctly from the infrastructure at the get-go, in most cases. That even includes the large tech giants. They’re so large, and they push innovation to the fringes of the organization, that their data is spread across the organization. The big soft skill here, Ross, is organizational leadership across departments; that is the most important thing needed right now, before they can even think about prompt engineering or using the tools.
Ross: It’s around having common data governance, common data models and architecture, and then coordination across whatever AI models sit on top of that.
Jeremiah: Correct. Thank you for succinctly articulating the exact steps; I’m going to rely on you for that. Ross, in addition, there’s a lot of heat and interest right now in AI. Every corporation wants this; however, a few weeks ago, I visited a colocation center in Santa Clara. For those that don’t know, a colo is where corporations house their servers. I visited one that was focused on AI. Now, big companies are at a crossroads: do they go to the giant hyperscalers, like Amazon, Google, or Microsoft, and give them data so those vendors can train their models against your own customer data? It’s like paying rent to somebody while they sleep in your bed. That’s basically how they think about it. Or do they train their own models with their own data in their own private colocation centers, or on-premises data centers and servers, where their data is safe?
Now, the latter option is quite expensive. Right now, there is a wait time of thirty to fifty weeks for Nvidia chips. Yes, there are cheaper versions out there, but that’s a long wait, and most of those chips have already been pre-purchased by the hyperscalers. Then you have to have the power and servers. Right now, a full server stack for AI is 1.25 million dollars, if you can get it. The capital expenditure for a big corporation to lean into this, plus staff and ongoing maintenance, is a bet of millions of dollars. That’s just for one AI, for one of those products or groups that we talked about, let alone for the enterprise.
The issue here is that even if they get the organizational alignment and the set of criteria that you just listed out, their project could still be a business failure and the corporation may lose interest and appetite in a few years, resulting in a net negative project. That’s another business model issue that also has to be contended with.
Ross: Generally, what are the parameters that would suggest whether the enterprise should be looking at using off-the-shelf models and off-the-shelf training, as opposed to being able to build their own models?
Jeremiah: Regulated industries. I have been speaking to the heads of AI in financial services and pharma. They’re more likely to grab off-the-shelf open source right now; the common model, surprisingly, would be Llama or Llama 2, which is built by Facebook of all people. They can download that from a repository like Hugging Face, and/or Falcon. There are other players out there offering bundled suites that would do this on a safe cloud or a private cloud away from the big hyperscalers, or set it up on-premises. There are other ways to do that. It would require a significant commitment from the C-suite to set that up, unless there was an IT unit already ready to deploy it.
In most cases, a marketing group or a sales group will not have time to wait for the enterprise to do that; that could take months if not years. They’re more likely to use a cloud from Salesforce and/or Adobe, which are now offering AI, in addition to the three hyperscalers that I mentioned previously. That’s what is most likely going to happen. As a result, you’ll see fragmentation between the go-to-market team, which I broke out earlier, and the product team, which is more likely to have it on-premises because they have the infrastructure, and then you have a breakage. This results in a broken customer experience, because the product might have AI integrated, but when it’s time for customer care or marketing, their systems are not talking to each other and the customer is going to be quite frustrated. They don’t care which department the AI belongs to; they just want their problems fixed.
Ross: Interesting. There are a lot of architectural or structural issues which do need to be led, as you suggest, from the top of the enterprise. One of the things that you’ve said is that in this world, we need to become a master at using AI tools. What is that process? Is it all up for us individually to go out and learn how to engage? Do enterprises need to roll out education programs? How do we become masters at using AI?
Jeremiah: I believe the listeners of your show are curious. Even if you’re not technical, you follow Ross, and you’re going to explore new ideas because Ross is your leader. I am sure people here are trying some of the most basic tools, and you should become familiar with those. It’s also important for you to train your kids on these things; if they’re preteens, do it with them and be careful. I do this with my young children. We’re doing prompts and creating stories for fun; the kids are using Gen-AI and understanding how it works. That same attitude of curiosity, applied safely, should go for you as well. Just like you learned the Internet and email, learned to use apps on your mobile phone, then social media, and then maybe Web3, now you need to use this next technology set; there’s no question.
Now, this one has a very simple interface; these tools are quite easy to use, aside from Midjourney. Most of them are just text-based chats. Yes, personal exploration is required for you to stay current, in all things in life, and this one is coming at us quite quickly. I also invested in a continuing education class from an esteemed university; it did not cost me much, around USD 600 total, which is a tax write-off, and I paid out of pocket. That’s something I’m willing to do to make sure that I’m current. For those working at companies, you should request that, use your educational credits, and/or ask HR to offer classes. There’s no shortage of classes now, including those provided for free by Khan Academy, LinkedIn Learning, and beyond. There is no shortage of content to learn from. Those are the ways that needs to happen.
Ross: That’s very sound advice as in getting there and doing it is the only way to learn. You’ve been talking a lot about AI agents in quite a few different contexts.
Let’s take a step back. What’s an AI agent? Why is it important? Best we can dig in from there.
Jeremiah: Yes, great question. What is an AI agent? For that term, you might have seen a science fiction movie called The Matrix, where there are independent agents, good agents and bad agents. There are some that would help the main character, Neo, and quite a few that were antagonists against him, Agent Smith in particular, in all of his forms. Those agents, good or bad, operate independently with very little human oversight. They are like living creatures. There’s a term sometimes used for them, baby artificial general intelligence, otherwise known as baby AGI, like infants, because they’re a precursor of human-level general intelligence. These tools need little oversight. The easiest way to try these is to use Agent GPT. You can put that into a search tool, find the site, and try it out with or without a login. There are different variations; you can purchase additional credits.
The most common use case is to book a complicated travel experience. For example, Ross, you travel quite a bit; you know how travel works. But imagine somebody who is short on time, or new to travel, could say ‘Book me a trip to San Francisco from Bondi Beach’, and it would list out all the things you need: passports, vaccinations; then it would go find flights, then find hotels. It would do all these things with little human intervention across multiple different sites. Then, surprisingly, in some cases it does more. I asked it to help me improve my cardiovascular fitness, and it actually started to code. It started to code, in Python, an app to track my fitness. I didn’t ask it to do that; it started generating code, which I then grabbed, and if I had those technical skills, I could get it to build an app. It’s doing all those things without any human intervention. That’s what an autonomous agent is. I hope that serves as a definition and an example. These are rising at a rapid pace.
Now, there’s another technology set, which you are all quite familiar with, called a foundational model. The most common one is called GPT. The foundational model is trained on human knowledge and intelligence, then it’s tokenized, and it creates new variations, and anticipates what our needs are. Now, those foundational models are starting to also become like autonomous agents. You can see those markets are starting to merge. I did a diagram called the AI tech stack; you can go search for it and find it. The foundational models currently are separated. But having spoken to some of the CEOs of those companies, you can see that they are quickly moving towards an AGI, which means they would all have that.
Long story short, to summarize, autonomous agents are the precursor to Artificial General Intelligence equal to human capability. They’re being developed quite rapidly. They would be living next to us, and supporting us. I imagine, Ross, that we would have different autonomous agents, just as we have as many email accounts or as many social network accounts, as an example. Ross, you’ll probably have a personal intelligence agent; you’ll also have one for your personal business. If you were working at a company, they would assign one to you and probably take it away from you post-employment. You might have one provided to you by your healthcare provider that just focuses on that, with a very dedicated set of data that’s regulated by, typically, governments. There might be wacky fun ones out there as well that do things for personal interests. Right there, I can imagine three to five different personal agents that are working alongside you; you have a pocket of experts, doctors, MBAs, and geniuses at your disposal working for you while you sleep.
Ross: Okay, that’s a compelling vision if we can make it work the way we want it to work. One of the first questions that comes to mind is, again, the interface between the human and the agent. Will this be something where we can just use text or speech to be able to tell it and it interprets it? Will it ask us questions to clarify? How do we make sure that the agent is truly aligned with what we want, does understand our intentions even if we’re not good communicators? How do we get that alignment with the agent and ourselves?
Jeremiah: What I’m going to say now is going to unnerve some people, but others, they’ll find it a wonderful solution. Let’s see, Ross, which way you think on this. Having spoken to the leaders who are building these things, two things are going to happen. One is it’s going to look historically through your data which means you will expose all your emails and it’ll already find your public social media. You have published quite a few things on your amazing website, including your awesome frameworks. It would already grab that information and as you allow API access to your personal apps, it would get that.
Secondly, it would compare that to other people who are like you. You and I have a common friend, Chris Saad, who’s a thought leader in his own right when it comes to technology. I consider you my very smart peer; we’re similar in many ways when it comes to the business content that we produce and think about. From these different data sets, your personal historical data and data from those who are like you, it can start to anticipate what your needs are and what you’re thinking. By the way, that’s not new; social networks and Google have been doing that for 25 years. Google is ancient, 25 years old, and Facebook has been around since 2004, about 19 years. All right, that’s part one. That’s not that new, but we’re going to expose a lot of information.
Part two is where people get a little nervous. Some of the foundational models will be listening to and recording everything that you’re doing in real time, with your permission. Some of them will have the microphone on at all times so they can listen to the context of what’s happening. Of course, this needs to be done legally, with rights and permissions, so it can understand what your needs are. Maybe there’s a camera on, so it can understand your facial expressions. I can see you right now as we’re recording and get real-time feedback even though we’re in different parts of the globe; that’s a very important piece of data, and the AI will have that as well, including voice inflections, background noise, and how much sleep you’ve had. The more information you give the AI in real time, the more accurately it will be able to understand the context and predict. Then, of course, finally, you would give it explicit prompts, as you mentioned. Ross, given those three phases I talked about, where do you lean on this? Optimistic or pessimistic for that future?
Ross: It completely depends on how it is architected, and the ownership. If this is run by a current tech giant, I would be extremely cautious. If we’re able to build this into a decentralized system where I have a reasonable degree of data ownership or control, then I’m all for it. That’s one of the challenges; I’ve believed so much in decentralized data sovereignty and all of these things for a couple of decades now. We’ve really seen very little progress in the big picture.
I think the promise of what you describe is incredible in how we can amplify ourselves. The challenge is whether we can do this without it being run by tech giants, of whom we can question whether they really have our interests at heart.
Jeremiah: 100% agreed. That’s a bigger topic; it could fill a whole podcast on its own. In short, I do see this AI movement heading towards centralization. It’s already centralized, aside from some of the open-source models, but then those open-source platforms become very strong. Even Hugging Face has trained data, right? That’s already a centralized database in a way. That’s one issue. Big corporations are the ones that can afford to do the training, so the training is already concentrated. Whoever gets the most data has the most accurate model. There’s a race to get the data.
There are ways that you can segment your data to make sure that it doesn’t get overly shared. But what’s key is the business model. This is where Facebook let us down: their business model is a free product. Now, if these AI agents, as we just discussed, are a premium model and we pay, we know who the actual customer is. The issue is that rich people benefit first; they get compounding benefits, while those in emerging markets who don’t have that money fall behind on the innovation curve. Then we create yet another tiered society. This is why, again, going back to whether AI is revolutionary: in many ways, it’s amplifying and echoing what already exists in society. I just want to make sure it’s clear that we shouldn’t cast blame on the tools only. They’re just doing what we have already been doing in society.
Ross: Yes. That’s a great point. The vision you described is compelling. The reality is that even if it is run by tech giants, it is such a compelling proposition that most people will go along for that ride.
Jeremiah: Yes, convenience and price are… When I was a Forrester analyst, we researched privacy. We asked people, ‘Is privacy important to you in the era of Web 2.0 and digital?’ People said, ‘Yes, very important.’ Then we asked, ‘How many of you have looked at your security settings?’ None; it’s like under 1%. How many have read the terms of service, which, of course, are becoming more challenging over time? We can consume them now with GPT. But that aside, how many of you are willing to pay for a social network or email? No, no, no, I want free. That’s an issue that we have.
Ross: That’s a lot of great advice for individuals, but let’s pull this up to enterprise leaders. What is your advice to enterprise leaders in a world where AI is changing the nature of work and the nature of value creation, and what needs to be put in place to understand the value that humans can still bring to this world?
Jeremiah: Ross, that’s such a big question. At a high level, for any AI project that’s rolled out, the enterprise needs to solve an existing pain. Look for where there’s a breakage, perhaps in customer care, or in marketing, or in sales; those are where you want to use AI to solve problems, because those are the only programs that will be sustained over three years, and we’re going to need that for them to be successful. Not just doing skunkworks. There’s been a trend recently in corporate innovation programs where many of them don’t have separate budgets; in fact, they roll up to an existing P&L, a product team in most cases, where the project can land and be incorporated, versus a skunkworks. The age of skunkworks is over; a lot of those skunkworks projects, those innovation outposts, got destroyed during the pandemic. Now we’re tying it back to business goals. That’s step one.
Step two is having a clear… I’m not even sure. I watched presentations at this Ai4 conference from Deloitte, which wants to sell consulting services around setting up AI centers of excellence. They have a wonderful framework and a process, and it was very idealistic about getting all your stakeholders. But at the end of the day, there’s a real challenge here, Ross, because the data is owned by each BU, and the customer relationship is passed from department to department. It’s not clear who the sovereign data owner is because there are so many people involved. Can there be a single data owner across the enterprise? Is that the CIO? The Chief Digital Officer? The Chief Strategy Officer? The Chief AI Officer, which is now a title, by the way? Even though those roles are supposed to be horizontal across business functions, it’s not clear who that individual is, or whether they can even keep the data aligned. That is the second thing to figure out: data alignment.
Those two things, aligning to a real business problem and data alignment, are the two biggest things that you need. The third thing is tying in purpose on the human side. When I think of how enterprises need to engage in this space, we have a mission, which is AI for business and humanity; in some of the projects I’m doing related to enterprise, that is the mission statement. This means you need to be careful about how you communicate this to employees, especially lower-level employees, who are extremely sensitive to the topic of AI; in particular, at entry level, most tasks will be automated and replicated by AI because they are repeatable processes. So instilling humanity, from employees to executives, plus your partners, plus your customers and greater society, is required; you have to consider this ring effect of how AI impacts all of those stakeholders. Just as we did, in many cases, for sustainability, where you had to look at those different rings and how you align them for the organization, that same process needs to happen for AI.
Ross: That’s fantastic! I have to say, it was a big question and that was a fantastic answer in terms of having the value and the intent. The data alignment issue, the way you’ve raised it, is really interesting and is coming to the fore in the world of AI, and I love that you’ve ended with the focus on humanity, which has to be at the center. Where should people go to find out more about your work?
Jeremiah: It’s been a delight to spend time with you, Ross. You’ve asked such great questions. I’m available on most social channels as my first initial and last name, which is JOwyang. I also have a blog called Web Strategist, and a newsletter; I’m available on those multiple channels.
Ross: Fantastic! Thank you so much for your time and insights. It’s been a great conversation.
Jeremiah: I’m so grateful for you. Thank you.
The post Jeremiah Owyang on amplifying humanity, enterprise excellence, autonomous agents, and AI-business alignment (AC Ep9) appeared first on Humans + AI.

Aug 29, 2023 • 35min
April Rinne on superpowers for thriving, seeing opportunities, prioritizing humanity, and calendar brain (AC Ep8)
April Rinne, a futurist, discusses embracing change, navigating uncertainty, and regulating pace in a rapidly changing world. Also explores the interplay between exponential change and human capacity, highlights the need for optimism, and harnessing a calendar mind for balance and flexibility.

Aug 23, 2023 • 0sec
Mark Schaefer on book writing processes, the right questions, community value, and the courage to experiment (AC Ep7)
Mark Schaefer, bestselling author, talks about book writing processes, integrating AI in writing, the enduring importance of questioning, audience vs. community, experimental learning, staying relevant, amplifying your unique voice, and embracing trend curatorship.


