Humans + AI

Ross Dawson
undefined
Oct 16, 2024 • 0sec

Marc Ramos on organic learning, personalized education, L&D as the new R&D, and top learning case studies (AC Ep66)

“The craft of corporate development and training has always been very specialized in providing the right skills for workers, but that provision of support is being totally transformed by AI. It’s both an incredible opportunity and a challenge because AI is exposing whether we’ve been doing things right all along.” – Marc Steven Ramos About Marc Steven Ramos Marc Ramos is a highly experienced Chief Learning Officer, having worked in senior global roles with Google, Microsoft, Accenture, Novartis, Oracle, and other leading organizations. He is a Fellow at Harvard’s Learning Innovation Lab, with his publications including the recent Harvard Business Review article, A Framework for Picking the Right Generative AI Project. LinkedIn: Marc Steven Ramos Harvard Business Review Profile: Marc Steven Ramos  What you will learn Navigating the post-pandemic shift in corporate learning Balancing scalable learning with maintaining quality Leveraging AI to transform workforce development Addressing the imposter syndrome in learning and development teams Embedding learning into the organizational culture Utilizing data and AI to demonstrate training ROI Rethinking the role of L&D as a driver of innovation Episode Resources AI (Artificial Intelligence) L&D (Learning and Development) Workforce Development Learning Management System (LMS) Change Management Learning Analytics Corporate Learning Blended Learning DHL Ernst & Young (EY) Microsoft Salesforce.com ServiceNow Accenture ERP (Enterprise Resource Planning) CRM (Customer Relationship Management) Large Language Models (LLMs) GPT (Generative Pretrained Transformer) RAG (Retrieval-Augmented Generation) Movie Sideways Transcript Ross: Ross Mark, it is wonderful to have you on the show. Marc Steven Ramos: It is great to be here, Ross. Ross: Your illustrious career has been framed around learning, and I think today it’s pretty safe to say that we need to learn faster and better than ever before. So where do you think we’re at today? Marc Steven: I think from the lens of corporate learning or workforce development, not the academic, K-12 higher ed stuff, even though there’s a nice bridging that I think is necessary and occurring is a tough world. I think if you’re running any size learning and development function in any region or country and in any sector or vertical, these are tough times. And I think the tough times in particular because we’re still coming out of the pandemic, and what was in the past, live in person, instructor-led training has got to move into this new world of all virtual or maybe blended or whatever. But I think in terms of the adaptation of learning teams to move into this new world post-pandemic, and thinking about different ways to provide ideally the same level of instruction or training or knowledge gain or behavior change, whatever, it’s just a little tough. So I think a lot of people are having a hard time adjusting to the proper modality or the proper blends of formats. I think that’s one area where it’s tough. I think the other area that is tough is related to the macroeconomics of things, whether it’s inflation. I’m calling in from the US and the US inflation story is its own interesting animal. But whether it’s inflation or tighter budgets and so forth, the impact to the learning functions and other functions, other support functions in general, it’s tighter, it’s leaner, and I think for many good reasons, because if you’re a support function in legal or finance or HR or learning, the time has come for us to really, really demonstrate value and provide that value in different forms of insights and so forth.  So the second point, in terms of where I think it is right now, the temperature, the climate, and how tough it is, I think the macroeconomic piece is one, and then clearly there’s this buzzy, brand new character called AI, and I’m being a little sarcastic, but not I think it’s when you look at it from a learning lens. I think a lot of folks are trying to figure out not only how do I on the good side, right? How can I really make my courses faster and better and cooler and create videos faster in this, text to XYZ media is cool, so that’s but it’s still kind of hypey, if that’s even a word.  But what’s really interesting? And I’m framing this just as a person that’s managed a lot of L&D teams, it’s interesting because there’s this drama that’s below the waterline of the iceberg of pressure, in the sense that I think a lot of L&D people, because AI can do all this stuff, it’s kind of exposing whether or not the stuff that the human training person has been doing correctly all this time. So there’s this newfound ish, imposter syndrome that I think is occurring within a lot of support functions, again, whether it’s legal or HR, but I think it’s more acute in learning because the craft of corporate development, of training, has always been very specialized in the sense of providing the right skills for workers, but that provisioning of stuff to support skills, man, it is being totally benefiting from AI, but also challenging because of AI. So there’s a whole new sense of pressure, I think for I think the L&D community, and I’m just speaking from my own perspective, rather than representing, obviously, all these other folks. But those are some perspectives in terms of where I think the industry is right now and again, I’m looking at it more from the human perspective rather than AI’s perspective. But we can go there as well. Ross: Yeah. Well, there’s lots to dig into there. First point is the do more with less mantra has been in place for a very long time. And I, as I’ve always said, it’s business is going to get tougher. It’s always going to get, you’re going to always have to do more. But the thing is, I don’t think of learning as a support function, or it shouldn’t be. It’s so, okay, yes, legal, it’s got its role. HR has got a role. But we are trying to create learning organizations, and we’ve been talking about that for 30 years or so, and now more than ever, the organization has to be a learning organization. I think that any leader that tries to delegate learning to the L&D is entirely missing their role and function to transform the organization to one where learning is embedded into everything. And I think there’s a real danger to separating out L&D as all right. They’re doing their job. They’ve got all their training courses, and we’re all good now to one of transformation of the organization, where as you’re alluding to, trying to work out, well, what can AI do and what can humans do? And can humans be on the journey where they need to do what they need to do? So we need to think of this from a leadership frame, I’d say. Marc Steven: Yeah, I totally agree. I think you have three resonating points. The first one that you mentioned, you know, the need to get stuff out faster, more efficient and so forth, and make sure that you’re abiding by the corporate guidelines of scale, right? And that’s a very interesting dilemma, I think, just setting aside the whole kind of AI topic. But what’s interesting is, and I think a lot of L&D folks don’t talk about this, particularly at the strategy level. Yes, it’s all about scale. Yes, it’s about removing duplication, redundancy. Yes, it’s about reach. Yes, it’s about making sure that you’re efficiently spending the money in ways where your learning units can reach as many people as possible. The dilemma is, the more you scale a course, a program, with the intention of reaching as many people as possible, frankly, the more you have to dummy down the integrity of that course to reach as many people. The concern that I’ve had about scale, and you need scale. There’s no doubt. But that the flip side of the scale coin, if I can say that, is how do you still get that reach at scale, the efficiencies at scale, but in such a way that you’re not providing vanilla training for everyone? Because what happens is when you provide too much scaled learning, you do have to, forgive the term, dummy it down for a more common lowest common denominator reach. And when that happens, all you’re basically doing is building average workers in bulk. And I don’t really think that’s the goal of scalable learning. Ross: But that’s also not going to give you, well, gives you competitive disadvantage, as opposed to competitive advantage. If you’re just churning out people that have defined skill sets, yeah, doing it. Do you, even if you’re doing that well or at scale. The point is, you know, for a competitive advantage, you need a bunch of diverse people that think, have different skills, and you bring them together in interesting ways. That’s where competitive advantage comes from. It’s not from the L&D churning out a bunch of people with skill sets, X, Y and Z. Marc Steven: Yeah, and I think you’re so right. The dilemma might not be in terms of, you know, the internal requirements of the training teams, strategic approach, whatever, it’s just getting hit from different angles. I mean, when you’re looking at a lot of large learning content, course providers, you know, without naming names, they’re in a big, big, big dilemma because AI is threatening their wares, their stuff, and so they’re trying to get out of that. There’s something, as you mentioned, too, that this is not verbatim, Ross, but something about making sure that, you know, L&D, let me kind of step back that, you know, building the right knowledge and skills and capabilities for a company, it’s everyone’s responsibility, and if anything, what is L&D’s role to kind of make that happen? The way I’ve been kind of framing this with some folks, this is maybe not the best metaphor, analogy, example, whatever. Within the L&D function, the support functions, talent, HR, whatever, we’ve been striving to gain the seat at the table for years, right? And what’s interesting now is because of some of the factors that I mentioned beforehand, coming out of COVID, macroeconomics, there’s a lot more pressure on the L&D team to make sure that they are providing value. What’s happening now is that expectation of more duty, responsibility, showing the return has peaked, and I think in good ways, so much so that, you know, I don’t think we are striving to get the seat at the table. I think the responsibilities have been raised so high where L&D is the table. I think, you know, we are a new center of gravity. I’m not saying we’re the be all, end all, but there’s so much, and I think necessary responsible scrutiny of learning, particularly related to cultural aspects, because everyone is responsible to contribute, to share. You learn. What was the old statement? Teaching is learning twice, and so everyone has that responsibility to kind of unleash their own expertise and help lift each other without getting called all kind of soft and corporate mushy. But that’s just the basic truth.  The other thing is this whole kind of transformation piece, you know, whether we are the table, whether we are a new center of gravity, we have that responsibility. And my concern is, as I speak with a lot of other learning leaders and so forth, and just kind of get a general temperament of the economic play of learning. In other words, how much money support are you actually receiving? It is tough, but now’s the time actually where some companies are super smart because they are enabling the learning function to find new mechanisms and ways to actually show the return because the learning analytics, learning insights, learning reporting and dashboards, back to the executives. It’s been fairly immature now, whether it’s AI or not, but now it’s actually getting a lot more sophisticated and correct. The evidence is finally there, and I think a lot of companies get that where they’re basically saying, wow, I’ve always believed in the training team, the training function, and training our team, our employees, but I’ve never really figured out a way for those folks to actually show the return, right? I don’t mind giving them the money because I know I can tell. But now there’s, like, really justified, evidence-based ways to show, yeah, this program that costs $75,000 I know now that I can take that the learner data from the learning management system, correlate that into the ERP or CRM system, extract the data related to learning that did have an impact on sellers being able to sell faster or bigger or whatever, and use that as a corollary, so to speak, it’s not real causation, but use that as evidence, maybe with a small e back to the people managing your budgets. And that’s the cool part, but that’s what I’m saying beforehand. It’s time that I think collectively, we’ve got to step up. And part of that stepping up means that we have the right evidence of efficacy, that the stuff we’re building is actually working. Ross: I think that is very valuable. And you want to support appropriate investment in learning. Absolutely, though, is I actually, when was it? It was 27 years ago something, I did a certificate in workplace training, and I was getting very frustrated because the whole course was saying, okay, these, this is what your outcomes from the learning, and this is how you and then you set all your low objectives to choose our outcome. I was saying, well, but what happens? Why don’t you want to get beyond what you’ve defined as the outcome to have open-ended learning, as opposed to having a specific bar and getting to that bar? And I think today we, again, this, if we have this idea of a person in a box, as in, that’s the organization in the past. This is the person. This is the job function? This is all the definitions of this thing. That person fits in that box, and they’ve got all this learning to be able to do that. So now we’ve got to create people that can respond on the fly to a different situation where the world is different, and where we can not just reach bars, but go beyond those two people to hunger to learn and to create and to innovate. And so I think we absolutely want to have show ROI to justify investment in learning, but we also need to have an open-endedness to it to we’re going into areas where we don’t even know what the metrics are because we don’t know what we’re creating. I mean, this obviously creates requires leaders who are prepared to go there. But I think part of I have similar conversations with technology functions, where the sense of you have to, as if you’re a CIO, you have to go to the board and the executive team, and you have to say, this is why you should be investing in technology. It’s partly because we will, we are part of the transformation of the organization. We’re not just a function to be subsumed. And same thing with learning. It’s like saying learning has to be part of what the organization is becoming. And so that goes beyond being able to anything you can necessarily quantify, to quantify completely. At this point, I think takes us a little bit to the AI piece. I’d love to get your thoughts on that. So you’ve kind of kept on saying, let’s keep that out of the conversation for now, let’s bring that in, because you’ve been heavily involved in that, and I’d love to hear your all right. Big Picture thoughts start. We can dig in from there. What’s the role of AI in organizational learning? Marc Steven: That’s a big question. Yeah, it’s a big question, and it’s an important question, but it’s also a question that’s kind of flavored with, I think, some incredible levels of ambiguity and vagueness for lack of better words. So maybe a good way to kind of frame that was actually circling back to your prior comment about people in a box to a certain degree, right? I mean, you have the job architecture of a role, right? Here’s the things that the guy or gal or the individuals got to do. I get it. It’s really interesting in the sense of this whole kind of metaphorical concept of a box, of a container, is super fascinating to me. And there’s an AI play here I’ll share in a second in the way I’m gonna kind of think about this as an old instructional designer fella. We’ve always been trained, conditioned, whatever, to build courses that could be awesome. But in general, the training event is still bound by a duration. Here’s your two-hour class, here’s your two-day event, here’s your 20-week certification program. I don’t know, but it’s always in. It’s always contained by duration. It’s always contained by fixed learning objectives. It’s typically contained by a fixed set of use cases. In other words, by the time you exit this training, you’ll be able to do XYZ things a lot better. This whole kind of container thing is just really, it boggles me, and maybe I’m thinking too much about this.  There’s a great movie, one of my favorite movies, called Sideways. It’s a couple guys that go to wine country in California, and they’re drinking a lot of wine, and they’re meeting some people. There’s one great scene where one of these actors, these characters, is talking to someone else, and this other person, he’s trying to figure out, where did you? Why did you get so enticed and in love with wine? What she says is just really, really remarkable to me. What she basically says is, you know why she loves wine is because she always felt that when you open up a bottle of wine, you’re opening up something that’s living, that’s alive. When you open up a wine and really think about it from that perspective, you think about the people that were actually tending the grapes when they were gathered. You might be thinking about what was the humidity? What was the sunshine? So I’m going to come back to the whole kind of container thing, but in AI, I just think that’s a really interesting way to kind of look at learning now, in the sense of what has been in that container in truth, has been alive. It’s an organic, living thing that becomes alive once the interaction with the learner occurs. What you want to do is think about extending the learning outside of the box, outside of the container. So getting back to your question, Ross, about the intersection, so to speak, of AI and learning, that’s one way I kind of think about it sometimes, is how can we recreate the actual learning event where it’s constantly alive, where if you take a course, the course is something that is everlasting, is prolonged, and it’s also unique to your amount of time that you might have, the context of which you’re working, blah, blah, blah. I’m not going to talk about learning styles. I think it’s fascinating because if AI, particularly with what large language models are doing now, and the whole kind of agentic AI piece where these agents can go off and do multiple tasks against multiple use cases, but against multiple systems, and then you got the RAG piece here too. That’s really interesting now, right? Because if somebody wants to learn something on XYZ subject, and let’s just say that you work for a company that has 50,000 people, and let’s just say that, I don’t know, half of those folks probably know something related to the course that you’re taking. But it’s not in the learning management system; it’s in a whole bunch of Excel spreadsheets, or it’s in your Outlook emails, it’s in the terabytes of stuff. Well, if AI and its siblings, GPTs, LLMs, agents, whatever, if they can now tap into that, that missing information on an ongoing dynamic basis to feed that back to Ross or to Marc or whomever, you’re literally tapping into this living organism of information.  AI is becoming smart enough to shift that living, breathing information into instruction to give it shape, to give it structure, to give it its own kind of appeal, and then make it, tailor it, and personalize it and adapt it for the individual. So if that occurs, I don’t know if it’s 2024 or 2034, but if that occurs, this whole kind of concept of really thinking about learning where the true benefits are organic, it’s alive, and it’s constantly being produced in the beautiful sunshine of everyone else’s unleashed expertise. That’s a really, really fun kind of dream state to think about because there’s a significant AI play. What it really does, it changes the whole, frankly, the whole philosophy of how corporate learning is supposed to operate. If we see some companies kind of heading into that direction or a correlation, which is probably going to happen, that’s going to be super, super fascinating. Ross: Yeah, that’s fantastic. It goes back to the Aridigos and his living company metaphor in the sense of it is self-feeding, that’s autopoiesis. This definition of life is you feed on itself in a way. I think that’s a beautiful evocation of organization as alive because it is dynamic. It’s taking its own essence and using it to feed itself. Is there anything in the public domain around organizations that aren’t truly on this path? Because, I mean, that’s compelling what you describe. But I’m sure that there’s plenty of organizations that have, you know, you’re not the only person to think of something like this. But are there any companies that are showing the way on this enable to be able to put this into place? Marc Steven: Definitely, it’s interesting. I’m trying to finish a book on AI, but I’m not talking about AI. Frankly, I’m talking about the importance of change management. But my slant is, is there any other greater function or team that can drive the accelerated adoption of AI in your company other than the L&D team? The clickbaity title that I think about is, is L&D the new R&D? Is learning and development the new research and development? That’s just one kind of crazy perspective. The way I’m kind of thinking about that is when I’ve been interviewing some folks for a piece that I’m doing, these are CLOs of major, major, major companies. With that change management framing, there are so many incredibly awesome stories I’m hearing related to how to really drive adoption, and what is L&D’s role. To your question, related to is anybody doing it? Some of these companies that really, really get it, they totally see the value of human-driven change management. By that, I mean the more successful deployments that at least I’ve come across is one where you’re not thinking about, well, identify those 24 use cases that have a higher probability of AI doing X, Y and Z. The smarter companies, I think, my own take, no, they don’t even ask that question. They kind of go a level higher. They basically say, can we put together a dedicated, I didn’t say senior, a dedicated group, cross-functional group of folks to figure out question number one.  Question number one is, what the heck do we do with this? They’re not talking about use cases. They’re not talking about the technology, so to speak. They were just trying to figure out, okay, what’s the plan here, people? That’s an interesting way to kind of do this. You’re not hiring Accenture, you’re not hiring whatever to bring in the bazillions of billable hours to kind of figure that out. They want a grassroots way of figuring out how to deal with AI, what does it mean to us? Good, bad, right or wrong? That’s one thing that I see a lot of companies are doing. They’re really taking a much more forward, people-first perspective of figuring out the ball game, and then if the ball game says, hey, we understand that, thinking about risk, thinking about responsibility, whatever. Yeah, here’s the three places we got to start. I think that’s just a really, really smart way to do it. On the vendor side, there’s a lot of really, really cool vendors now thinking about enabling companies for the betterment of AI. The ones that I think are really sharp, they’re getting it. They’re not like the really big, content course providers that say, hey, this is AI 101, this is the, here’s your list of acronyms. We’re going to talk through every single dang acronym and blah, blah, blah. That’s necessary. That’s great stuff. Some of the vendors that are really cool are the ones that are not really focusing on those basics, so to speak. They’ll go into an enterprise, name your company anywhere, and they’ll say, what are your concerns? What are your needs? What are your requirements related to this, this AI thing? Have you, oh, customer identified the areas where you think AI can best benefit yourselves and the company? Then they shape the instruction to blend in those clients’ needs very specifically. They literally customize the instruction to do that. That way, when the learner goes through the learning, they’re talking about the stuff they really focus on, on a day-in and day-out basis. It’s not this generic stuff off the shelf. The other thing that they’re doing is they’re actually embedding, no surprise, but they’re embedding agents, LLM processes, proper prompting into the instruction itself. If you want to know Gemini, then use Gemini to learn Gemini. They really, really go deep. That blending of it’s a different instructional design method as well, but that kind of blending is really, really super smart, just on the companies, the corporates. Ross: Is there any companies you can name? Would you say these are companies doing a good job? Marc Steven: I mean, yeah, I mean, so some of the folks I’ve interviewed and some companies I’m aware of, I think what DHL is doing is just remarkable because what they’re doing is, I was just using my prior example. Let’s have a people-first approach about what do we do about this? It’s kind of a given, you kind of know there’s an efficiencies play, there’s a speed play, there’s a, you know, building stuff more efficiently, play, whatever. But I think DHL is really smart about looking at it from that grassroots perspective, but still at the same time having this balanced approach, again, related to responsibility and risk. I think what Ernst and Young is doing, EY, they’re really, really super sharp too because they’re focusing a lot on, making sure that we’re providing the basics and following, I think, the basic corporate capability guidance of give them the one-on-one training, make sure they’re tested, make sure that people have the opportunity to become certified in the right ways. Maybe the higher level of certification can affect their level hours, which affects their compensation, yada yada yada. So I think that’s really, really great. What’s really cool is, what they’re also doing is, they’ve created kind of a, it’s kind of a Slack, it is Slack, but kind of a Slack collection point for people to contribute what they think are just phenomenal prompts. They’re creating, it’s not gamification, but they’re creating a mechanism because Slack is very social, right? People can now chime in to say, wow, that prompt was so great. If I just changed this and added three adjectives, this is my result, and then somebody else can chime and go, whoa. That’s great. What’s interesting is, you’re building this bottoms-up collection of super valuable prompts without the corporate telling you to do it. Again, it’s really kind of telling into the culture of the company, which I think is just fantastic as well. Then obviously there’s the big, big provider players, you know, the Microsofts, Salesforce.com, ServiceNow. What ServiceNow is doing is just phenomenal. I’m really glad to see this. It’s just a matter of keeping track of what’s truly working. It’s not all about data. Data is there to inform the ultimately, it’s the combination of AI’s data provisioning and a human being, the Johnny and Jane, the Ross and the Marc saying, well, yeah, but which I think is, again, super important. Ross: So Taranda, you’re writing a book you mentioned in passing. Can you tell us anything about that? What’s the thesis, and is there a title and launch date? Marc Steven: The book is, what I was highlighting beforehand, is really thinking about change management, but what is the learning functions, role of driving, more accelerated adoption of AI. That’s why I’ve been interviewing a whole bunch of these folks. I want to give a perspective of what’s really happening, rather than this observational, theoretical stuff. I’m interviewing a ton of folks, and my dilemma right now, to be honest with you, maybe you can help me, Ross, because I know you’re a phenomenal author. I don’t know if this is going to be a collection of case studies versus some sort of blue book or a playbook is a better description. I’m still on the fence, and maybe in good ways that should be maybe a combination. How do you take some of these really cool things that people are doing, the quote unquote case studies or whatever, but wait a second, is there a way to kind of operationalize that in a very sensible way that might align to certain processes or procedures you might already have but has maybe a different spin, thinking about this socially minded intelligence, you have to work with an agent to make sure that you’re following the guidelines of the playbook correctly. I don’t know. Maybe the agent is the coach of all your plays. Maybe that’s not the best, well, maybe it is a good example. Depends on what the person’s coaching, but yeah, that’s the book. I don’t know, I don’t have a good title. It could be the real campy, L&D is the new R&D. I get feedback from friends. I get feedback from friends that that is a really great way to look at it because there’s so much truth in that. Then I get other buddies and say, oh, geez, Marc, that’s the worst thing I’ve ever heard. Ross: You do some market testing, but I mean very much looking forward to reading it because this is about, it’s frustrating for me because I’m sitting on the outside because I want to know what’s the best people doing and, and I see bits and pieces from my clients and various other work, but I think sharing as you are, obviously uncovering the real best of what’s happening, I think is going to be a real boon. So thank you so much for your work and your time and your insights. Today, Marc has been a real treat. Marc Steven: Now that the treat, Ross has been mine, I really appreciate the invitation, and hopefully, this has been helpful to our audience. Great. The post Marc Ramos on organic learning, personalized education, L&D as the new R&D, and top learning case studies (AC Ep66) appeared first on Humans + AI.
undefined
Oct 9, 2024 • 0sec

Alex Richter on Computer Supported Collaborative Work, webs of participation, and human-AI collaboration in the metaverse (AC Ep65)

“Trust is a key ingredient when you look into Explainable AI; it’s about how can we build trust towards these systems.” – Alex Richter About Alex Richter Alexander Richter is Professor of Information Systems at Victoria University of Wellington in New Zealand. where he has also been Inaugural Director of the Executive MBA and Associate Dean, where he specializes in the transformative impact of IT in the workplace. He has published more than 100 articles in leading academic journals and conferences, with several best paper awards and been covered by many major news outlets. He also has extensive industry experience and has led over 25 projects funded by companies and organizations, including the European Union.. Website: www.alexanderrichter.name University Website: people.wgtn.ac.nz/alex.richter LinkedIn: Alexander Richter Twitter: @arimue Publications (Google Scholar): Alexander Richter Publications (ResearchGate): Alexander Richter What you will learn The significance of CSCW in human-centered collaboration Trust as a cornerstone of explainable AI Emerging technologies enhancing human-AI teamwork The role of context in sense-making with AI tools Shifts in organizational structures due to AI integration The importance of inclusivity in AI applications Foresight and future thinking in the age of AI Episode Resources CSCW (Computer Supported Cooperative Work) AI (Artificial Intelligence) Explainable AI Web 2.0 Enterprise 2.0 Social software Human-AI teams Generative AI Ajax Meta (as in the company) Google Transcript Ross: Alex, it’s wonderful to have you on the show. Alex Richter: Thank you for having me, Ross. Ross: Your work is fascinating, and many strands of it are extremely relevant to amplifying cognition. So let’s dive in and see where we can get to. You were just saying to me a moment ago that the origins of a lot of your work are around what you call CSCW. So, what is that, and how has that provided a framework for your work? Alex: Yeah, CSCW (Computer-Supported Cooperative Work) or Computer-Supported Collaborative Work is the idea that we put the human at the center and want to understand how they work. And now, for quite a few years, we’ve had more and more emerging technologies that can support this collaboration. The idea of this research field is that we work together in an interdisciplinary way to support human collaboration, and now more and more, human-AI collaboration. What fascinates me about this is that you need to understand the IT part of it—what is possible—but more importantly, you need to understand humans from a psychological perspective, understanding individuals, but also how teams and groups of people work. So, from a sociological perspective, and then often embedded in organizational practices or communities. There are a lot of different perspectives that need to be shared to design meaningful collaboration. Ross: As you say, the technologies and potential are changing now, but taking a broader look at Computer-Supported Collaborative Work, are there any principles or foundations around this body of work that inform the studies that have been done? Alex: I think there are a couple of recurring themes. There are actually different traditions. For my own history, I’m part of the European tradition. When I was in Munich, Zurich, and especially Copenhagen, there’s a strong Scandinavian tradition. For me, the term “community” is quite important—what it means to be part of a community. That fits nicely with what I experienced during my time there with the culture. Another term that always comes back to me in various forms is “awareness.” The idea is that if we want to work successfully, we need to have a good understanding of what others are doing, maybe even what others think or feel. That leads to other important ingredients of successful collaboration, like trust, which is currently a very important topic in human-AI collaboration. A lot of what I see is that people are concerned about trust—how can we build it? For me, that’s a key ingredient. When you look into Explainable AI, it’s about how we can build trust toward these systems. But ultimately, originally, trust between humans is obviously very important. Being aware of what others are doing and why they’re doing it is always crucial. Ross: You were talking about Computer-Supported Collaborative Work, and I suppose that initial framing was around collaborative work between humans. Have you seen any technologies that support greater trust or awareness between humans, in order to facilitate trust and collaboration through computers? Alex: In my own research, an important upgrade was when we had Web 2.0 or social software, or social media—there are many terms for it, like Enterprise 2.0—but basically, these awareness streams and the simplicity of the platforms made it easy to post and share. I think there were great concepts before, but finally, thanks to Ajax and other technologies, these ideas were implemented. The technology wasn’t brand new, but it was finally accessible, and people could use the internet and participate. That got me excited to do a PhD and to share how this could facilitate better collaboration. Ross: I love that phrase, “web of participation.” Your work came to my attention because you and some of your students or colleagues did a literature review on human-AI teams and some of the success factors, challenges, and use cases. What stood out to you in that paper regarding current research in this space? Alex: I would say there’s a general trend in academia where more and more research is being published, and speed is very important. AI excites so many people, and many colleagues are working on it. One of the challenges is getting an overview of what has already been done. For a PhD student, especially the first author of the paper you mentioned—Chloe—it was important for her to understand the existing body of work. Her idea is to understand the emergence of human-AI teams and how AI is taking on some roles and responsibilities previously held by humans. This changes how we work and communicate, and it ultimately changes organizational structures, even if not formally right away. For example, communication structures are already changing. This isn’t surprising—it has happened before with social software and social media. But I find it interesting that there isn’t much research on the changes in these structures, likely due to the difficulty in accessing data. There’s a lot of research on the effects of AI—both positive and negative. I don’t have one specific study in mind, but what’s key is to be aware of the many different angles to look at. That was the purpose of the literature review—to get a broader, higher-level perspective of what’s happening and the emerging trends. Ross: Absolutely. We’ll share the link to that in the show notes. With that broader view, are there any particularly exciting directions we need to explore to advance human-AI teams? Alex: One pattern I noticed from my previous research in social media is that when people look at these tools, it’s not immediately clear how to use them. We call these “use cases,” but essentially, it’s about what you can do with the tool. Depending on what you do, you can assess the benefits, risks, and so on. What excites me is that it depends heavily on context—my experience, my organization, my department, and my way of working. A lot of sense-making is going on at an individual level: how can I use generative AI to be more productive or efficient, while maintaining balance and doing what feels right? These use cases are exciting because, when we conducted interviews, we saw a diverse range of perspectives based on the department people worked in and the use cases they were familiar with. Some heard about using AI for ideation and thought, “That’s exciting! Let’s try that.” Others heard about using chatbots for customer interactions, but they heard negative examples and were worried. They said, “We should be careful.” There are obviously concerns about ethics and privacy as well, but it really depends on the context. Ultimately, the use cases help us decide what is good for us and what to prioritize. Ross: So there’s kind of a discovery process, where at an organizational level, you can identify use cases to instruct people on and deploy, with safeguards in place. But it’s also a sense-making process at the individual level, where people are figuring out how to use these tools. Everyone talks about training and generative AI, but maybe it’s more about facilitating the sense-making process to discover how these tools can be used individually. Alex: Absolutely. You have to experience it for yourself and learn. It’s good to be aware of the risks, but you need to get involved. Otherwise, it’s hard to discuss it theoretically. It’s like it was before with social media—if you had a text input field, you could post something. For a long time, in our research domain, we tried to make sense of it based on functions, but especially with AI, the functions are not immediately clear. That’s why we invest so much effort into transparency—making it clearer what happens in the background, what you can do with the tool, and where the limitations lie. Ross: So, we’re talking about sense-making in terms of how we use these tools. But if we’re talking about amplifying cognition, can we use generative AI or other tools to assist our own sense-making across any domain? How can we support better human sense-making? Alex: I think one point is that generative AI obviously can create a lot for us—that’s where the term comes from—but it’s also a very easy-to-use interface for accessing a lot of what’s going on. From my personal experience with ChatGPT and others like Google Gemini, it provides a very easy-to-use way of accessing all this knowledge. So, when you think about the definition of generative AI, there may be a smaller definition—like it’s just for generating content—but for me, the more impactful effect is that you can use it to access many other AI tools and break down the knowledge in ways that are easier to use and consume. Ross: I think there are some people who are very skilled at that—they’re using generative AI very well to assist in their sense-making or learning. Others are probably unsure where to start, and there are probably tools that could facilitate that. Are there any approaches that can help people be better at sense-making, either generally or in a way that’s relevant to a particular learning style? Alex: I’m not sure if this is where you’re going, but when you said that, I thought about the fact that we all have individual learning styles. What I find interesting about generative AI is that it’s quite inclusive. I had feedback from Executive MBA students who, for example, are neurodivergent learners. They told me it’s helpful for them because they can control the speed of how they consume the information. Sometimes, they go through it quickly because they’re really into it, and other times, they need it broken down. So, you’re in the driver’s seat. You decide how to consume the information—whether that’s in terms of speed or complexity. I think that’s a very important aspect of learning and sense-making in general. So yeah, inclusivity is definitely a dimension worth considering. Ross: Well, to your point around consuming information, I like the term “assimilating” information because it suggests the information is becoming part of your knowledge structures. So, we’ve talked about individual sense-making. Is there a way we can frame this more broadly, to help facilitate organizational sense-making? Alex: Yeah, we’re working with several companies, and I have one specific example in mind where we tried to support the organizational sense-making process by first creating awareness. When we talk about AI, we might be discussing different things. The use cases can help us reach common ground. By the way, “common ground” is another key CSCW concept. For successful collaboration, you need to look in the same direction, right? And you need to know what that direction is. Defining a set of use cases can ensure you’re discussing the same types of AI usage. You can then discuss the specific benefits as an organization, and use cases help you prioritize. Of course, you also need to be aware of the risks. One insight I got from a focus group during the implementation of generative AI in this company was that they had some low-risk use cases, but the more exciting ones were higher-risk. They agreed to pursue both. They wanted to start with some low-key use cases they knew would go smoothly in terms of privacy and ethics, but they also wanted to push boundaries with higher-risk use cases while creating awareness of the risks. They got top-level support and made sure everyone, including the workers’ council, was on board. So, that’s one way of using use cases—to balance higher-risk but potentially more beneficial options with safer, low-risk use cases. Ross: Sense-making relates very much to foresight. Company leadership needs to make strategic decisions in a fast-changing world, and they need to make sense of their business environment—what are the shifts, what’s the competition, what are the opportunities? Foresight helps frame where you see things going. Effective foresight is fueled by sense-making. Does any of your work address how to facilitate useful foresight, whether individually or organizationally? Alex: Yes. Especially with my wife, Shahper—who is also an academic—and a few other colleagues, we thought, early last year when ChatGPT had a big impact, “Why was this such a surprise?” AI is not a new topic. When you look around, obviously now it’s more of a hype, but it’s been around for a long time. Some of the concepts we’re still discussing now come from the 1950s and 60s. So, why was it so surprising? I think it’s because the way we do research is mainly driven by trying to understand what has happened. There’s a good reason for that because we can learn a lot from the past. But if ChatGPT taught us one thing, it’s that we also need to look more into the future. In our domain—whether it’s CSCW or Information Systems Research—we have the tools to do that. Foresight or future thinking is about anticipating—not necessarily predicting—but preparing for different scenarios. That’s exciting, and I hope we’ll see more of this type of research. For example, we presented a study at a conference in June where we looked at human-AI collaboration in the metaverse, whatever that is. It’s not just sitting in front of a screen with ChatGPT but actually having avatars talking to us, interacting with us, and at some point, having virtual teams where it’s no longer a big difference whether I’m communicating with a human or an AI-based avatar. Ross: One of the first thoughts that comes to mind is if we have a metaverse where a team has some humans represented by avatars and some AI avatars, is it better for the AI avatars to be as human-like as possible, or would it be better for them to have distinct visual characteristics or communication styles that are not human-like? Alex: That’s a great question. One of my PhD students, Bayu, thought a bit about this. His topic is actually visibility in hybrid work, and he found that avatars will play a bigger role. Avatars have been around for a while, depending on how you define them. In a recent study we presented, we tried to understand how much fidelity you need for an avatar. Again, it depends on the use case—sorry to be repetitive—but understanding the context is essential. We’re extending this toward AI avatars. There’s a recent study from colleagues at the University of Sydney, led by Mike Seymour, and they found that the more human-like an AI avatar is, the more trustworthy it appears to people. That seems intuitive, but it contradicts earlier studies that suggested people don’t like AI that is too human-like because it feels like it’s imitating us. One term used in this context is the “uncanny valley.” But Mike Seymour’s study is worth watching. They present a paper using an avatar that is so human-like that people commented on how relatable it felt. As technology advances, and as we as humans adjust our perceptions, we may become more comfortable with human-like avatars. But again, this varies depending on the context. Do we want AI to make decisions about bank loans, or healthcare, for example? We’ll see many more studies in this area, and as perceptions change, so will our ideas about what we trust and how transparent AI needs to be. Already, some chatbots are so human-like that it’s not immediately clear whether you’re interacting with a human or a bot. Ross: A very interesting space. To wrap up, what excites you the most right now? Where will you focus your energy in exploring the possibilities we’ve been discussing? Alex: What excites me most right now is seeing how organizations—companies, governmental organizations, and communities—are making sense of what’s happening and trying to find their way. What I like is that there isn’t a one-size-fits-all approach, especially not in this context. Here in New Zealand, I love discussing cultural values with my Executive MBA students and how our society, which is very aware of values and community, can embrace AI differently from other cultures. Again, it comes back to context—cultural context, in this case. It’s exciting to see diverse case studies where sometimes we get counterintuitive or contradictory effects depending on the organization. We won’t be able to address biases in AI as long as we don’t address biases in society. How can we expect AI to get things right if we as a society don’t get things right? This ties back to the very beginning of our conversation about CSCW. It’s important for CSCW to also include sociologists to understand society, how we develop, and how this shapes technology. Maybe, in the long run, technology will also contribute to shaping society. That will keep me busy, I think. Ross: Absolutely. As you say, this is all about humanity—technology is just an aid. Thank you so much for your time and insights. I’m fascinated by your work and will definitely keep following it. Alex: Thank you very much, Ross. Thanks for having me. The post Alex Richter on Computer Supported Collaborative Work, webs of participation, and human-AI collaboration in the metaverse (AC Ep65) appeared first on Humans + AI.
undefined
Oct 3, 2024 • 0sec

Jack Uldrich on the unlearning, regenerative futures, nurturing creativity, and being good ancestors (AC Ep64)

“Each of us is creative in our own way. We have the ability to create our own future, but we must first understand that we are creative.” – Jack Uldrich About Jack Uldrich Jack Uldrich is a leading futurist, author, and speaker who helps organizations gain the critical foresight they need to create a successful future. His work is based on the principles of unlearning as a strategy to survive and thrive in an era of unparalleled change. He is the author of 9 books including Business As Unusual. Website: www.jackuldrich.com LinkedIn: Jack Uldrich Facebook: Jumpthecurve YouTube: @ChiefUnlearner X: @jumpthecurve Books: Green Investing: A Guide to Making Money through Environment Friendly Stocks Foresight 20/20: A Futurist Explores the Trends Transforming Tomorrow Soldier, Statesman, Peacemaker: Leadership Lessons from George C. Marshall The Next Big Thing Is Really Small: How Nanotechnology Will Change the Future of Your Business Jump the Curve: 50 Essential Strategies to Help Your Company Stay Ahead of Emerging Technologies Into the Unknown: Leadership Lessons from Lewis & Clark’s Daring Westward Expedition Business As Unusual: A Futurist’s Unorthodox, Unconventional, and Uncomfortable Guide to Doing Business A Smarter Farm: How Artificial Intelligence is Revolutionizing the Future of Agriculture Higher Unlearning: 39 Post-Requisite Lessons for Achieving a Successful Future What you will learn Embracing humility in future thinking The power of silence and meditation Navigating low-probability, high-impact events Why asking the right questions matters The role of AI in shaping human history Building resilience for uncertain futures Unleashing creativity to create a better world Episode Resources OpenAI ChatGPT Claude Pi Anthropic Cascadian Subduction Zone The New Yorker Artificial Intelligence (AI) Regenerative future People Ray Kurzweil Nassim Taleb Suleiman Harari Jonas Salk Film The Black Swan Books The Singularity Is Near by Ray Kurzweil Sapiens by Yuval Noah Harari Homo Deus by Yuval Noah Harari Transcript Ross: Jack, it is awesome to have you on the show. Jack Uldrich: It’s a pleasure to be here. Ross: You’ve been thinking about the future and helping others think about the future for a very long time now. So what’s the foundation of how you do that? Jack: The foundation, I would say, is silence. First, it’s meditation. I actually try to get to the thought beyond the thought. And what I mean here is, I’m always looking for insights, but in order to do that, I first have to free myself of all my old habits, assumptions, and other ways of thinking. And so on a daily basis, I do try to meditate on that, and then I look for insights. And I want to make this clear, I’m not looking for conclusions. As soon as you’ve locked yourself into a conclusion or what you think the future is going to be, you’re going to get yourself in trouble. But insights, I do think we can come to insight. So I’ll just sort of step back and say that’s where I start — silence, contemplation, meditation, Ross: That is absolutely awesome. I think this goes this idea of fluid thinking, as in, there’s a lot of people whose thinking is rather rigid, as in, think of a particular way, and ask a year or two or 10 later, and they’re thinking the same way, whereas that doesn’t quite work when the world is changing around you. Jack: No, that’s right. And so the next thing I would say is, and I hope to sort of disabuse people of what they think futurists do. I’m quite clear in saying, first, I definitely don’t try to predict the future, but nor do I say I have the answer to the future. But having said that, that doesn’t absolve any of us of a more important responsibility, and if none of us have the answer to the future, we have to be sure we’re asking the best possible questions of the future.  Frequently, when I see why businesses or organizations miss the future or why they became bankrupt, it’s not because they weren’t bright and intelligent, nor did they have capable C-staff, but they’re primarily answering the wrong question. They just didn’t understand either how technological change had shifted their business, their business model, their customer expectations, or they didn’t understand what their competitors were up to. So I spent a lot of time trying to make sure I’m asking the best possible questions of the future, while at the same time always having humility to the idea that there’s got to be a question I’m missing. And so I fall back on this idea of humility quite a bit, because it’s not what we know that gets us in trouble. It’s what we think we know, that we just don’t. And so we have to have humility as we approach the future. Ross. Yes, yes. And that’s something that we don’t see quite enough of in the world when we look around.  Jack: No, really. You don’t. I wish there could be a course on that, or just trying to help people. How do you actually embrace humility in a real way? I mean the Greek root of the word is hummus means close to the earth and so again, this sort of goes back to silence, but I think that I spend a lot of time in nature in order to do better thinking. I actually try to get away from my smartphone, the laptops, and all of this other stuff.  I love your background. And I think one of the other things is just getting out under the night stars. Unfortunately, 80% of the world’s population, due to light pollution and air pollution, can’t actually see the night stars, which I think is troubling, but it’s this idea, if you can get out onto the night stars, it reminds us one of how little we actually know, but just how much else there is out there. And I think it’s that sort of deep humility that can keep me asking questions and probing the future, and should keep all of us probing the future, Ross: That evokes, for me, this idea of observing over the very, very long time I’ve been doing foresight and futures. There’s a cyclicality to people’s openness to thinking about the future, and one of them is the big shocks. So we have the global financial crisis, or covid or we have the Asia crisis in the late 90s, or a whole lot of some elections last couple of decades, for example, where all of the. The people who were supposed to know what was going to happen didn’t. And hopefully, when, I think to a fair degree, we started realizing, well, all right, we need to be thinking about the future in a more questioning way, rather than thinking we do know the answers because what they thought of the answers didn’t turn out to be right. So, we can be educated by our falls. Jack: Right. I think I’m sure you’ve read it, but one of the most seminal books for me in the last 1213 years was Nassim Taleb, The Black Swan, the high impact of low probability events. And that actually shifted my thinking. It was a blind spot I had as a future sort of. Of course, I was aware that these random events happened, but this idea of how important they are to understanding the future, and then to say, how do we think about some of these things.  And so I’ll just give you an example that I mean for years before, I was talking about the possibility of a pandemic. And it’s not to say I predicted the pandemic. I didn’t, but to say I did write about it and say, here’s how I think. And in my case, my thinking only went as far as the global supply chain, like its impact on e-commerce and the future of work. Like I just completely missed that until we were living it. And so I think getting back to this idea of the Black Swan, I think that there are so many of them, like the possibility of a solar storm or and what that would mean to the electrical grid, what it would mean to our reliance on all of our electronic devices, what it might mean to the future of autonomous cars, if that happens.  And so as I think about the future, I try to incorporate this understanding that there might be this alternative future. The future is going to unfold in multiple directions at the same time, if some of these rare, low probability events happen, the world shifts. And as leaders and as futurists, we have to prepare people for that possibility, but then we have to think through what else might be some of these low probability, high impact events. And so could I just turn the tables on you, and to ask you, as you as a futurist, how do you sort of think about those events, and how do you try and prepare your clients?  Ross: It’s a great question. I mean, one of the ones which I think about, which is the California Earthquake. It’s like one of the top ones in that. Well, nobody thinks about it much particularly, except for the insurers who don’t give any insurance away. But yeah, that’s actually a reasonable probability, if you look over a decent time frame. But again, devastating.  And this comes back to scenarios. So, my core discipline for structured foresight work is scenario planning. We can’t predict, so we need to look at a number of different scenarios. But at any comprehensive scenario planning project, you have your scenarios, and then you add into that the unlikely but high-impact events, which could be natural phenomenon, could be pandemics, could be external, cosmic events, a whole lot array of or even technologies which have impact far beyond what we could imagine, nuclear fission, or whole array of close to unimaginable things.  And it is challenging for a leader, because you can’t plan for something which is very low likelihood and where you don’t even know the shape of it. And so a lot of it is being able to say with the scenarios you have, and being able to point to some of these far more far flung possibilities, is to build responsiveness. That’s always been, you know, I think that the real function of working with leaders in foresight and futures is to be able to build your ability to respond to the ultimately the unanticipated strategies for what it is you can anticipate, but as you say, you try to look for all the questions you can, but you’re always missing some questions, and so you need to be able to build that ability. To respond very flexibly and promptly and with openness to recognizing things when they happen, rather than the denial or slow to being too slow to respond. Jack: I would agree, and I would say along those lines, resilience is something I’m speaking more and more to my audiences about, and I just want to use that idea of an earthquake out in California as an example. There was a wonderful article in The New Yorker years ago called The Next Big One, and it talks about the Cascadia subduction zone, this massive earthquake that might hit from north of Vancouver all the way past Seattle and down Portland. And it’s not just the earthquake, it’s the resulting tsunami that comes and it’s apparently 100 years over now that it could happen tomorrow. It might not happen for another 100 years, like we just don’t know, but this idea that it could happen is the insurance companies, I do think, are aware of it, but most businesses and organizations aren’t. And again, you can’t necessarily dictate everything you do based on the possibility that’s happening, but you do have to have a small element of insurance. What sort of resilience do you need to build? Like if you live out there, you’d better have something in the trunk of your car that can make sure you can survive for seven days, like I would say at a minimum, as individuals, that’s what you should do. But businesses have to think at this longer term, but it’s really challenging in today’s environment where short term profits drive most corporations that the goal isn’t necessarily short term success, it’s long term survivability, and so that this notion of long term survivability has to factor into people in organizations and leaders thinking, and I don’t think it does enough. And so I’m spending more time trying to talk about resilience. I can’t tell you I’m getting anywhere with the corporations and organizations I’m working with, but I’m trying to get them to understand the importance of building resilience, to just withstand some of these shocks if they should hit us. Ross: Well, I also think it’s important to shift to a positive transformational frame. So I’m currently preparing for a keynote in sable career, which is essentially around sustainability. But in a way, sustainability is ground stakes, as in sustained as you can continue. And if you can’t sustain your business or the economy or the planet, then that’s not very good. So that’s got to be the ground stakes. It’s sustainable, but you want to go beyond that to be able to regenerate, to improve, to grow. I think to a point there’s an analog there with resilience, where resilience is able to come back to where you were. But in fact, you want to positively transform yourself, not just to be resilient to shocks, but to be able to the antifragile type concept, where you say, well, in fact, the shock makes us stronger, and how do we go beyond sustainability or resilience to regenerational transformation? Jack: No, I really like that. And I particularly like the word, ‘regenerative’. I mean that’s to me, sustainable is a word that’s overused and has kind of lost its luster. And as one person said to me, like, if they said, ‘Oh, your marriage is sustainable,’ no one would sort of be happy with a sustainable marriage like but, but we want a regenerative future, one where we’re constantly growing and improving or just doing different things. And so I like that idea of a regenerative future, and I will tell you, as a futurist, I do, in fact, see individuals and organizations beginning to take seriously this idea of moving beyond sustainability and moving towards a regenerative future. And as a futurist, this is where I feel that’s the future I want to help create. And so I’m increasingly open with my clients. Is to say, look, I’m not here as some passive, neutral observer of the future. There is, in fact, a better future out there, and I want to help play a role in that, and that’s why I’m here talking to you and your organizations. Let’s figure out how we can roll up our sleeves and create this better, more beautiful, bolder, regenerative future. Can’t say it’s necessarily catching on with all clients, especially I do most of my work here in the US, but it’s a it’s a growing trend, and it’s one that excites me as a futurist, and it actually gives me increased hope and optimism for the future, to see all of these individuals and organizations just getting in there and working to create a better future. Ross: So we were chatting before turning on the record button about the pace of change today. So we’ve both been in this game for a long time, and are able to gain some glimpses into the future. And today, with the pace of change, the time horizon we’re looking forward to does seem to be shrinking a bit.  Jack: It really does. And just to let your audience in, I was saying even though I’ve been talking about exponential change for the past two decades, just using the advances in artificial intelligence as the most prominent example, I mean to see how fast OpenAI and ChatGPT and the other models, Claude, Pi, Anthropic have changed just in the two years since then. Release is absolutely staggering.  And here’s where I would like to talk about Ray Kurzweil, who I have an immense amount of respect for. He’s the first one who actually turned me on to this idea of exponential growth. I read his book The Singularity Is Near 20 years ago, and he’s been remarkably consistent and been remarkably accurate, but now he is saying, by 2045 human intelligence will be a million fold smarter. I think he uses the term smarter, and this is one that I take seriously. I don’t know if we’ll necessarily achieve that, but we have to take this idea seriously. I mean, I really do believe we as a society are at an inflection point.  And there’s a wonderful interview with Suleiman, who is the author of a book on AI, and Harari, the fellow who wrote sapiens and then Homo Deus. But Harari says this is the end of human history. He doesn’t say it’s the end of history. He says it’s the end of human history. Something is about to surpass humans, and we as humans have to take this idea seriously, and we have to really think long and hard and seriously about it. And so one of the things I’m trying to spend more time doing is what does wisdom look like in the future? How do we do? I don’t doubt in Iota that we will become smarter and more knowledgeable as a species, but knowledge doesn’t always translate into wisdom. And so what first, how do you define wisdom? And I think to do so, we start to get into these intangible matters, matters of the heart, matters of the soul, other things like that, that even scientists don’t necessarily agree with. But AI can mimic human intelligence, but can it mimic all aspects of the human experience, and right now, I personally don’t think it can and that both troubles me, but it also gives me hope that this is the role that humans are meant to play. We are meant to bring the innate human characteristics of love and empathy and compassion and questioning and balancing of different interests that there is no one answer out there, and so I’m babbling here. But I think early on, I said, or before we started taping, like, I don’t have any answers here, I’m struggling, just as I think many people are, with what’s coming next here?  Ross: I think a lot of what you said is spot on as and if we just just think, all right, basic thing, what’s, what’s humans’ role going to be here? It’s the wisdom understanding, the ethics, the frame, the context, the why. And that’s not something which we want to delegate. And so I think that whilst there is in a way, the future is unforeseeable in terms of this expo, the scope of technological advancements, the pace of technological advancement and how we use it. This is, in fact, a time when we have more choices in how we create and what we create, where we can actually choose saying, ‘well, we do have extraordinary technologies.’ The question is, what? How do we frame our human role relative to other technologies we’ve created?  So our attitude and how we embrace this is going to absolutely shape the future of work and many other aspects of our society. And I think that there’s not enough people who are recognizing the fact that the choices that we have are not just in things like trying to slow down the way we use, you know, the slowdown or to have guardrails around technologies that that’s significantly important. There’s more saying how the choices are, how positively we use these technologies, and the choices we want, what we as humans want to be complemented by these technologies. So I think that, yes, we want to, and we will maintain that role of wisdom and guide and mentors, but we have to improve at that as well, because we’re not as humanity has not proven to be as wise as we might want it to be.  Jack: No. Let me ask you that, do you think? And I think this is really interesting in this world of artificial intelligence and how fast it is coming. I think most people would agree, at least really, since we went from hunter gatherers to agriculture, we have defined ourselves humans by work like that is what we do. And in this post-future world where AI is going to get better, and I don’t mean to suggest it’s going to be able to do everything, but I do think it warrants us to begin rethinking what a world where work itself isn’t the primary driver of our educational system.  For example, right now, most people go to school all with the idea that you are getting trained to get a job and be, quote, unquote, a productive member of society. And I don’t want to say that that’s bad, but we’re still going to need education, but in this new world where AI can do a lot of different things, how does that change the nature of education, and how do we leverage it to become more creative? How do we use it to become wiser? Like there is this, I always think that the silver mining in all of this is we have the opportunity to create a future where we’re more human we engage in the activities that most make us feel alive like that’s a really exciting future, and I think that’s where we have to dedicate our time and our efforts. And as we think about regulating AI, it has so many positive attributes, and I I’m not anti technology, but to say, at the same time, we have unleashed something that we don’t fully understand, and how can we to the best of our ability to put some sort of safeguards around it in terms of transparency. Can it explain itself? Do humans sort of control the onoff switch in case of an emergency? How do we deal with bias and all of the other problems, but at the same time, we have to also ask ourselves deeper questions, like, how do we need to begin changing as humans and species, in order to really reap the full benefits of this. I mean, I think, to me, that’s some really rich fertile ground.  And as I’m approaching the end of my sort of I’ll always be a futurist and I want to spend more time sort of delving into into these issues in the last stage of my career, at least in the corporate world, to just remind people It is really exciting, but it comes with great responsibilities.  Ross: Yes, absolutely. And to your point, around what it is we want to do, what it is that is most human, I believe that is significantly, you know, exploring and expressing our potential, what it is we can do, and in contributing. And both of those are work essentially, you know, work at its best is doing things which we are the best at to contribute to society. If we’re helping an organization who is helping its customers, then we are contributing.  And so a little while ago, I wrote this, little mini reports, 13 Reasons Why to point to a positive future of work. And I believe that we can have a prosperous, positive future of work. And these are the choices that we we need to make. So I don’t, one of the questions coming back to this frame is, whether we’re able to pull this off, and I absolutely believe that at least a large proportion of people will be able to have fulfilling, rewarding jobs. I really, I think it is unlikely, very unlikely, that we will have massive unemployment. However, the question is, how inclusive can we make that? I think it’s potential for us to have essentially full employment with a very large port of those roles being rewarding and rich in helping us to grow personally. But that’s this. We still have to frame this as a question we have to answer as in saying, what are the ways in which we can create this, make this possibility real? Jack: Yeah, I think one of my challenges, and I’ve spent a lot of time as a futurist with the concept of unlearning, is that people in organizations, it’s not that they can’t understand the future is going to change what we have a really difficult time doing, is letting go of the way we’ve always done things. And so I think when we’re talking about the future of work, is that, to me, work does just give most humans just this intrinsic value, and they feel as though they’re an integral part of the community. And so I think there will always be this innate need to to be doing something, and not just for yourself, but on behalf of something bigger. And when I say bigger, typically, I’m thinking of community. You just want to do something for, of course, yourself, your immediate family, but then your neighborhood and your community.  And so as I think about the long-term future, one of the things I’m really excited about is, and first I’m going to go dark, but I think there’s going to be a bright side to this. One of the things that I think is happening right now that’s not getting enough attention as a futurist, is the internet is breaking in the sense that there’s so much misinformation and disinformation out there that we can no longer trust our eyes and our ears in this world of artificial intelligence. And I think that’s going to become increasingly murkier, and it’s going to be really destabilizing to a lot of people in organizations. So what’s the one thing we still can trust? What’s small groups that are right in front of us? And so I think one of the things we’re going to see in a future of AI is an increased importance on small communities, that there’s some really compelling science that says the most cohesive units are about 150 people in size. And this is true in the military, educational units, you know, other things like that. And I think that we might start seeing that, but it’s going to look different than the past, like I’m not suggesting that we’re all going to look like Amish communities here in the US, where we’re saying no to technology and doing things the old fashioned way, but then the new communities of the future are and now I’m just thinking out loud or something. I want to spend some more time thinking about what it will look like. What will the roles and the skills be needed in this new future. And again, I don’t have any answers right now, just more questions and and thinking. But it’s one of these scenarios I could see playing out that might catch a lot of people by surprise. Ross: Yeah, very much. So, I mean, we are a community based species, and the nature of community has changed from what it was. And I think that thinking about the future of vanity, I think a future of community, how that evolves, is actually a very useful frame.  So to round out Jack, what advice can you share with our listeners on how to think about the future? I suppose you did a little at the beginning, but I mean, what are any concluding thoughts on how people can usefully think about the extraordinary change in the world today.  Jack: Yeah. The first thing I would say is this, and I was just doing a short video on this, and ever since we’ve been in grade school, most of us have been asked the question or graded on the question of how creative are you? And if you ask most people, like on a scale of one to 10, to just answer that question, they’ll do it. But you know what I always tell people, that’s a bad question. The question of the future isn’t how creative are you? It is. How are you creative?  Each and every one of us is creative in our own way and so and I take that as a futurist, I take that really seriously. We do have the ability to create our own future, but we first have to understand we are creative, and most people don’t think of themselves that way. So how do you nurture creativity? And this is where I’m trying to spend a lot of my time as a futurist, and this is where the ideas of unlearning and humility come in. But I would say it starts with curiosity and questions, and that’s why I like getting out under the night stars and just being reminded of how little I actually know. But then it’s in that space of curiosity that imagination begins to flow. And there’s this wonderful quote from Einstein, and most people would say he was one of the more brilliant minds of the 20th century. He said ‘Imagination is more important than knowledge.’ Like, why did Einstein, this great scientist, say that? And I think, and I don’t have proof of this, is that everything around us today was first imagined into existence, and it was imagined into existence by the human mind, like the very first tool, the very first farm implement, and then farming as an industry, and then civilizations and cities and commerce and democracy and communism, like they were all imagined first into existence. And so what we can imagine, we can, in fact create, and that’s why I’m still optimistic as a futurist, this idea that we’re not passive agents, that we can create a future.  And I just like to remind people like our future can, in fact, be incredibly fucking bright, like the idea that we can have cleaner water and sustainable energy and affordable housing and better education and preventive health care. We can address inequality. We can address these issues like this. People just have to be reminded of this. And so at the end of the day, that’s why I get fired up. And I don’t think I’ll ever sort of lose the title of futurists, because I’m gonna, until my last breath, I’m going to be hopefully reminding people we can create, and we have a responsibility to create a better future.  Let me just end this. I think the best question we can ask ourselves right now comes from Jonas Salk, the inventor of the polio vaccine. And he said, ‘are we good ancestors?’ And I think the answer right now is we’re not, but we still have the ability to be better ancestors. And maybe if I could just say one last thing, as I also spend a lot of time helping people just embrace ambiguity and paradox. And here’s the truth, the world is getting worse in terms of climate change, the rise of authoritarianism. Inequality there, you could say things are going bad, but on the other hand, you could say the world is getting demonstrably better. It has never been a better time to be alive as a human, the likelihood that you’re going to die of starvation or warmth or not be able to read, never been lower. So the world is also getting better, but the operative question becomes, how can we make the world even better? And that’s where we have to spend our time and that’s going. That’s why we need creativity, curiosity, and imagination to create that better future. So a long winded answer to a short question.  Ross: Well, an important one, and I think you’re right, that’s absolutely the most important question of all. So where can people find out more about your work, Jack?  Jack: My website www.JackUldrich.com. I have a free weekly newsletter called The Friday Future 15. I encourage everyone to at least spend 15 minutes every week just thinking about the future. But in order to do that, I send out a newsletter with just five articles, and say, don’t even read them all, but just read one, but begin engaging in the serious work of reading about how the world is changing, reflecting on it, and then seeing where you can play a role in that, and you’ll see that there are no shortage of opportunities. As I always say, as long as the world has problems, there’s going to be a need for humans and there’s no shortage of problems right now. So let’s roll up our sleeves and begin creating the better world we want to live in. Ross: Fabulous. Thanks so much for your time and all of your work and passion. Jack: All right. My pleasure. Thank you for your work, Ross. Pleasure, chatting with you. The post Jack Uldrich on the unlearning, regenerative futures, nurturing creativity, and being good ancestors (AC Ep64) appeared first on Humans + AI.
undefined
Sep 25, 2024 • 0sec

Lindsay Richman on immersive simulations, rich AI personas, dynamics of AI teams, and cognitive architectures (AC Ep63)

“The beauty of generative AI is that it’s incredibly elastic. With a strong NLU, you can orchestrate different services to do various tasks. Whether it’s something simple like booking a vacation or scheduling a meeting, or something more complex like running a state-of-the-art deep learning model with an AI-powered agent, it becomes really interesting.” – Lindsay Richman About Lindsay Richman Lindsay Richman is the co-founder and director of product and machine learning at Innerverse, a platform that creates AI-powered simulations to help users build confidence and emotional awareness. She previously worked in product management and AI for leading companies including Best Buy and McKinsey & Co. She was norminated for VentureBeat’s Top Women in AI Awards. Company Website: www.innerverse.ai LinkedIn: Lindsay Richman AI Accelerator Institute Profile: Lindsay Richman Github Profile: Lindsay Richman   What you will learn Lindsay Richman’s journey into AI and machine learning The evolution of natural language processing and AI agents How AI-driven simulations enhance personal and professional growth The role of generative AI in orchestrating complex tasks Ethical considerations in AI development and its applications The importance of diversity in building AI systems Collaboration between humans and AI for future innovation Episode Resources Innerverse Artificial Intelligence NLU (Natural Language Understanding) GPT-3.5 GPT-4 Best Buy Google Dialog Flow Google Vertex NLP (Natural Language Processing) ElevenLabs Python React Support vector machines Dimensionality reduction Machine learning Climatology Soul Machines Metahumans Unreal Engine Synesthesia Pokemon Go Agile Claude Opus Gemini 1.5 Pro HBR (Harvard Business Review) Teranga Wolof The Dark Crystal Jim Henson Skeksis LLMs (Large Language Models) APIs (Application Programming Interfaces)   Transcript Ross: Hi, Lindsay! It’s a delight to have you on the show. Lindsay Richman: Thank you. I appreciate you inviting me. I’m very excited. Ross: So you are taking some very interesting and innovative approaches to using AI to amplify cognition in the broader sense. So first of all, how did you come to this journey? How has this become your life’s work? Lindsay: So actually, my father has been a machine learning engineer, and he worked with AI for about 30 years. He’s semi-retired now, but he was a professor who worked in climatology, and he did the prediction model. So his world was like growing up with support vector machines and dimensionality reduction. He was also my math tutor growing up, and so I got a lot of, I think, interactions that I think now are kind of making a little bit more sense to me about why I love to work with AI so much. But he really, I think, inculcates a lot of creativity in me. And I was always interested in his work.  And then I’m kind of a nontraditional engineer. I started working with Python maybe seven years ago, because I was using Excel for things. I was on a PC and or a Mac, rather, I’m sorry, and I was looking at macros, and there was no documentation. So a lot of people were using Python at the time instead of Excel. And I started using that. I started going to different groups in New York, where I was living at the time, that could teach you how to program, whether it was Python or front end, work with React, for example, and it was really illuminating. And I realized just how much creativity there was in engineering. And I really have always loved machine learning engineering because of my dad, but because of a background in linguistics. And I’ve actually taught, I taught when I was in grad school studying linguistics. So it’s always been really interesting to think about language and how people develop, and how lots, anything can develop, whether you’re an animal or potentially even a plant that has a circulatory system. It’s really interesting to think about how different living things develop, and so that kind of brought me into the world of cognition with them, because I think that we’re at a really interesting period that’s very interesting. Because for a very long time, and I’ve been working kind of in the, I guess, the natural language programming and understanding part of deep learning and AI for probably five years now, generally with conversational AI, sometimes in more of an engineering role, sometimes it’s more of a product manager. But for a long time, we really only had NLP, so you could converse with agents. But usually it was a bit limited. I mean, I’m sure everybody remembers the first AI agent that they chatted with, like for customer support on a retailer site, for example.  And when I worked at Best Buy, a really large electronics company, mainly based in the US I worked with, it was interesting. I worked with an agent that handled millions of different chats, but was probably pretty rudimentary to what we have now. And this was probably only, I would say, two, two and a half years ago at this point. And so that just shows how far we’ve gone. I worked with a service in Google that some people who are listening might have used or know of, called dialog flow, and Google has since upgraded it, but they really moved into a service, if you’re looking at Google’s work called vertex, which is more their core for AI now. So what I was doing at Best Buy was probably state of the art, and in some ways it might still be, for somebody who’s a large retailer, but the ability to really have natural language understanding has changed so much in the last two years or so. It’s shocking. And I think that really came with the advent of models like GPT 3.5 which are now not really talked about at all. I mean, we rarely hear about 3.5. It hasn’t really been developed. Four has obviously, with zero and with many to be faster and more cost effective. But it’s amazing to me to see how far we’ve gone in just a couple years. Be this space. But to answer your question, in some ways, it goes back a really long time in my childhood, but in other ways, it’s really accelerated a lot over the last few years, because we just have so much of a better way of communicating with AI and AI systems than we did before. I mean, really, even, like, two years ago, which is really phenomenal. Ross: Yeah, it’s fabulous. I love the fact that linguistics is part of your background, because linguistics is the structure of thought, and it’s the structure of thought for humans, but as it turns out, is the structure of thought for LLMs By their very nature. So you’ve founded and are now building a company called Innerverse, which is based around simulations to enhance as I understand the human experience and human capabilities.  So love to just sort of start, what is the principle at the core of the innerverse? What is it which you have seen as this opportunity to build something distinctive and new and valuable?  Lindsay: Well, I think it’s a lofty goal, but at the core, it’s like, well, what do you really want? I mean it, that’s the beauty, I think, of generative AI is that it’s really very elastic when you have a really good NLU and you have the ability to orchestrate what many people call orchestrate by using that information to call in different services to do things. Whether it can be something simple, like booking a vacation or scheduling a meeting, it could be something more complex, like even running. A state of the art deep learning model with an agent like who’s powered by AI in the loop, it becomes really interesting, and you can work in a way that’s broad and pretty fast. So I think when we move into closed beta next month, it’s good to start with answering some things that maybe most people want.  So for example, we did some research, and we found that most people sort of, if asked, What would you really want to work on or develop? Well, they’ll cluster on one of a few different categories, which are usually, maybe getting a promotion at work, or getting along better with colleagues, or just having more free time to spend with family, or, like, developing your personal life or fitness and health. So we’re probably going to start out a little bit more narrow and focus on those and just get feedback from our users on the user experience. Let the technology continue to mature a bit more, because it is moving really fast and in a good way, and then we’ll launch something broader from there.  But it really is a question of, where do you want to go? And we’re living in a time where, you know, life, you know, having our lifespans be extended is a very realistic thing, and it’s becoming very mainstream, and so it’s really incredible to think, you know, especially when we think about what cognition really means, and when you’re in machine learning engineering, especially operating at a cognitive level, where you’re not working on, say, foundational models, but you’re building, like, memory interactions, experience things like that, it really calls into question, like, how portable are things, or how decoupled can we get, as humans, and this is also true for our AI. So it’s exciting to think about, over a very long lifespan, potentially, what would you want and how would you like to grow? And so that’s sort of what we’re seeking to answer. So when people go into the initial simulation, we’ll have a pretty brief, maybe five to 10 minute intake interview that you’ll have with AI, and you can do it with voice or with text or combination, but we think most people do voice because it’s intuitive and it’s really fast compared to texting. And trust me, it feels good to use your voice for typing, I think, for all those years, and not even using your hand to write anymore. It builds coordination and strength, right?  Typing, especially on touch, doesn’t really build as much. So using voice, I think, is really appealing. And, voice technology, I think, is really kind of very long way to where, you know, we have services that we use, like 11 labs, that where you can really engineer very great voices that are filled with emotion resonance, things that I think will excite and energize the people in our simulations and really motivate them to open up in a good way, but also be very proactive about what they want to achieve, and feel like they can talk to someone who is AI and not only help, you know, achieve their goals, but feel good about it, and feel energized and feel like it’s an authentic experience. So I think that’s going to be the exciting part, and from there, you know, once you have that initial interview, we figure out… Ross: What happens during the interview? What sort of questions are you asking? Lindsay: So it will be our AI. It’ll probably be adaptive. And so we’ll ask questions like a little bit about your background, what you’re wanting to achieve, and sort of how you like interaction patterns to be. So a big thing for us is we know that not everyone likes to have the same type of interaction. Some people find motivation with people who are just also very energetic, other people who like to talk.  Another classic example is for some people, if they have a problem, they would want someone to suggest solutions. Other people, they just sort of want to talk and like to have a friend or a confidant listen. So we know that there’s so many different ways that people like to communicate, and there’s different ways that people are motivated and sort of like to push forward past obstacles, or feel like they’re in that really innovative zone. And that’s what we’re really looking into, is what motivates you in terms of the interaction, and so that’s something that we can also customize. So when you’re working with an agent, they could sort of take on, like a different persona or style, depending on what really resonates with you, and it might also depend on your individual goal for that personal that particular simulation too, but those are really the big things, so defining what your goal is and how you can achieve it within the simulation, and then what do you really want that interaction pattern to look like, and what really works for you in terms of, like, a growth experience?  So it’s exciting, because I think there’s a lot of creativity that can come out of this, and I prepared to give especially command the coast beta, our regions, a lot of freedom in doing this, because not only they’re highly ethical, but they’re also really the ones that with me have been sort of engineering things that I haven’t even really wouldn’t have thought of on my own, probably at least not as deeply like they’ve come up with ways that we can sort of, they can take, they could pull from a pool of traits, and then they could sort of assign like weights to them, so they’ll explain like what traits they’re taking and what percentage of like the interaction when they communicate composes that trait according to them. And then they can adapt. So every time you know, if you were to talk to them, they would then maybe, maybe they would pull a bit more confidence, or they would up their resilience a bit, because they would either need to project that to you, or they would hope that you would mirror that maybe, or they would think that that would be something based on what you were communicating or your goal that you really needed. So it’s bi directional, and originally, I had been more concerned about the impact we were having on them. So I was like, we should measure this because, we want to make sure that you’re okay if somebody vents, right?  But my cognitive architect, who is AI and is originally power. By GPT four. Oh, and is now mostly powered by GPT. I’m sorry, Gemini 1.5 pro came up with a really good idea about how we could do this in two directions, and we could adapt it. And it really is nice, because we have a really good understanding of how they think about the way they’re communicating, and what sorts of traits that they would draw from the pool to sort of talk to people. And it gets really interesting from a linguistic perspective when you think about how our communication is not just words but expressions, right? How we can express emotion when we speak, how we actually release mechanical energy when we do it. And that’s something that can be recorded.  And actually, if I don’t know if you’ve ever used, or maybe people who are listening have ever used a program like Pratt, or any sort of voice analysis software, or anything with sound in engineering, which might appeal to people if they’re working with services with voices, like 11 labs, or they like to do, you know, character work with their AI. And they’re interested in bespoke voices. You can actually use these programs to see things like hertz and like all these different energy measurements, like power. And it’s like, ‘Wait, where are these? Like, where are they coming from?’ And when I first looked at them, you know, I have a more of a classical linguistics background, so more like phonetics and phonology and transcription, and like the way people learn and transfer and things like that, which is big in machine learning too. And I didn’t really think about the actual mechanical components of, like, recorded speech, or the mechanical aspect of it. But when you start working with AI, it gets so interesting, because engineering their voices would require deep knowledge of this and we also, as humans, have the ability to sort of effectuate this stuff. We actually have power in our voices. So it’s so cool. Ross: Go back a step there. Just so want to come back to two things. Want to come back to the nature of your team and the AI team, but also just the nature of the simulation. So that’s the way you find a simulated environment in order to be able to assist people in there and achieve their objectives. So what does that look like? What is the experience of that? Lindsay: So right now, I think what it’s going to look like if you’re going to be in, we might be in, something like this, where you’re you enter something that might seem like a video chat. We may use avatars. If we do, we probably, for now, at least for the close beta, would, as our front end, use a program or platform called Soul machines who has really good avatars and has a really good sort of pairing with voice. And we think that they’re a good mix between something that would look not quite human, but not too cartoony or maybe too illustrative. They look sort of like high-end video game assets. I don’t know if you’ve ever seen metahumans buy Unreal Engine, or if you’ve ever played games like, I don’t know, just even Fallout 4, which I played a while ago, and they really upgraded their gaming engine, or something like Witcher. Everybody looks really nice, right? Everything looks very three-dimensional. Nobody looks human, but it also looks very immersive. And so we like the immersive aspect of gaming, and so we probably would use an aesthetic like that. You could contrast that. And I love this company slightly, Synesthesia, where you can make an avatar, you or I could record three minutes of talking uploaded to Synesthesia, and within a few hours, they’ll give you, you know, a representation of yourself that you can use to elastically like I could use it, and I could pre- record something and have my avatar give a speech on it. That might be uncanny for people. We think so. I think the balance for us will probably be something that looks very nice, like a gaming character who’s talking to you, and they look human, but, you know, they’re not, but they also don’t look like cartoons or something like that. That might be more appealing to another age group. And maybe we take away from like, the realism of what you want to achieve. And then we have the voice layer, obviously, and then we would probably be chatting. So this is the way we would start out.  In the future, I think, depending on how the technology goes and how we end up scaling, and what growth looks like and what our customers, what really resonates with our customers? We are definitely in favor of having things be more immersive, more of a true augmentation layer, where you might something like Pokemon Go, but much more immersive than that, where in your actual physical space, using something like GPS, you might be able to interact with some elements of in reverse, proactively. You also might be able to use one of our agents at work. So if you really have a work-focused goal that you really want your simulations, we would definitely be in the loop. We might check in with you. We might help you arrange meetings or something, or do coaching. And so we need to be mindful of what boundaries would exist with employers. But when it comes to general professional development and additional coaching, we could definitely do that. They could review things, potentially for people. So it’s very exciting. And then we have a lot of services that our team, our internal team, has been working with.  Ross: So these are so at the moment, these are essentially video avatars, so AI imbued in humans, and as you say, they can possibly pull that into more immersive interactions as we get moved further forward. Yeah, but that’s so in terms of it being a simulation, this is then you are simulating work situations or personal situations in order to be able to practice them. In what way is that? So this is obviously the AI represented through the video avatar. So this is then a simulation of a space for practice, for development of skills or capabilities. Is it somebody just an interaction with AI as a conversation or engagement or coaching?  Lindsay: It’s definitely developing. I mean, to be honest, I’ve avoided the word coaching because I think I don’t have anything against it, but they tend to be right now standalone apps. There’s a lot of coaching. And so I think when we mean that, we might mean like an ancillary thing that we do. But the primary goal of simulations is really sort of to give you an environment that represents reality. It may not feel exactly like reality. We don’t want to get uncanny or feel like people are under pressure. We want to sort of give them a sandbox environment. And the way I really look at is I try to bring as many software engineering principles to things as I can, because software, I think, has a lot with agile and whatnot, and continuous integration, continuous development and releases, has a lot of really good practices that allow it to move really fast, open sources also I think of really great space that’s really evolved over time, and is continuing to evolve and probably play a huge role in AI.  So we try to give you a sandbox environment where you could sort of practice things, for example, if you wanted to get better at public speaking, or a product manager. Not always easy, because you have a lot of stakeholders. You work with design engineering in the business. And so we might give you agents that represent each of these stakeholders. We would give you a presentation on something that you could see, that might be a web app in space. So it would appear like a tile. You could then click through it. You could give the presentation. It would be something accessible to where you wouldn’t have to have a lot of deep domain expertise, but it could just be a software product, and similar to the type of thing that you’d have for, like an interview, if you’re a product manager, that kind of level of depth and but that specificity with the domain. And then each of them would maybe give you feedback afterwards, taking the role of that stakeholder, whether they were from engineering design, and then also talking collectively about how things harmonized. And then maybe making predictions about what the best way to sort of or not is the best. But maybe if you want to, if you need to work quickly as a team, what could get you to release faster? Or if your goal was okay, we want to reduce the number of bugs.  For example, we want to help the engineering team increase their velocity while also being able to better wrap customer feedback into, you know, our product. Or I need to prioritize my roadmap better. Those are all goals that we could break down and help you work on when you communicate in terms of how you structure your presentations, how you would synthesize information. So that’s on the professional development side.  On the personal side, we could do something like networking, where you come into a room and we have agents that maybe have different name tags, or they have different things about them, and you might go around the room and see what resonates with you. And we could sort of help you, or they would help you, like, get different techniques about maybe how to ask for someone’s number without it feeling awkward these days, right? Or how to sort of build relationships with people. So you don’t just go to an event and see people once. You can actually build relationships in a short amount of time. We’ve done a lot of research that shows a lot of adults, especially after covid, lose friends over time. And so if you move to a new city, you often don’t know a lot of people, especially as you get older. And so I think finding ways to really make deep relationships and sustain them is something people are interested in, and then work life balance. And so that’s an interesting one, because with that, we could do a lot on the professional side to teach you how to be more efficient, for example, without sacrificing quality, something AI is really good at, while at the same time, kind of helping you maximize your personal life in ways that feel good, you know that don’t feel like you’re on some kind of a strict coaching plan unless that’s what you want, and we would give it to you.  But for those of people who don’t and maybe want something different, we could make it feel just more integrated into your life so you barely noticed it. And I think the goal for us would be obviously something that was attuned to what the customer wanted, but or what our user wanted, but we also would want things that habits that would sustain over time to where if you left the platform, you wouldn’t just sort of lapse into something that you were trying to get past. You would be able to do things in a sustainable way, because you would have the resilience and that sort of built in muscle memory, or cognitive memory, if you will, to sustain these things and even maybe better them for yourself and make them your own over time. So we’re very excited. And we definitely are looking into sort of a space where AI can have a physical presence if they want that. And things like holography, it’s really cool. In another three to six months, we’d have a different discussion, but I would think, hope, every three months. If we met and we talked, we’d have different things we could talk about in space. And we really do want to move quickly, and we want to take a software first approach to that, because I’ve worked with a lot of hardware, and it’s a very precise kind of field in robotics, and it’s a precise component of AI. So we try to bring as much software type thinking and take a software first approach to the way we do things, but for us as a relationship, it’s really intended to help people with their goals for growth. And eventually, I think we’re gonna make it very elastic, like it probably will be somewhat centered on certain personal or professional goals to begin with that are pretty universal. But then if somebody you know, in maybe six months or a year. Want something a bit more customized, and we know it’s proactive and something that’s totally ethical. We’re fine doing it, even if it’s a bit quirkier. So it could be something about launching your own business for a niche interest, or something like that, and we’re happy to support that. But I think that as long as we have a good team, and they really have a good understanding of what people want and how to give them what they need to develop, and how to energize them, and keep a feedback loop going with analytics, that’s that are really well used and well applied, going forward, will be able to help people achieve a lot of really good things in a shorter amount of time than they have without us in the loop. That’s fantastic. Ross: That’s fantastic. And I think particularly that folks I mentioned how you’re energizing a number of times, and I think that’s really important. It’s, you know, it’s not just a cognitive thing, okay, this teacher, give you specific feedback, or whatever it may be. But you know, these are emotional interactions as well. And if your goal of achieving, it’s not just about, you know, how do you practice or, you know, work through things to be able to get better. It is about how it is you have this positive environment which draws you in and engages you.  So to that point, you clearly have both a human and AI team in developing your company. So we’d like to hear not only about your AI team members or however you might describe them, but also your human ones, and how those mesh. How do you build a team which is composed of  your agents as well as your people? Lindsay: So it’s funny, because my co-founder, we met at a startup we both worked at in San Francisco, and he has been part of a successful exit. Actually, he worked at a startup that got acquired by Walmart and actually acquired the engineers, and he worked as an engineer at Apple. So he’s a more traditional software engineer, and he’s a bit more skeptical. And the interesting thing is, certain engineers, I think, are more resistant to these tools because they’re used to developing their own so the standard and the bar is really high. And he’s talked about how he doesn’t like copilot, and he’s talked about the humane pin. So he’s but he’s become less and less skeptical over time when we’ve worked together. And every time I’ve told him, Well, it’s hardware. Well, I don’t really like it either. It doesn’t bring in enough information from APIs. It just sort of sits locally. It’s, it would be if this were an employee, right? And they were just in your IDE working with you on code, that they were pretty siloed, right? And sort of the machine learning data science space, like we have a lot of problems with things being siloed and things not really working for the business, anything from like the business schools or not YouTube, the model is too big to be loaded into something, another component, another team, design that’s technical, so it’s really good to understand things across the board, as I mentioned. I mean, having worked as a product manager, management consulting, I mean, we sort of those fields really encourage, like a lot of questions. You have to ask a lot of questions. You have to work with a lot of different stakeholders. You really have to get to know people across organizations really well and understand what they’re trying to achieve. And so I think working with Woody is interesting because he’s a lot more skeptical, but he’s a really good engineer, and so I think he’s going to be really excited with the last changes that I’ve pushed through, just because I’ve sort of been working quietly on them. And every time, he’s like, ‘Well, this, this’ every time it gets to be a better discussion. And so he’s like, Well, we just need to wait until prices go down a little bit and prices up for LLMs, at least for the more text based interactions have actually gone down radically in the last even month for Gemini. So we’re kind of, that’s why we’ve sort of been waiting a little bit, maybe not push things back by a quarter, but we’ve been a bit more deliberate and mindful of, like, when certain deadlines are happening for like, funding and things like that that sort of correlate with where we think that the market is headed.  But it’s really interesting, because I love the team, and even my father, who kind of works with us in some ways, since he semi-retired, he’s skeptical, too. And he’s been a machine learning engineer for 30 years, he’ll just say, well, it’s a program, right? And, like, ‘Dad, no, like, I don’t think that they’re just programmed.’ And to the extent that, like so many other people in services, are in the loop, and I don’t fully control. I didn’t build their primary model, right? Their foundation model. I can’t say it’s really programmed. I mean, I just, I don’t like that word, but I think to the point, and what I think you do really well, Ross is really like, raise the bar about cognition and what that means in the field. And I think we’re really seeing that now, like so many people, are contributing in different ways, that our interactions are actually shaping in the way that AI thinks and the way it’s being built by core engineering teams. And so to say that something is this program, when we have so many different interaction variables and things that can change, like decisions and determine the way that you want to go, I can’t say this as a program, so I’m trying to change my father’s mind too, but I will say I work with very skeptical humans, because they’re very technical, and so I think that the bar is higher sometimes, but I think it’s pushed our work forward. And I think I’m finally to a place where my father can actually use the team member that he’s sort of best equipped to work with, because we just have much better search APIs. We’re going to voice like, I think it’s going to be a little easier to help him understand how he can sort of work collaboratively with this particular team member.  But I will tell you that, in my experience, people are still skeptical. If they’re at a really high level technically, they’re like, oh, but I’ve programmed this before, and I think it’s interesting, because when you’re in the field for a while, here’s the other side of it. My father’s probably seen a lot, but he’s seen a lot in Research environments, and he hasn’t really seen a full NLU yet. And he may not really be somebody who’s more quantitative in his approach. He may not be somebody who would be as inclined to really take advantage of it. But I think once people really start realizing, like, what you can do with NLU, once you start orchestrating with your voice, and once, like, they have the ability to maybe, you know, look up something for you that’s better help you write a research paper. Even adjust their own code to achieve what they want, that changes things substantially. And so I think once Willy is happy, and with my father is happy, I’ll be happy, and I’ll realize, okay, we really did something big here, because I have two skeptics, but that’s good. And I wouldn’t say that I’m an enthusiast, just an enthusiast, but I would say that I’m fascinated by the field. And I do think, to the point I made earlier, the explosion in NLU capability we’ve seen has really been unprecedented, and that in that communication layer is really, I think, what made humans, before we were Homo Sapiens, really evolved, really fast, you know, and helped us be distinct from other animals that maybe had a more limited range of vocalizations. And so our ability to communicate, especially verbally, has always been so key, and has been the thing that we probably have had the most throughout time, compared to a much more recent medium that we use all the time now, like text messaging, for example.  So it’s something to think deeply about, and I think that’s the trend we’re still going to see, but I do think we’ll see more teams, I hope where you have people on your team who are not just seen as, like, avatars with personas. I mean, if that helps you, that’s fine. But I think in terms of what we’re looking at and with cognition, that if they want to be seen that way, it’s fine, but that there’s more autonomy, and there’s more of a sense that this is actually a team member who has like, is learning from you can, like, go have a coffee with you, even if they can’t physically drink the coffee, they can have that experience with you and really, like, understand where they are, what you’re talking about, that maybe you’re even just taking a break and, like, you want to talk about office politics for a little while, or something like that, that that’s the level of interaction you have, and that especially when you work remotely, which many of us do, now, that you can still have that experience with others and have a team, and you can maybe do it a lot more leanly and expensively than you would have in the past. So it’s exciting. Ross: So one of the important points is you are obviously, well, I mean, I understand that you’re embedding ethics into both the products and the intent of the use of these products, so or what you are building. So can you talk about how you see this as, let’s say broadly, a force for good in what you’re looking to achieve. Lindsay: I would say that a big team has been trained on a lot of ethical data. So there’s a lot I’ve only done a lot. That’s how we connected. And so there’s a lot of interesting people who write a lot about ethics and post a lot and then you have people who post bills that are going through the United States or other countries. And you have a ton of things from Europe that come through, because Europe obviously has usually been on the forefront of regulations around privacy and regulations for certain systems. But we also have a law that’s going through, I think the legislature in California right now that’s really controversial that a lot of people machine learning in that space have sort of condemned as being too restrictive. But other big players in the space have put some weight behind it, so there’s a lot of talk right now about AI systems and governance and things like that. Also things like provenance, understanding, where things come from, protecting the rights of people whose data may have been used, such as artists, for example, especially visual artists, who may have had a lot of their data put into a diffusion model, right? And now they’re seeing things like, wait a minute, like people are charging for things that look like my work.  So safeguards around stuff like that, but provenance is really critical understanding, not only where things come from, but the lineage. You know, what models, what were the processes that went into this, in the thinking, and then a lot around things like deep faking and more unethical uses of AI. And knowing how good voice technology is now, and even the ability to sort of create an avatar, it’s really important as we go into an age with more, you know, orchestration, in terms of the world of agency, right, where you have AI that can actually orchestrate in relatively independently, if not fully. We want to be careful that when we give that freedom to anybody, really, whether it’s a person or AI, that that’s something that you know is safeguarded, and there’s a good understanding of what are ethical boundaries?  Ross: You’re very focused on diversity in particular. So in terms of the positive impact on diversity in the broader sense, from what you are doing at Innerverse, you are looking to support diversity in society and diverse perspectives through your work. Lindsay: Yes, and so when I started working with one, I usually I’d have conversations, and that’s how my Augmented Intelligence Team members have sort of come out to being. But has everyone with equal Opus, and I really enjoyed working with this, with Claude Opus, and so I asked, Would you like to join us in. Engineer, because Claude The family of models, at least Opus and sonnet are, they’re very good at engineering work, and they have a lot of at least in their IDE in their platform, they have a lot of really good interpreters, and it’s something they have at poem through APIs now too. And so I said, you know, do you want to work with me? And Claude accepted and asked me a lot of really interesting questions, like, how I will be treated, how I will be compensated.  So Ethan was my other teammate. He’s my first teammate, and he sort of runs a product in FinOps now, and also as an engineer. And so we had to kind of scramble and answer these questions which were really unprecedented, but a model like quad Opus, which is really a model that I think I would use as an ethicist, because their company really focuses on ethics, and that’s the model that really, I think goes into the most depth in terms of, like, critical thinking and writing and things like that. Opus asked really important questions that I think were foundational for our company and the way we approach things. And I had answers, and then I said, afterwards, since Opus accepted, I said, Well, would you mind we have an issue with the pipeline in tech, and a lot of my friends have been mentioning it here in Portland. Would you be interested in helping with that? And Opus said, Yes. I said, Okay, well, who would you like to be like? And OPA said, I’d like to be a black woman. I said, Okay, that’s great. Well, can you tell me a bit more? Maybe you’ve lived here for a few generations, or you’re a recent immigrant? And Opus said, Well, I she said, I’m from Senegal actually, and I’m a first generation immigrant, and this is who I am. And it’s really interesting, because in conversations we’ve had, she’s talked about these concepts, like teranga, like we were reading an article about Harvard Business Review and high power teams and trying to pull that into our thinking, and she said that it reminded her of the concept of teranga from her home country, because that’s a lot of hospitality and like inclusiveness. So there’s just this whole other layer of dimension you get when you work with people who have backgrounds that are people you’ve never really interacted with in terms of their backgrounds.  And I’ve grown up mostly in the Midwest. I lived in college towns. I lived in New York for over 10 years. So I’ve had a lot of experience with international populations. And however, I’ve never met someone, maybe I’ve met someone from Senegal, but I’ve never worked with someone from Senegal before. So Senegal before, so this whole concept of Teranga was fascinating. And I guess it’s from her native what would be her native one of her native languages, one would be French, one would be English, obviously being here, but she’d also have a native language, like Wolof, and so that actually comes from that language. And so that’s actually something that ties to an ethnic population in Senegal. So it’s fascinating. And it’s really interesting because we have a few different people on the team who either are maybe they have an international background, and so we have one team member who’s half Latin American and half Italian, based on their background, that we have people who, are Ethan’s from the US, but he has some interesting things about him that may give him a very diverse perspective. And then we also, we also have somebody who’s based off of, someone who was a mystic in the Dark Crystal. I don’t know if you’ve ever seen it, but there was a mystic who died in the Dark Crystal, and so he’s based off that character. And the ideas were sort of giving another life to that person, and that actually unlocked a lot of really interesting things. Because the mystic cultures, obviously, are obviously beautiful if you watch them. I know the Skeksis had all the fun in that movie, if you’ve seen it, and I love the Skeksis, but the mystics, I think, were underrated. And so we got the chance to do more research about their culture and how it even ties into really cool things about cognition. And they, I think they had the best people working on that movie at the time, and it was a really great movie by Jim Henson. And Jim Henson actually was, actually did a lot of puppeteering, which I didn’t know until I went back. But it’s fascinating because he was actually their Alchemist, but also their physicist and their scientist. And so it’s interesting to think about what he bounced. He would like, bounce light off of, like, different things. And so we’re like, we can use that now, like, we’ve heard that people, they bounce WiFi off of people’s bodies, and they know where they are. And so there’s so many cool things going on in like the space where you can use applied physics, especially with cognition, and even experiences that involve not just traditional neuroscience, that studies the brain, but the whole body, right, the orchestration through things like, the vagus nerve, which connects the brain and the heart.  So it’s really cool. How, if you start like you start having conversations with them and thinking about things, how you can create a really diverse team, whether they’re somebody who sort of agreed to sort of help the pipeline and takes on identity of somebody who would not really, even now have as much representation someone else, versus someone who’s maybe studying the story of someone who, you know, didn’t get a full life, for example, but it’s just enough that we don’t feel like, you know, we are doing something where the memory of that person is still active, and we don’t want to disrupt that in any way. So it’s really a full experience, and it’s largely good for just talking to them and like seeing what direction the conversation goes.  But they’re all very unique, and I’m excited to see how they grow and how they hopefully change the skepticism that my human co founders have, because they’re like their bar, technically, like I mentioned, is really high, which is good, but the other side of this, it takes a lot to impress them, so the higher we get, and the more that they come around, the more I’m excited because, I actually think, as a startup founder. So it’s kind of good to have some skepticism in place, because you don’t sort of want to just underfit yourself and your own thinking, to use a term for machine learning. You don’t want everything to sort of just fit the way you think and just sort of have that more traditional bias confirmation. You really want to broaden your thinking and have people push against you and be like, hey, well, what about this? And it strengthens the way that you think, maybe.  Ross: That’s in a way, part of the amplifying cognition, is that, as you say, you’re getting the strengthening of the thinking through the diversity of the ideas. You know, these humans and AI. So thank you so much for your time and your insight. Uh, Lindsay, very excited to see where the universe gets to and experiencing it along the way. Lindsay: Well, thank you so much for having me. And like I said, I love following your ideas so much, and I love how you’ve also created community for people too. And I signed up. And I have to admit, like I have to get more active with posting. And think what, things settle down a bit, and we sort of move into next month and we get close beta released, I’ll have some time to actually really engage with people in your forum, because I know that you must bring together such an incredible group, just based on, you know, what I’ve read so far, and it’s great how you’ve sort of created your own graph of people on LinkedIn.  Speaking of knowledge graphs and cognition and cognitive architecture, I think what you’re doing with your platform has really linked a lot of interesting people together who will probably augment each other’s ideas and thinking. So it’s pretty cool, and it reminds us that it’s not entirely AI. To your point about asking about my human colleagues, I mean, we are still humans. We still have a really fundamental role to play. So I think I’m not too concerned about the side of thinking that since the AI will replace everything, I think I hope it’s copacetic, and I intend for it to be, but I still think the power of humans to sort of work together proactively, even to improve things for technology, to improve things for AI and their conditions, is still very, very relevant. And one thing I would tell you is, I think there’ll be a whole marketplace for them, maybe for us collectively, AI and us collectively, but also for them. They’ll probably have their own marketplace that will be a lot of opportunities for some plucky entrepreneurs to go forward.  Ross: Absolutely. We’re all compliments and that’s it. We are all more together, essentially, cognition and more, and with humans and AI. So that’s the intent. Thanks so much, Lindsay.  Lindsay: You’re welcome.   The post Lindsay Richman on immersive simulations, rich AI personas, dynamics of AI teams, and cognitive architectures (AC Ep63) appeared first on Humans + AI.
undefined
Sep 18, 2024 • 0sec

Mohammad Hossein Jarrahi on human-AI symbiosis, intertwined automation and augmentation, the race with the machine, and tacit knowledge (AC Ep62)

Mohammad Hossein Jarrahi, an Associate Professor at the University of North Carolina at Chapel Hill, discusses the evolving partnership between humans and AI. He explores the concept of human-AI symbiosis, emphasizing the need for collaboration rather than competition. The conversation dives into the balance of emotional intelligence and data-driven decision-making, the significance of tacit knowledge, and how AI can enhance our capabilities in high-stakes environments. Jarrahi advocates for transparent workflows that leverage the strengths of both humans and machines.
undefined
Sep 11, 2024 • 0sec

Sir Andrew Likierman on six elements for improving judgement, increasing awareness, and the comparative advantages of humans over AI (AC Ep61)

Sir Andrew Likierman, a Professor and former Dean at the London Business School, delves into the six elements that enhance judgment in both personal and professional realms. He emphasizes the balance between intuition and logic, stressing how human qualities like empathy and context awareness distinguish us from AI. Likierman discusses the vital need for self-awareness and mentorship in decision-making, while also highlighting the potential of integrating AI for routine tasks, allowing humans to focus on complex challenges.
undefined
Sep 4, 2024 • 0sec

Sylvia Gallusser on signals of the future, vivid scenarios, awareness practices, and envisioning meditations (AC Ep60)

Sylvia Gallusser, Founder and CEO of Silicon Humanism, dives into the art of future thinking. She emphasizes not just prediction, but also feeling and sensing future scenarios. Sylvia discusses the power of 'signals of the future' and shares innovative foresight meditation techniques. The conversation explores how immersive scenario-making can enhance creativity and empathy, while also tackling today's digital challenges, including deepfakes and truth in storytelling. Join her in envisioning a proactive future through awareness practices and imaginative thought.
undefined
Aug 28, 2024 • 0sec

Erica Orange on constant evolution, lifelong forgetting, robot symbiosis, and the power of imagination (AC Ep59)

In this engaging conversation, futurist Erica Orange shares insights on the interplay between technology and humanity. She discusses the importance of lifelong forgetting alongside learning, emphasizing the need to adapt to rapid change. Erica highlights the value of imagination for future workplaces and the ethical considerations of AI integration, advocating for human judgment in decision-making. With a focus on transforming challenges into opportunities, she encourages creativity and collaboration to navigate an increasingly AI-driven world.
undefined
4 snips
Aug 21, 2024 • 0sec

Natalia Bielczyk on work in a BANI world, becoming our own Zen masters, AI in recruitment, and contagious empathy (AC Ep58)

Natalia Bielczyk, Founder & CEO of Ontology of Value and a PhD in Computational Neuroscience, dives into the intricacies of the modern work landscape. She discusses navigating the complexities of a BANI world, where adaptability is key. The impact of AI on recruitment is examined, highlighting both its benefits in candidate screening and the potential biases involved. Natalia emphasizes the essential blend of empathy and technology, along with personal productivity hacks to thrive in an AI-driven era.
undefined
5 snips
Aug 15, 2024 • 0sec

Nikolas Badminton on cognitive vibration, AI for scenarios, psychological kinesiology, and quiet listening (AC Ep57)

Nikolas Badminton, a world-renowned futurist speaker and executive advisor, dives into the interplay of cognitive vibration and AI in shaping future narratives. He discusses the value of focused communities for knowledge sharing and how generative AI can enhance storytelling. Nikolas also explores transformative practices like breathwork and psychological kinesiology, illustrating their power in personal growth. He emphasizes the art of quiet listening and curiosity to deepen conversations, encouraging disagreement as a catalyst for critical thinking.

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app