Speaker 2
And that made me think of another use that I have experimented with a little bit, which is telling an AI, telling ChatGPT, about a thing, or feeding it some sort of argument, and then asking it to steelman the other side: give me all the reasons this isn't going to work, and be as forceful and as direct as possible about why this thing I just told you is bad. And that is super valuable when you're trying to figure out, you know, what is it that we're doing here, and is this actually any good?
Speaker 1
Yeah, absolutely. I'm assuming that a lot of people listening have done some of their own experimentation, but just in case you haven't, the thing that is to me, even at this moment, really revolutionary is the fact that you can be in a conversation with this tool. It's not like other situations where, even if you were able to Google a framework, which, you know, falls apart pretty quickly, you can't look at it and go, oh shit, I forgot, facilitation is one of the skills that should be on here. Add facilitation. That is the beauty of this thing: you can be in an ongoing iteration where you're asking for generation, but then you're doing refinement and being like, now collapse it to three levels. Now make it ten levels. Now tell me what this is like. Now add this thing. Now put it against market comps. The magic is being in the back and forth.
Speaker 2
And it does not require any technical ability, which is, I think, one of the huge things here. Because people... well, you do know what a Touch ID button is, though. So I guess I'll give you more credit.
Speaker 1
Give me a certificate.
Speaker 2
The reason I bring that up is because people have been pointing to this AI moment and comparing it, at least in the HR space, to the creation of relational databases. That was a big deal when those were invented and became a thing, and became, you know, huge pieces of software that we use in organizations now. But if you weren't technical, you weren't going to go play with a relational database. Or even before that, in the early days of the internet, people are comparing it: this AI moment is like the beginning of the internet. Only super nerds were the ones playing with the internet in 1994 or earlier, and that is not the case right now. All you have to do is go log in, make an account, and start typing to an AI, and you can start interacting with it. So if there's anybody out there who has been reticent to start experimenting because they think they don't know how to get started, I promise you there is no technical expertise needed to start playing with it now.
Speaker 1
I feel like something I've been thinking a lot about is that because we're humans, and our capacity for knowing is finite, and our desire for sense-making and storytelling is massive, particularly in the complex systems we work in, but certainly as the world turns faster and there's increasing complexity, the limitation and the natural output of the fact that we are human beings steering a thing is the simplification and the synthesis of what's going on. It's almost a survival mechanism. Exactly. Exactly. In an example like the one that you gave, and we say this in companies all the time: you go to your board meeting as the C-suite, and for the quarter you put together a PowerPoint deck that has bullet points with sub-bullet points to explain what has happened. But it's very limited by human cognition and understanding and data and everything else. And I just think that there's a world coming where we won't have to do that, and where we could just know a lot more and have a lot more richness without actually having to personally sense-make and dig into minutiae.
Speaker 2
Yeah, I completely agree. And the thing that is going to be a bit... the word I want to use is mindfuck, but I'm going to say another one just in case we don't want to use that one. The thing that is going to be hard to wrap our minds around is that some early AI research was really focused on giving AIs games to figure out: Go, chess, StarCraft, which is near and dear to my heart. You give an AI a game because games have clear rules, and you can tell whether or not you're winning the game or learning the game. And one of the things that happened pretty quickly, I think across all of these games, is that, first of all, it very quickly became very good at most games. And then, second, it would make moves that seemed stupid, seemed wrong, but inevitably it would end up dominating the game and it would win. The reason the moves seemed stupid is that it was not using, kind of, human constructs, human principles for how to play the game. It was really figuring out from first principles how to play the game. Well, what is that going to look like or feel like in our organizations when AIs start doing things or telling us things that don't necessarily jibe with our interpretation? That's one of those knock-on effects I think is going to be very interesting to figure out. Do you trust the AI because it obviously is making sense of the information in a way that is just beyond human comprehension? Or is the AI actually going off track? I don't know how you necessarily figure that out. Is that going to be a skill that people are going to be able to develop in the future? A hundred years from now, every organization has, like, the AI sense-maker who is very close with the AI. It becomes like an oracle. That's weird. Oh, I just freaked my own brain out.
Speaker 1
Well, to me, what you're describing is just fodder for experimentation. Yeah. So right now, when we have a talent gap and we need to upskill in our organization, the limitation of our imagination is like, okay, well, we could hire some new people, we could teach some people some stuff, we could guess what those skills are. Maybe the counterintuitive move is that everyone does a headstand for five minutes a day because, ultimately, that increases oxygen to the brain and it makes us more capable of problem solving. I don't know. I think it's interesting to be able to run even more counterintuitive, radical, not obviously correlated experiments in the future that are inspired by something, not just completely random.
Speaker 2
That just makes the argument for why the stuff that we're dealing with today, and helping organizations with today, about being comfortable with experimentation and building that as a muscle that we have and strengthen all the time, matters. So that in that future, it's just like, okay, yeah, fine, we've got new inspiration for some experiments to run, and it happens to be coming from an AI. Great, who cares?
Speaker 1
Totally, who cares? It's like, over time, hopefully... Right now, when we're gonna talk about a new return-to-office policy, for example, you have an opinion, the custodian has an opinion, the CEO has an opinion, and then we're gonna look at the internet and see what ten other companies are doing. Hopefully this is just a much larger data set to add into the soup of possible experiments. Yeah. Yeah. So I want to talk in a more grounded way about the move from level four to level five, level four being the talent marketplace. We've talked about talent marketplaces a lot on this show, and they're the future. Side note, y'all: some of you are not ready for this, but this is happening. It's happening. So strap in. Let's talk about how we see AI, and I would just say tooling broadly, but certainly intelligent tooling, impacting things like talent marketplaces.
Speaker 2
Yeah. So I think this is actually a good place to talk about a thing that I was hoping we would get to, which is: I'm wondering if we are over-indexing on the chat interface to understand AI, because that's what is popular and has kind of popped right now. Yeah. So when I think about talent marketplaces and AI, I don't think about a chat interface at all. I think about an AI that is sucking up all of the information about the skills and interests and abilities and previous work experiences, what went well and what didn't go well, communication styles, literally everything about the people in an organization on one side; and every opportunity that needs a human attached to it on the other side, with all of those characteristics; and making really smart and non-intuitive connections between those two sides of the marketplace. Not just a person in the middle who is trying to look at two different spreadsheets and send emails back and forth to connect people to opportunities. That's the obvious, I think, AI connection to the talent marketplace. Yes.
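The two-sided matching being described can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not anything from the conversation: the people, the skills, and the scoring rule (count of overlapping skills, hardest opportunities staffed first) are all invented for the example, and a real marketplace system would weight interests, availability, communication style, and much more.

```python
# Hypothetical example data: each person has a set of skills, and each
# opportunity has a set of required skills. None of these names come
# from the conversation; they exist only to make the sketch runnable.
people = {
    "ana":   {"python", "facilitation", "writing"},
    "ben":   {"design", "research"},
    "chris": {"python", "data", "research"},
}

opportunities = {
    "survey-analysis": {"python", "data"},
    "workshop":        {"facilitation", "writing"},
}

def match(people, opportunities):
    """Greedily assign the best-covering available person to each opportunity."""
    assignments = {}
    available = set(people)
    # Staff the opportunities with the most required skills first.
    for opp, needed in sorted(opportunities.items(),
                              key=lambda kv: -len(kv[1])):
        # Score each free person by how many required skills they cover.
        best = max(available,
                   key=lambda p: len(people[p] & needed),
                   default=None)
        if best is not None and people[best] & needed:
            assignments[opp] = best
            available.discard(best)
    return assignments

print(match(people, opportunities))
```

Even this toy version shows the shape of the problem: the interface is data in, assignments out, with no chat involved, which is the point being made above.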
Speaker 1
Agreed. I would add two pinches of spice to that soup. One is, when you talk about all of the information that's being sucked up, I think it's easy for people's minds to go to existing technology rather than AI that is listening to Zoom calls, Slack channels, Teams, email, and is gaining a lot of the information about how you shoot, move, and communicate without there being explicit, quote-unquote, data captured in a system meant for that purpose. So, like, there is a holistic view coming that is not just what I say I'm good at and how I sort of orient myself inside of my organization. I just wanted to add that bit, because I think it's very easy to be like, oh yeah, it'll process what's in our talent systems and our HRIS and whatever, but think much, much more broadly than that.
Speaker 2
Yeah, think about... I guess the analogy that's coming to mind for me is that in the course of doing work, we're creating a lot of, as of right now, useless exhaust: conversations going off into the ether, text messages, or whatever. The work is creating a bunch of data that's so far doing nothing. That's the stuff you're talking about, the things that are just kind of given off in the course of doing work. Which, I mean, obviously has privacy and ethical and various other implications that we're just figuring out.
Speaker 1
Totally. And the other thing is, you know, I think this is where a lot of the concepts that we've talked about on this show over the years, about the permeability of organizations, come into play. So everything that you said about the opportunities and the matching and the forecasting of what is going to be needed, et cetera: I think that in an AI-augmented world, that will extend outside of our organization, to be able to identify where those skills and capabilities and styles and time, et cetera, exist outside of our quote-unquote organization. I fundamentally think that this shift is going to accelerate the knowledge and idea that organizations as a construct are just constructs. But my point is, the marketplace will not ultimately be constrained by the walls of our company. It will be just like it is now when I say to The Ready, we need an instructional designer, anybody know how to do that? And everybody goes no. And then it's like, cool, should we get a consultant? Should we hire a partner? Should we go on Upwork? I don't know. AI will be able to do that work in a much, much better way than we do it now. So I think, on the one hand, everything will become much more atomized, where it's not like I need this person; it will be, I need this piece of work or this kind of experience or this specific moment. And on the other side, there will not be containment inside of people, roles, organizations, et cetera. A lot of those human-made boundaries will be brought down, which is just sort of an inversion of how things work right now: monolithic jobs in monolithic orgs. It's an unbundling and an atomization that we can't do now because of our little human brains.
Speaker 2
Do you think that ultimately ends up being a net positive or net negative for human beings?
Speaker 1
I have no idea, man. I mean, my guess is net positive. I mean, you know, play this back for me in two years and we'll talk about it. But I think two things. One is, I know that that is where the world has been headed for a long time, and a lot of the explicit boundaries around organizations and humans are based on our legal and capital structures, but don't really serve people. It's like we have to stay in these constructs because that's how we comply with laws and pay taxes and get benefits, but that's not really how work works and value gets created. So a world where we don't have to do that makes a lot more sense to me. And just drawing inspo from the DAO arc that we were on, the amount of opportunity that that unlocks for people in all different parts of the world, with all different kinds of access to education, et cetera, I think could be pretty revolutionary. And obviously there are gonna be some real challenges and downsides to all of this, I would imagine, but I just feel like the DAO thing was like 0.1 percent of what we're talking about here. But you still saw people have access to opportunity and be able to contribute who just, like, never would have otherwise. What do you think, Sam? What do you think about all that noise?