
Episode 58: AI is a Stress Test for Your UX: What Cracks Will It Show?
Finding Our Way
Show Notes
Jesse and Peter explore how AI is revealing the true value proposition of design teams. They discuss why “whoever controls the prompt controls the product” and why design leaders must understand their organization’s expectations before embracing AI. The more things change, the more they stay the same—AI may be new, but the fundamentals of design leadership remain critical.
Jesse’s presentation “The Elements of UX in the Age of AI” is now available as a digital download. Get your copy today.
Peter has just launched his masterclass “UX/Design Leadership Demystified” in two formats—self-study and cohort course. Learn more here.
Transcript
Peter: Make sure to stick around to the end of the episode to hear a couple of new offerings from Peter and Jesse.
Jesse: I’m Jesse James Garrett,
Peter: and I’m Peter Merholz.
Jesse: And we’re finding our way,
Peter: Navigating the opportunities
Jesse: and challenges
Peter: of design and design leadership,
Jesse: On today’s show, reflecting on my talk, The Elements of UX in the Age of AI, Peter and I sit down one-on-one to talk about AI and its implications for design roles, design processes, and design leaders. We’ll talk about the new skills teams will need, the old skills that won’t be going away, and why, in an AI-enabled world, whoever controls the prompt controls the product.
Peter: So, a few weeks ago now, you gave a talk on the Elements of User Experience in the Age of AI. And that’s where I want to start. As a UX guy, when I see commentary about the intersection of UX and AI, or rather, primarily, design and AI…
Jesse: mm-hmm.
Peter: …it typically focuses on the top two layers of the elements diagram, the surface layer and the skeleton layer. And really the surface layer. I don’t even know if we’re getting much from a workflow standpoint. I’m just seeing screen design…
Jesse: mm-hmm.
Peter: …being what’s being discussed. And I’m wondering, am I missing something? Where is the conversation happening about AI and how it’s affecting the lower levels of the diagram? The structural concerns, the scope concerns, the strategic concerns? ’cause that for me, given my background in strategic design, is where my focus is at.
And it also feels like, well, that’s the kind of thing AI can’t do. It requires my human brain. But I also don’t want to be that guy and be ignorant of the possibility that these tools are able to have a more kind of foundational…
Jesse: mm-hmm. Mm-hmm.
Peter: impact on the practice of developing user experiences.
AI’s Strengths: Analysis and Synthesis
Jesse: Yeah. Yeah. So the way that I tend to think about this technology is in terms of what I see as its two strengths, which are analysis and synthesis. Which is to say, finding patterns within a data set, and then extrapolating from those patterns something broader, right?
So the pattern finding is the analysis part, and analysis is the stuff that’s gonna kick in when you’re down at those lower levels on the elements of user experience, where you’re talking about strategy, where you’re talking about scope, where you’re talking about user needs, where you’re talking about business requirements, where you’re talking about business models, where you’re talking about functional requirements, content requirements, all of those kinds of things.
So this is where getting a whole bunch of data together and feeding it to the machine can help surface patterns that you might not otherwise see. And this analysis value proposition for the LLM is where I see it coming into play in these more kind of strategic and product strategy, scope oriented domains.
Then when you get toward the top layers of the elements, then you start to get into these areas where the synthesis matters more. Where it’s more about what can you create, what can you generate out of the insights that you’ve created, out of the, really, the constraints that you’ve identified on your design problem.
Because if we think about the double diamond, this is where divergent thinking comes in, where you are generating possibilities, creating ideas, and then convergent thinking comes in where you are refining those ideas based on criteria that you’ve developed. And so these are both areas where an LLM can potentially play a role in a user experience design process.
What we see though, is that in these analysis oriented areas, where you are turning user needs into insights, turning those insights into requirements where you are evaluating and refining possible strategic directions, these tend to be processes that are owned by people outside of design. They may be owned by people in a UX research role. They may be owned by people in a product leadership role. They may be owned by people who are in a business leadership role.
But often the direct purview of a design leader doesn’t actually extend all the way down the stack of the elements of user experience. And so what you see is a lot of the things that end up influencing user experience outcomes are actually owned by other roles in organizations.
So I think part of what you’re seeing is that what design leaders feel like they can authentically control is the stuff that’s closer to the top of the stack, whereas the activity that’s happening at the bottom of the stack is happening in other parts of the organization, or that design leaders are ceding their influence over those areas out of a sense that that’s somebody else’s job, and my product person is gonna handle the AI that generates requirements, and I’m not gonna try to handle that myself.
Peter: Weird that you say some of this, in part because I tend to think of the user needs part, the user research part, as very much within the realm of design, typically.
Jesse: It depends on the organization. In some organizations, the people who actually own that stuff don’t report into a design leader.
Peter: That’s increasingly true, but not historically true, at least when it comes to UX research. There’s other forms of research.
Jesse: Yes. Yeah. Yeah.
Design leaders telling on themselves
Peter: As you’re saying this, I’m thinking about something I saw a day or two ago on LinkedIn where a design leader was saying that unless you know code or are some Jony Ive-level brand craft wizard…
Jesse: mm.
Peter: that AI is going to take everything in the middle, or AI is going to subsume the work you do.
Jesse: Mm-hmm.
Peter: And I responded, well, maybe, if all your leading is production, but that’s not design, right? And so one of the things that’s been clear to me is design leaders telling on themselves about how they have led design and how they have abandoned those lower levels of the diagram.
Jesse: Yes.
Peter: And I say abandoned. It was not taken from them. If they knew what they were doing, it was there for them to lead. In my role as a design leader, I led folks doing the research. I led folks figuring out organizational models. I led folks developing the insights coming outta research that drove, that informed, I should say, product requirements. There were other means of informing product requirements…
Jesse: mm-hmm.
Peter: …certain kinds of customer conversations or whatever.
But, maybe the conversation about AI and UX is really just a conversation about UX and design.
It’s casting a light on just all the different ways, the varieties of ways that this has been led, this has been practiced in organizations, because, my point of view, both having led teams and working with design leaders of teams, is most of the teams I’m involved with have some responsibility all the way up and down the stack.
You know, they maybe have more responsibility the higher up the diagram you get, sure. And the lower down the diagram, there’s a conversation to be had, but it’s a conversation. They’re not simply taking user needs from someone or the strategic objectives from someone. I guess, if as a designer and design leader, all you did was execute on the synthesis parts, as you were calling them, the upper levels of the diagram, yes, it does appear that much of that work can be done by machines…
Jesse: mm-hmm.
Peter: …and that, for me seems like an opportunity for design.
But it’s intriguing how many people see it as a threat.
Jesse: Right. Well, so, as you know, this talk came out of the work that I’ve been doing with design leaders for the last several years as a leadership coach. And in working with design leaders on their leadership challenge, it was this recurring theme that kept coming up of like, I’ve gotta figure out what I’m gonna do about AI.
And what I found in those conversations is the value that AI potentially can deliver to your team depends a lot on the value that your team is seen as delivering to the larger organization. So if the value proposition of your team is narrowly focused on quality and speed of delivery of design assets, the value proposition of AI for your team is very different than it would be for a team where your value prop is more rooted in product strategy, user research, driving requirements, that kind of thing. And that kind of thing is gonna be highly variable because, as we’ve seen with the variety of design leaders we’ve talked with on this show over the last few years, there’s a wide range of different frames for the value proposition of design as a function.
And so where AI fits in, I think really relies on the leader clearly understanding what the organization thinks design is there to do for them.
Peter: I had a similar conversation with some folks probably two years ago now about design systems, and the rise of design systems. And these folks were thinking of putting together an assessment of your design system situation. And as they were sharing this with me, what I realized is that that assessment had very little to do with the design system.
Rather, that assessment was a probe on the organizational maturity when it came to matters of design and user experience. That, what you got out of that assessment was going to more be an indicator of what you were just talking about in terms of how the organizations that these leaders are in, understand design.
And I think this is something we’ve talked a lot about, but I don’t know if we’ve talked about it on the podcast. And I wrote a little bit about it a couple weeks ago, which is the benefit for design leaders in considering their team, as a function, as an organizational function of the firm.
Jesse: Yes.
Peter: … not as a set of practices or activities.
Jesse: Yes. Or as a group of people to defend or protect, right?
Design is a symbiote
Peter: Yeah. It’s more than just a group of people. There’s lots of ways you can slice these companies. You can have departments, you can have business units, you can have functions.
And when you think of design as a function, it gets very simple in terms of what others expect of your team. You mentioned the word value proposition, right? What is the value proposition of your team? And, when you think of the value proposition of your team, as if it were a function, you can start looking at analogies of, well, what are the value propositions of product management, of engineering, of marketing, of sales? And how do you line up with that? And design has a really hard time lining up with that. That value proposition is different across different companies because design as a function is like a, um, a symbiote, I was gonna say parasite, but let me, let me say symbiote…
Jesse: [Laughter] better.
Peter: …design as a function is a symbiote in that it ends up taking on the shape of the organization it’s part of in a way that other functions I don’t think do, right? Marketing is gonna kind of look the same wherever it is. Sales is gonna look the same wherever it is. Engineering’s gonna mostly look the same wherever it is. Design is going to have to take on the shape of the organization that it’s part of in order for it to deliver its value, because design’s value is, much more about multiplying the success of other functions than delivering something straight on its own, right?
And so, maybe before people start getting caught up in, how am I going to be disrupted by AI…
Jesse: mm-hmm.
Peter: … to do some groundwork in thinking about, how is my team showing up as a function of the firm? What is our value proposition? What do people expect of us? How satisfied are we with those expectations? Or do we need to change those expectations? Are people thinking of us primarily as UI production, or are people coming to us for the full stack of user experience delivery?
And then once you have a sense of what your value proposition is, what people’s expectations are of you, now I’m wondering, how AI can be a tool to enable you to realize your objective in terms of how you want the rest of the organization to see your team.
Jesse: Right. So there is delivering within your existing mandate, so to speak. Like, we’re gonna leverage this technology to better meet the expectations that have already been set, right?
And then there’s the question of, well, can we push the boundaries of those expectations? Can we make a play for a broader value proposition for design in the product development process, for design in the product strategy process, to have more of a voice, to have more of a point of view that it brings to the table on where all of this is going.
Whoever controls the prompt controls the product
Jesse: One of the things that I mentioned in the talk that I think is a really important piece for design leaders is, in an AI-enabled digital product development workflow, whoever controls the prompt controls the product. Whoever is talking to the robot that makes the thing is the person with the power. And so your choice as a design leader is to figure out which things do you want control of the prompt over.
Which areas of the product do you want your people to be the most prominent voice around, and building processes that support that voice and that engagement with the technology to elevate that value proposition.
So it may well be that you’ve got, as we were talking about, product and maybe even research, people who are nominally owners of the lower planes on the stack who don’t have a strong point of view, who don’t have, you know, a facility with the technology or an ability to wrangle it toward those objectives. If you can step into that void, if your people can do that better than their people can do that, you can make a play for a wider value proposition for your team and for design as a function. But you gotta master the prompt craft first.
The Role of Power
Peter: This is interesting. I’m glad we’re getting to prompts and, I suspected we would get there. This ties into something else that I don’t think you and I have discussed on the podcast, but have discussed outside of it, which is the three types of power.
This is something that I was introduced to about a month and a half ago at the Advancing Research Conference in a talk given by Robert Fabricant. And it’s a model where within any group of people, but let’s think about it within organizations, there are three distinct types of power that show up in these organizations.
What most people think of when they think of power is positional power, right? The senior most person in their ability to tell other people what to do. The second type of power that comes up is expertise power, that someone has special knowledge of a thing, and because of that, other people will listen to them because they don’t have that knowledge. So this person might not be particularly senior, but they’ve, to use the example you just shared, they’ve mastered prompt craft. They know prompts better than anybody else.
The third type of power is relational power. And that’s how people, it’s gonna sound mercenary when I say it, use relationships to make those connections with others within the organization to then realize their power for getting things done, getting the things they want done.
And what’s interesting is, as you’re talking to me about prompt craft, that suggests a kind of expertise power, right? I know how to wield this tool better than anyone else, and this tool is super important, and so you’re all gonna listen to me because of that. In these analyses of power, far and away, what is considered most important is relational power.
Jesse: Right.
Peter: What often comes up is people saying, well, what about positional power, where you can just tell people what to do? If you think about how people in higher positions tend to wield their power, they rarely do it by fiat. Yes, in the public consciousness, that’s what we see. But look around, even if you are a design leader, you’re not just telling your team what to do. You’re inspiring them, you’re engaging them, you’re making them want to do that thing. Not because you told them to, but because even if you’re senior to them, you’re wielding relational power to bring them along.
Jesse: Yes.
Peter: And so I find myself getting maybe a little stuck on this idea of the person who controls the prompt controls the product, because that is this demonstration of expertise power, which UXers often fall back on as why they should be listened to, because they’ve done the research, they’ve talked to the users, they’ve generated the insights. We’ve observed the tests.
We know what is going to be the best experience, so we should be in charge. And that never works or rarely works. So there’s clearly some value in expertise power. It gives you your credibility that you’re someone to listen to and engage with, but it feels like there’s something missing in that equation that you’ve been shaping.
Jesse: Yeah. Well, so two things come to mind for me around this. First of all, I definitely do not mean to leave relational power out of the equation. You’re not gonna get anything done just by having a phalanx of the most expert prompt crafters in the room, your hundred monkeys in there demanding that you be handed the authority over the entire product.
But the other part of it that I think makes this craft expertise different from other craft expertises is that it is manifestly an accelerant for creative processes, for product development processes, for product delivery processes, where your expertise doesn’t just make you an expert. It makes you the person who can deliver a better thing faster.
And so in these areas where, again, if there is a power void in the organization where somebody else hasn’t figured out how to close the gap around, let’s say, using AI to create really robust PRDs, if you’re able to take the junk that comes outta your PMs and turn it into really robust PRDs, you become the center of that expertise. You become the center of that influence, if you’ve mastered the technology that can bridge those gaps for the organization.
I’m not saying that that doesn’t come with a lot of political scaffolding to create that opportunity for the team. So the leader has still got to be engaging with and negotiating with all of their cross-functional partners, all of their executive stakeholders, to be able to make the case for why we should do things following all of this stuff that their team is producing with AI support.
But if they are able to do that, it starts to create a leverage point for more human-centered influence in product development. And so that’s, I think, the really interesting opportunity.
Peter: In your talk, you mentioned that you created the elements of user experience diagram because no one knew why you were there, like, why you were in the room. Why would I work with this information architect slash user experience person…
Jesse: He doesn’t even draw. Why is he here?
Peter: Yeah, how are you helping us develop products? And the diagram was a means to answer at least parts of that question. And so thus people knew, to bring you into the conversation.
Jesse: Right.
Peter: And that still feels in many ways, like, the circumstance today, like, UX is not eagerly sought after. There might have been a period where it was, but even at its most eagerly sought after, it was still relatively minor.
Jesse: Mm-hmm.
Peter: People recognized its value, people understood its importance, et cetera. Another way to say it is there was never a UX gold rush. There’s never been a design gold rush.
Jesse: Right.
Peter: There’s an AI gold rush going on right now.
Jesse: Yeah.
Peter: And there was something about when you were talking about the elements of user experience as this thing you needed to bring people along who could barely be bothered to understand why you were in the room. And now with AI you have to kind of beat them away, and many of them don’t even really wanna understand it, right?
They just want to do it. It’s, I gotta get on the AI thing. There’s something about this dynamic between UX still trying to pull people in, and AI being this gravity well.
AI in 2025 is like The Web in 1997
Jesse: Right. Well, so you may recall from our early years in this industry in the late 1990s, this brand new technology called the World Wide Web came along, and it was gonna transform everything, and they were wiring everything to be webby in one way or another. And nobody really knew why. Everybody just knew it was important and many of those things did not work, right?
Many of the projects and experiments and attempts to integrate web technology into enterprises, you know, in that sort of 1995 to 2005 timeframe, just plain didn’t work because they were bad ideas. We are in the bad ideas phase of this technology right now for sure.
I feel that what UX design was able to do for the web 20 years ago was provide some filters, provide some frameworks, provide some ways of thinking about these challenges that helped people separate good ideas from bad ideas. And I think that there’s a similar role for design to play now, in continuing to bring the human expertise to separate good uses of the technology from bad uses of the technology.
You know, when we talk about the use of research, one of the big things that comes up with AI is the concept of synthetic users. The idea of doing user research by basically asking LLMs to pretend to be users. This is a bad idea. This is not a good use of the technology. It is not a substitute for actual data. Again, if you want to do some analysis down there at the bottom of the stack, then you’re gonna get some high value use cases.
So separating the high value use cases from the low value use cases is part of the work that has to happen here. And I think that work is mostly going to fall, honestly, on design leaders even more than some of their cross-functional partners because, as you pointed out, the territory of design is so vague that if your goal is to drive human-centered process, drive human-centered outcomes, you might need to be piecing together a much more diverse portfolio of AI support tools than somebody whose narrow focus is just, get the code out faster, as might be the case with your engineering partner.
Earn Trust First
Peter: This is putting me in mind of our conversation with Amy Lokey, Chief Experience Officer at ServiceNow, where, as she told it, her team has for a couple years now really been at the vanguard internally within ServiceNow in figuring out how to best take advantage of AI tooling and AI opportunities, largely in service of creating highly usable and effective experiences, right?
They’re kind of the most boring of enterprise software, and I mean that with love, but very, very pragmatic enterprise software. It’s a lot about data-driven or data experiences. Lots of cutting and pasting from one thing into another thing, et cetera.
And recognizing that AI can play a role in automating a lot of this labor. And the value that they were able to articulate, that she was able to articulate from a user experience standpoint, was kind of classic 1994-era cost-justifying usability of time on task and how long it took people to do a thing.
And what was interesting about her story, I think in this regard… One, she and her team, she had the credibility such that others were listening to her, and that credibility had been built up over time by demonstrating that value proposition such that when she steps up to help the company figure out how to make the best use of this technology, people aren’t looking at her like, but you’re just the box-drawer, what do you know about AI?
But instead, oh, your prior work in helping us adopt the System Usability Scale allowed us to see how, when we improved our System Usability Scale scores, we improved customer outcomes, which led to greater customer satisfaction on various metrics that we track, which led to greater retention and, you know, business success.
So if, you’re coming up and saying, Hey, let’s let my team get out in front of this AI thing, we’re gonna listen to you because we know that you drive value.
Jesse: Right.
Peter: I think for me, I guess a lot of it is this functional concern. If you’re feeling fear as a design leader about AI, the solution isn’t to AI at it more.
The solution is to identify how you can raise the level of trust that others have with you in your organization, such that when you now want to engage with AI, they will listen to you.
Jesse: Yeah. Yeah. I think that AI is similar to UX in this regard, in that what it really has organizationally is a multiplier effect, but that multiplier effect depends on what’s already there to multiply. So, if you’ve already built the bridges, if you’ve already gained the trust, if you’ve already built the value proposition, AI will let you activate and multiply that value proposition.
If you haven’t already done that, if you’re dealing with a pretty scant value proposition, you’re not going to be able to multiply that very much with AI. So I think you’re absolutely right. The political groundwork has to be there. The operational groundwork has to be there. The cross-functional trust has to be there. The team engagement and commitment has to be there.
You know, there’s a lot of resistance on the part of design teams to engaging with these tools for the fear that that is going to take something away from them. It does absolutely depend on how the organization is approaching it, and if you are approaching it with thoughtfulness and sensitivity to where that multiplier effect can be applied, then you’re going to reap that effect more quickly. And if you’re just throwing stuff against the wall, you know, the spaghetti phase of AI development, trying to see what sticks, then yeah, things are gonna get messy.
So I think it is about being strategic. It’s about leaders being strategic about the value propositions of their teams. It’s about leaders being strategic about where that value proposition intersects with the larger ecosystem that they’re a part of, and where there is an opportunity to amplify that existing value proposition or build upon it.
Peter: Right, right. And I guess one of the things that we’ve talked about in the past is maturity, organizational maturity, design maturity. And one of the risks that many design leaders unknowingly kind of engage in is that, when it comes to design maturity, they are often way more mature than the organization that they’re part of.
Jesse: Yes.
Peter: And they tried to show up as this very mature person and the organization’s not ready for them. This is something going back, when we spoke with Jehad Affoneh he talked about how he’d had jobs in the past where he couldn’t talk impact because the people around him wouldn’t know what do with an impact story.
He had to talk about internal collaboration, ’cause that’s what they valued at that organization, was, did other teams like working with his team?
And so I guess on that note, in thinking about accelerants, about this situation, right? The risk here is that design leaders embrace AI in a way that misses, is not aligned with, is not able to be taken up by the organization that they’re in.
Jesse: Right.
It might be amazing. It might be something that could very likely drive tons of value for this organization, but this organization is just not ready for it. And so it’s frankly gonna be wasted time and effort.
And so design leaders need to figure out where to pitch themselves, such that the AI-driven interventions that they are proposing are ones that can be taken up. And, when you meet your broader organization where they’re at, not where you are at, but where they’re at, click in with that and then over time bring the people around you along.
Jesse: Right. So, it’s about understanding the expectations being placed upon design as a function. It’s about being clear on what you see as your own value proposition, and the difference between those things and creating the space, if necessary, to expand how that value proposition is perceived by the organization or how that mandate is construed by the organization, yeah.
Practical Tactical
Peter: Let’s get a little practical, tactical.
Jesse: Yeah. Sure.
Peter: You talk about prompt craft and…
Jesse: mm-hmm.
Peter: …the person who controls the prompt controls the product. What is your understanding of the mechanism, of the process, by which that might actually happen? If you were to coach somebody, an interested design leader around getting control of that prompt, and then driving the direction of the product towards these human-centered ends, what would you coach them to do?
Jesse: Right. The first part is about just finding your opportunity. Finding the place where you can accelerate some part of the value that your team is there to create. So from organization to organization that might vary. Identifying the use cases within your workflow, your broader workflow, not just your design workflow, but the broader workflow of everything that you do together as a team to bring a digital product to market.
And looking at that through the lens of acceleration, and honestly, the lens of human expertise and figuring out where the human expertise is most valuable and preserving that, so that what you’ve got is AI not supplanting human expertise, but augmenting human expertise.
Often in digital product development there are these steps of translation. Translation of a strategy into requirements, translation of requirements into design specs, translation of design specs into actual design artifacts, translation of design artifacts into production code. Wherever you’ve got these stages of translation, those are places where the AI is gonna be a super valuable accelerant.
So in different organizations, there are gonna be different specific use cases within their workflows based on, again, what the team’s mandate is, as well as what the capabilities are that the team brings to bear.
But ideally what you’re gonna do is you’re gonna get your most nimble, abstract thinkers, and you’re gonna get your best writers together, and you’re gonna talk about how we use language to define what we do. And start to develop shared language, common vocabulary, controlled vocabulary that enables you to have a shared knowledge base of repeatable stuff that works. You know from the work that you’ve seen me do with the prompt craft that I’ve been able to develop some highly reliable, repeatable tools for myself in supporting some of the work that I do.
I can see that being scalable beyond an individual to an entire team, where you are collaborating on a knowledge base of reliable language, reliable, literally grammatical structures, that people can take and adapt and reapply in new contexts to create new solutions. And so it’s that shared understanding that ends up being really the collective source of value that a team ultimately ends up developing through this work.
Peter: Shared understanding of what, and when you say team, which team?
Jesse: I think you can define the boundaries of a team as broadly as you want to invite people into your prompting circle, and shared understanding of what creates consistent results. So this is the big challenge with these technologies, is that by their nature they are probabilistic. We want them to be a little bit inventive and be a little bit creative and come up with things that we don’t expect. The trouble is that sometimes we really need to control how much the machine is giving us things that we don’t expect, and so the ability for the team collectively to understand, here’s how we constrain the framing of a problem so as to produce a consistent result, ends up being the shared craft of the team itself.
Peter: Apart from accelerating, and maybe automating, these interpretive breakpoints in the process, how do you imagine AI tools changing how we develop products? You know, we’ve got some fairly well-worn digital product design processes, at least, not that everybody follows them.
Are we just doing our process a little better, a little faster, or do you foresee real shifts in how we work? For example, with the rise of design systems, some people thought that, oh, we should just start with high fidelity comps in our design process. Now, I actually think most of the time that’s a bad idea. But, you know, an argument can be made.
And so like what is being enabled that might shift the order of things, or the responsibility of things within a product development process?
Jesse: So I think it does depend on where you are in your product development process. Early stages, AI is gonna be great for rapid prototyping, right? We’ve already seen so many examples of this where there’s enough in the training data sets out there that if you sketch out a general sense of the functionality that you’re looking for, it can create something that looks like it does that thing. It won’t actually do that thing, it’ll just be a prototype, but it’ll be a pretty good prototype and maybe even a testable prototype with some additional layers of prompt craft behind it. You could probably create some pretty robust, fully instrumented prototypes of different product features and functionality and put them out into the world.
Once you get into later stages of development, I think that it becomes more about refinement and alignment, making sure that you are integrating features and functionality in consistent ways. In the talk, I talk about the prospect of human and machine readable documentation, the idea of creating product documentation that a person could read and understand what you’re doing together, and a machine could read and actually be able to take action on because it would have a fully formed understanding of what you were trying to create.
I can see organizations moving toward that as a means of activating this kind of potential. You know, you touched on design systems. I think this is one of the huge things where, as I see it, it doesn’t make sense to me for design systems not to have an LLM interface. To my mind, the future of the design system is that it’s a robot that you talk to, that you feed it requirements and it matches those requirements with the system that it’s learned and generates product for you, right? The idea that humans would continue to kind of like spelunk into design system documentation in order to cobble together bits and pieces of UI kind of doesn’t make any sense anymore in that world to me.
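Jesse's "design system as a robot you talk to" could be sketched, in toy form, as a requirement matched against a component registry. Everything here, including the registry, the component names, and the keyword matching, is invented for illustration; a real version would put an LLM in front of the design system's actual documentation rather than doing keyword lookup:

```python
# Toy sketch: a design system fronted by a conversational interface.
# Instead of a human spelunking through documentation, a requirement in
# plain language is matched against a component registry. The registry
# and keywords are hypothetical; a real system would delegate the
# matching to an LLM with the design system docs as its context.

COMPONENT_REGISTRY = {
    "date-picker": {"keywords": {"date", "calendar", "schedule"}},
    "data-table": {"keywords": {"table", "rows", "sortable"}},
    "toast": {"keywords": {"notification", "alert", "confirmation"}},
}

def suggest_components(requirement: str) -> list[str]:
    """Return registry components whose keywords appear in the requirement."""
    words = set(requirement.lower().split())
    return sorted(
        name for name, meta in COMPONENT_REGISTRY.items()
        if meta["keywords"] & words  # any keyword overlap counts as a match
    )

if __name__ == "__main__":
    # → ['date-picker', 'toast']
    print(suggest_components("users schedule a visit and get a confirmation"))
```

The design choice the sketch illustrates: the system, not the human, owns the mapping from stated need to UI pieces, which is exactly what makes manual documentation-spelunking feel obsolete in that world.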
The Importance of Discernment
Peter: One of the things you mentioned in your talk that’s related to this is how LLMs and any tool built on LLMs will kind of regress to mediocrity.
Jesse: Yes. Mm-hmm.
Peter: And the role of the human is to help get the solutions past mediocrity.
Jesse: Right.
Peter: Right. And so something I’ve been hearing about the role of not just designers, but anyone involved in product development, but primarily, say, designers and product managers, or at least people doing the work, there’s gonna be perhaps even greater importance in that idea of discernment.
Jesse: Mm-hmm.
Peter: Taste as it’s sometimes called.
Jesse: Mm-hmm.
Peter: Talk a little bit about that, and what the implications are, that it’s less about just turning the crank and getting something out the other end, but this application of discernment.
Jesse: Yeah, yeah. So there was a slide in the talk that just says B-Y-O-B-S-D, right? Bring your own bullshit detector, because the AI won’t be that for you. You have to be the one who knows more. So if you are working with the AI in a space that you are unfamiliar with, it is your responsibility to know more than the AI does, in order to be able to know when it’s feeding you something valid and when it’s not.
And so maybe that’s about choosing your use cases, and maybe that’s about developing more robust validation processes around the output that you get. But what we see over and over again is where people go wrong with this technology: you tried to design a submarine, and you knew nothing about fluid dynamics, nothing about the structural factors involved in submarine design. And so it gave you something that looked like a submarine but didn’t function like a submarine and, surprise, you drowned, right?
So this is the kind of thing that we’re seeing out there. Whereas, if someone with that taste, with that discernment, with that expertise, is able to leverage the tool and screen what comes out of the tool and say, this is valid, this is not valid, I’m gonna pay attention to this. I’m not gonna pay attention to that, that’s where you get the multiplier effect. But what gets multiplied is human expertise. Human capability.
Peter: And that leads to something that I wrote just this past weekend. I was trying to figure out what I wanted to write for my newsletter, and I ended up writing about something that you and I had spoken about a few days prior, which is the definition of skills when it comes to design.
So in my org design work, particularly when I create career frameworks for design organizations, at the heart of those career frameworks is a taxonomy of skills: interaction design, visual design, information architecture, et cetera. And I looked at my skill rubrics to try to get a sense of what AI does to these definitions of skills. What does it mean to be an interaction designer in an AI world? And as I looked at my rubric, I was pleased to realize that the rubric, as I had defined it, was already tool-agnostic.
Jesse: Mm-hmm.
Peter: It didn’t say anything about OmniGraffle, Visio, Figma; that’s not what the skill is.
The skill of interaction design is, are you able to design a system that allows people to interact with the system to accomplish their goals…
Jesse: mm-hmm.
Peter: …probably feel some sense of satisfaction, maybe even delight in doing so, and that is tool agnostic. Humans have been designing all kinds of stuff that provide that kind of sense for decades, if not millennia.
As you were talking about kind of enhancing these abilities though, or this concept of discernment, one of the challenges that comes with skills definitions is, skills are often about aptitude, but aptitude is different than taste. It’s hard to measure someone’s discernment ability.
Jesse: Yes.
Peter: I can say that, as you become more senior as an interaction designer, more advanced and developed as an interaction designer, you can design more and more kind of complicated and complex systems, wiring together different technological platforms, maybe online and offline platforms, like you can handle that complexity.
That’s usually what you think of when it comes to scale. And that’s typically what the definition of the skill involves. And so I’m thinking about how discernment’s gonna become way more important, right? So much of the value that people are currently delivering is their ability to, themselves, do the task…
Jesse: mm-hmm.
Peter: …just get it done. If we can delegate much of that “getting it done” to a tool, such that our job is now to shape it, mold it. As I think you said in the talk, think about it like you’re throwing clay, right? You throw the clay on the wheel and now you’re spinning it, and you could make an ashtray, like I’ve tried in the past, that looks like ass, or you can make something beautiful. For me, it raises this interesting question, which I don’t really grapple with in my career frameworks and career architectures, which is assessing discernment ability, assessing taste. Not that that’s not important, but it hasn’t been very important in UX design, right?
In UX design, what’s been more important is the ability to create something that works, that’s usable. I think we’re gonna be shining lights on different parts of the work than maybe has been shining on it before.
We were so focused on someone’s ability to grapple with tools, right? The number of resumes in the past that talked about, I can use Photoshop, or I can use Illustrator, and now I can use Figma. That’s all going away. And so what’s left as we consider candidates…
Jesse: right.
Peter: …as we build teams, as we think about the folks that we’re bringing together to do this work.
Jesse: Yeah. So to my mind, it comes back to really what designers have always done, with an important twist to it, which is, can you visualize the experience that you can see someone having?
How fully can you visualize that experience that someone is going to have with your product? How fully detailed is that vision? How many of the different parts of it can you really see in your head? And then having visualized that, can you conceptualize what it would take, architecturally, to create that as a digital product?
Can you conceptualize the breakdown of screens and components and in some cases data structures and other things that are necessary in order to realize that vision? And then the third part, and this is where it gets tricky for a lot of designers, can you express that in language? Can you linearize that in a way that an LLM can ingest and interpret and make sense of and move toward, and then can you take that result and iterate upon that, and build upon what it creates?
Peter: So I have three things that I wanna make sure we get to before we go.
The first you talked about, can you express it in language and linearize it? I’m wondering when you say language, do you mean specifically words or could it be words and pictures?
Jesse: It absolutely could be words and pictures, yes.
Peter: Okay. Because, thinking of designers, right? Designers are visual people, but with pictures you can communicate multiple streams of information that the LLM could be taking in to better understand what it is that is being asked of it.
Jesse: Yes. Yes. Multimodal is what they call it. Mm-hmm.
Peter: And, I think when people think of prompts, they think of typing lots of words.
And so it’ll be interesting to see how prompts evolve to accommodate multiple modalities of input.
My second question, in the talk, you mentioned how, back in the day, 2004, 2005, you were giving a talk around, websites that evolve based on use, that can adapt to use.
And this is something you and I have in common. This is something we both pursued a long time ago, and we’ve seen bits and pieces of it, right? If you look at any page on Amazon, that’s actually a demonstration of an emergent information architecture. The things that you are shown are based on prior behavior.
But one of the things that people keep talking about, at least in the design space, is kind of emergent UIs and how the UI can shape itself to what you need, not the content that it’s giving, but literally the tooling, the interface elements that you’re exposed to.
And I’m wondering, what do you think of that? Because this is something we’ve also been talking about for 25 years, and I keep not seeing.
Jesse: Well, I’ve never been into this vision to begin with. There’s not a lot of precedent for humans preferring infinitely customizable tools. Humans would much rather use a larger set of more narrowly focused tools than one big, giant Swiss army knife with 1700 blades on it that’s gonna flip different blades out depending on the context.
The various attempts at this, you know, just straight up haven’t worked. The closest thing that we’ve seen, I would say, have to do with more sort of task- or context-focused workspaces in UIs, where you can flip between modes, where I think about something like Photoshop, where you can just like really dive in and just do, like, pixel-level editing and like push all of the other stuff out of the way, and then when you’ve got to do some big kind of document stuff, you can bring the tools back in and do other kinds of things with it.
So I don’t see AI creating infinitely variable tools because humans don’t like infinitely variable tools. Humans like tools that they can habituate to. And that’s not to say that there isn’t a place for AI in creating other kinds of dynamism within these environments, but I think that that probably is going a step too far for human brains.
Peter: And one last thing kind of drafting on this, or maybe a different way at it, and something I’ve been suspecting, is about the development of these AI tools and the accessibility they give so many people to build their own software. Are we going to see more and more products for, I don’t wanna say smaller and smaller audiences, but for a bunch of audiences? Every audience, whatever it might be, can get its own product, because with these tools you can spin up something that really serves that particular segment.
This might not be a tool that, you know, gets to a billion dollars in ARR; maybe it only gets to $50 million in ARR, but $50 million isn’t nothing.
And are we gonna see more and more folks creating tools that generate $100,000 to $10 million in revenue and being fine with that? And it’s not quite artisanal. I don’t know if you can call something that’s created with AI artisanal…
Jesse: that’s an interesting question.
Peter: …that’s a whole different conversation about craft and the role of craft in this. But, that mindset of smaller…
Jesse: Yeah.
Peter: … special purpose, you know, kind of Kevin Kelly’s “thousand true fans” oriented software, instead of what always feels like everybody around us is trying to do, which is create something that goes big.
Jesse: Right? Yeah, I think so. I think there’s absolutely an opportunity there. Honestly, I think that’s a part of the larger thing that we’re likely to see, which, when I hear about vibe coding these days and people generating apps out of nothing, what they’re mostly making are tools for themselves to fill some gap, to fill some hole in their own workflow.
And so I could definitely see a lot of creative professionals out there potentially creating tools to support their own workflow in different ways out of this technology. What it takes to scale that, to be a commercial product, to give it the stability and the security and the reliability necessary to be a thing that you could sell to somebody is maybe a different level that a lot of people aren’t gonna get to.
But to make something that can run on your machine, that can help you quickly, you know, organize your task list or prioritize features or whatever the particular thing is, I can absolutely see a lot of that going on.
Peter: So just last question for you. What are we not talking about? What have I not asked about? Or what are you not seeing in the discourse that you think is important and worth exploration?
Language Matters
Jesse: Hmm.
It’s hard to think of what’s not in the discourse because there’s so much discourse.
I’m gonna come back to the emphasis on language, actually, because I think there is not enough talk about the linguistic craft here: the ways in which small changes in grammatical structure, in word choice, in the way that you phrase and frame problems, shape the result. Because that’s what the work is. Prompt work is problem framing for a machine to generate a response to the problem. The more effectively you can use the mechanics of language to frame a problem in a way the machine can understand, and the more you can develop your own theory of mind about the AI, in the way that we use that phrase in philosophy and psychology to describe how we respond to the internal mental state of another entity in the world, the better off you are. That theory of mind, and your own linguistic approach to engaging with the AI, is the skillset across the board, regardless of the problem that you’re trying to solve.
Peter: Sounds good. Let’s end there.
Jesse: Peter, thank you so much. This has been fun.
The Elements of UX in the Age of AI is now available as a digital download. Get your copy today at JesseJamesGarrett.com/ai.
Peter: Peter here. I’ve just launched two new formats of my Design Leadership Demystified Masterclass. You can take it either self-paced or with a cohort. For more information, visit petermerholz.com/masterclass.
Jesse: For more Finding Our Way, visit findingourway.design for past episodes and transcripts. You can now follow Finding Our Way on LinkedIn as well. For more about your hosts, visit our websites, petermerholz.com and jessejamesgarrett.com. If you’re curious about working with me as your coach, book your free introductory session at JesseJamesGarrett.com slash free coaching. If you’ve found value in something you’ve heard here today, we hope you’ll pass this episode along to someone else who can use it. Thanks for everything you do for others, and thanks so much for listening.