
65: Design—Stuck in the Middle with AI (ft. Christina Wodtke)
Finding Our Way
How Roles Shift with AI
Christina discusses product, design, and engineering overlaps, risks to certain jobs, and the enduring value designers bring.
Show Notes
Stanford professor Christina Wodtke joins Peter and Jesse in exploring the real contradictions of AI in design and product work: revolutionary prototyping speed versus the need for critical thinking, efficiency gains versus cognitive loss, and loving the technology while hating the exploitative companies building it. She shares exactly what designers and PMs must vigilantly protect.
Christina’s blog: https://eleganthack.com/
Peter’s website: https://petermerholz.com/
Jesse’s website: https://jessejamesgarrett.com/
Transcript
Jesse: I’m Jesse James Garrett,
Peter: and I’m Peter Merholz.
Jesse: And we’re finding our way,
Peter: navigating the opportunities
Jesse: and challenges
Peter: of design and design leadership.
Jesse: On today’s show, founder, author, tech industry veteran, and Stanford professor Christina Wodtke, joins us to talk about the sweeping impact of AI on design and product management, including shifting team roles, shifting creative processes, and shifting definitions of value in our work.
Peter: Hi, Christina. Thanks for joining us.
Christina: Hi Peter. Thanks for having me. And hi, Jesse.
Jesse: Hi, Christina.
Introducing Christina
Peter: So, in preparing for this conversation, I’ve been thinking about how I believe I’ve known you since, I want to say 1998 when I believe you were at Red Envelope as…
Christina: Nope. E-greetings. We were…
Peter: E-greetings, sorry, upstairs from Red Envelope, back in the late nineties when there was all kinds of tech going on in San Francisco.
Sorry, E-greetings. And I’ve known you as an in-house UX practitioner, a consulting UX practitioner, a product manager, a game designer, a teacher, an author. you’ve had a multi-layered, multi stage career. But what I don’t know is how you think of yourself today and how you introduce yourself.
So please…
Christina: [laughs]
Peter: Who is the Christina Wodtke of today?
Christina: You missed a couple. General manager and of course founder. Multiple time founder. But, you know, I keep busy. What can I tell you? Most of my time is spent at Stanford teaching. I’m a core lecturer there, which means I have a full-time appointment in which I teach product management and design, interaction design mostly, and game design as well.
So describing myself is never easy. I contain multitudes. But other than hanging out at Stanford with the kids, I write books on the odd occasion. I think I’m up to five now. And I also blog like a maniac, still.
You and I are some of the original bloggers from back in the day.
And then I help companies every so often when they come by and say, help, help. And I say, what you got? So the helping companies have been interesting. It tends to be a type of coaching that’s less life coaching and much more about how do I get my business sorted out. I work with CEOs mostly helping them figure out what’s my product strategy, what’s my business strategy, what’s my hiring strategy?
The word strategy tends to come up a lot.
Peter: Are these startups or more mature companies or all kinds of companies that you’re working with?
Christina: All kinds of companies. When it comes to the OKR work, I get a lot of big companies, you know, your Pepsi’s, et cetera. But for the more hands on coaching, those are almost all startups. I’m a startup person. I always have been. I always feel funny when I’m talking to a company and they’re like, we can’t do that because we’re cross-matrixed in these 15 different ways, And I’m like, Ooh, that sounds unpleasant.
So I’d much more prefer, the startups.
Stuck in the middle with AI
Jesse: So we were interested in having you join us because I’ve been reading a bunch of your writing lately around the topic of AI, and you’ve been doing some interesting explorations, and it’s been really interesting to see you sort through the implications of this stuff in your writing. And I’m curious where you would place yourself right now on the sort of skeptic-to-booster spectrum when it comes to AI.
Christina: Yeah, the doomer versus the booster. I like to think I’m stuck in the middle with you guys. There’s clowns on the left of me and jokers to the right.
Peter: Sure.
Christina: Yes, indeed.
Jesse: I’m keeping my ears covered.
Christina: Sorry, But it’s really maddening because it feels like if you say anything critical about any of the current generative AI companies, you’re somehow an idiot and backwards and a Luddite.
And if you say anything positive, you’re a lunatic, an overhyped, bubble loving fool.
So I say both things and I can be all those things at once.
If I was gonna nail it down and say, this is an extraordinary piece of the technology. I kind of doubt it will change the world the way the internet did. It doesn’t seem to me to be as deep as the internet, which really did change how we do almost everything.
But it is incredibly powerful and it’s amazing when you use it. You just become supercharged.
It also is not easy to learn. Everybody makes it sound like you just spend five minutes and all these amazing things happen. And I don’t find that to be the case at all.
I would say I was using it for six months before I got my head around starting to get value out of it. But of course I do have a day job, so maybe that might’ve been faster. But the hardest part of AI is that the companies suck so hard. It’s very, very painful.
Like, to use AI is to be complicit in this way that’s intensely unpleasant. I’m currently trying to stick only to Claude, but that puts me in a place where I don’t necessarily know what’s going on across the board. So occasionally I’ll use, OpenAI’s offerings, or I’ll use Gemini, but I can’t bring myself to touch Grok or Meta because I feel slimed if I do.
So I’m kind of trying to find a path forward through that. And I think the piece that I’ve written that most people are most familiar with is the one where I said I love AI, but I hate the companies who make it.
And of course I hate them. I’ve written five books. They have literally built their business on the back of mine, right?
I would say that I sell fewer books now. I will say, that I get less consulting engagements now, which in a strange way, I don’t mind.
When I go into do OKRs, people all have the same problems and I kinda wanna just go, just do a close reading of the book. But I love the really hard problems that I get from the people who want me for my mind instead. That’s really fun.
But how can I like a technology that’s built on my effort without paying it for it? It’s kind of, difficult in its heart.
And then of course, it’s the fact that they’re acting like colonizers as Karen Hao says in her wonderful book, The Empire of AI.
They’re exploiting people in poorer countries for both their work.
I didn’t know until I read Empire of AI that they were doing the same mistakes, … mistakes… the same cruel, active choices that Meta made around content moderation. They’re doing it all over again, causing deep trauma in people who can’t afford to get food. And so if you choose between trauma and food, you tend to pick food.
And of course there’s the energy contribution, which isn’t as bad as concrete, but it’s still kind of bad. I can go on for a really long time, the lack of any oversight.
Gee, you think if people talk to AI that maybe suicide might come up? Oh no, that would never happen.We don’t have be prepared for any negative consequences.
But two days vibe coding with Claude Code and then all of a sudden there’s an app and it’s working and it’s beautiful and it’s magical. And I’m like, yay. And then I deploy it and I’m like, I haven’t, yelled deploy, deploy, in such a long time. Maybe not since my startup, and it just feels good.
So yeah, it’s hard. Anybody who thinks it’s simple is somebody who doesn’t need to have AI in their life. And I don’t really have a choice because now it’s going to be a standard technology, so I have to figure out how to teach it, or I’m not a good teacher anymore. It’s hard.
Peter: I want to, I guess, kind of start with where you just ended off with teaching, but I’m going to, perhaps take an unexpected route, which is: you teach at Stanford, which means you are in the heart of the belly of the beast of tech and some of the bad acting that you were just identifying.
Christina: And I’m in computer science. I’m not over in the D School. I am in the heart of the AI planet.
Peter: And I’m curious from that point of view, I mean, many of the people who are these bad actors are Stanford graduates, but I’m curious what you’re hearing from your students. And if you teach undergrad, grad, that would be helpful to understand as well.
What are you hearing from your students around some of these things that you’re talking about? What is your sense of how are they seeing things?
What the students say
Christina: Absolutely. I mean, the first thing to realize is that Stanford is not a monolith. It’s sort of like when people talk about Google, Google’s like a bunch of city states. Stanford’s a bunch of city states.
The people in the business school, GSB, they’re very different than the people in the design school, which is not a truly a school,but we won’t go into that. But the D School folks are very different from the people who are in HCI inside of computer science which is where I’m living.
So what I find is my students are not as fiercely bottom line, money, money, money as some of them. And when they do tend to care a lot about money, it’s usually because they’re first generation low income students who have never had any money and would like to eat.
Once again, eating comes up a lot.
But I’m lucky because I became part of a program, oh gosh, I wanna say, four or five years ago, that was started by our chair Mehran Sahami called Embedded Ethics. And I decided to be one of the first pilot projects in which we work with a philosophy postdoc to put in ethical questions that tie into the topics that we’re covering.
So each of my class always looks at things through the design lens, through the technology lens, through the political lens, from the ethical lens. And I feel like because I teach mostly upperclassmen, I tend to teach junior seniors and master students. It’s sort of a odd middle ground. But they’re very influenceable at that age, which is really wonderful.
And so the things I can say, I can make them go, oh, they’re human beings that my stuff will be touching and do I wanna touch them inappropriately? No, we really don’t want to. We wanna do a nice thing. We’re gonna be thoughtful, we’re gonna think about the end possibilities that happen.
So I like my job ’cause it’s a little bit of a mission is to help them do the hard thinking through the complexity of what they’re facing.
And every once in a while I get the, you know, money, money, money types. And then we, yeah, enjoy lots of fine arguments about, you know, how much of your soul do you really wanna sell when you’re thinking through this and what does that mean? And I always enjoy those conversations.
I think I answered most of your questions. Did I? Oh, what do they think? What do they think?
Honestly, it could be a self-selection bias, but most of them are struggling with the same thing that you and I and most reasonable people are struggling with, which is, do I go all in in AI?
You know, because it will get them jobs. And there’s no question about that. And because they’re from Stanford, they shouldn’t be worrying about getting jobs, but they still do.
They care deeply about doing good. They often come to me, especially in my game design classes, they come to me and say, how can I be making a game when climate change is going to destroy the earth in the next few years? You know, how can I spend my time thinking about these unimportant things? We need to work on hard problems.
They’re very hungry to create meaning and make a difference. And that makes me really happy. And of course then I can say to them, well, you know, games can make a difference too. You can learn a lot from games, right now.
So I think most of them are really worried about the state of the world, and I don’t blame them. I’m worried.
Jesse: Despite your worries, you’ve chosen to jump in with this technology anyway, and I wonder what has helped you get to that place where you can sort of reconcile yourself to it?
Christina: Well, I tried to figure out who was the least evil. I ranked companies by how much harm are they really doing to the world. I could have gone even further, I suppose, and I could have gone with small models.
I haven’t quite got myself all the way up to running a local model, which is where I’d like to be, ’cause then I can get even more in the world of least evil. But this is the world I work in. This is where I live, you know? And I don’t think that you can ignore AI tech as a viable choice.
And the fact is that my students who I adore are gonna go work in tech and they need to know what AI is. They need to know what it doesn’t do. They need to know how it works. They need the answers to the same questions that I need.
And so helping them think through it, first I had to do the thinking myself is really the short answer to that. Yeah, it gives me some cognitive dissonance.
I realized I’d gone too far in the booster side when one day I was on LinkedIn and there’s a post by Timnit Gebru, who’s this amazing ethicist who’s kicked out by Google for being honest about what she thought in public. Don’t do that.
And she shared a really interesting article about the effect of AI, like I said, in the global south. And my first instinct was like vague nausea. And I said, oh, I know this, this is cognitive dissonance. I need to change what I’m looking at and reading.
And so I started following all the doomers, and, not the people who think AGI is gonna take over the world, but the people who are very worried about the real harms that we’re experiencing right now.
And I got myself to a place where I can look at the boosters and I can look at the hype and I can look at the dark side and I can look at the problems. And neither of them makes me wanna throw up any more than any normal person would wanna throw up, I suppose.
And the tech itself is, it’s just so remarkable too.
I mean, the reality is all three of us started in this world at the same time, and then it was the internet. It was making web pages, you know, you could kind of poke at it and you could throw something together, and you got that joy of seeing something go live. You got that joy of seeing people use the things you made, and that’s heady stuff.
And I have to say, I haven’t had this much fun making stuff since, you know, the early two thousands, I think. To sit here and fiddle, paddle around and be able to launch something is just profoundly satisfying for people like us who are makers. So, it has its cost and it has its pleasures.
Peter: You’ve been teaching for about a decade now, like focused on teaching. Is that right?
Christina: A little bit longer, I think, if you count CCA and General Assembly.
AI in the Classroom
Peter: You’ve been teaching for a while, and right now you’re teaching, it sounds like, both a kind of design class and a product management class. And I’m curious how embracing AI has caused you to evolve your syllabus, your curriculum.
What did you used to do? What did you let go of? What are you embracing? What are you recognizing is, like, essential that is not changing? Like, how has that happened for you?
Christina: Well, I started watching AI when ChatGPT first launched, and I didn’t do anything with it for a long time. I was just like, I’m gonna wait and see if it arrives. ’cause you have to remember, we’ve been waiting for AI to arrive for 20 years or longer. So seeing if it was actually gonna be a real thing that made a difference in everyday life and I think this last year was when it really went super mainstream, which is when I started to do with my deep dive, and it was with the desire to bring it into my classroom.
And so this is my first AI enhanced or AI embedded class which is the product management class. Actually, I guess you could argue, I started doing it last spring where I, just said, guys, you can vibe code, you can use AI to make assets for your games. You can use it to make music. You can use it to make images.
Do not steal other people’s work. If you could hire someone that would be better. If you can draw it yourself, that would be better. I built in some extra credit for people who are willing to do the creative work themselves, but…
The quality of the games went through the ceiling. It was just amazing. I was blown away. And they were just much more finished. And I think part of it is they didn’t have to spend a lot of time thinking about Unity and what was Unity gonna do.
They could spend more of that time thinking about how does this game have interesting pacing? How does the game mechanics work together? They were just able to get a lot farther along.
And then this fall, this is when I’m like, okay, you can use AI for this. You cannot use AI for that.
The rule of thumb is, first, what level of accuracy are you going to get? People don’t talk about hallucinations enough. When you realize that it’s 30 to 60% hallucinations depending on your setup, and that you have to do a lot of work to bring it down into single digits.
And there is no such thing as zero hallucinations. Being willing to make business decisions based on an AI “summary,” and I use scare quotes with summary is a scary premise for a business.
So a lot of what we had to do is go, okay, now how are we gonna generate this information? How are we gonna validate this information? When do we do our own research?
I became really interested in product sense, which I think a lot of people are referring to now as taste, which, you can’t fight language, but boy, it makes me want to.
So, I think it’s just really important to remember that when they’re using the ai, they’re not using their brain. And when they’re not using their brain, they’re not creating memories. When they’re not creating memories, they’re not gonna get it better at product sense.
And that’s what I explained to ’em. I just opened it up in the very first class and I said, here’s where you’re gonna use AI. Here’s where you’re not gonna use AI. Here’s why. If you wanna cheat, go ahead, because I don’t wanna play cop, but I will grade you on the quality of your results.
And if you use AI to do the writing, the results will be mediocre. And you’ll get a grade that reflects that because that’s kind of the nice thing right now is AI produces mediocre writing and they do a lot of writing in my class.
And I said I would rather see typos than have to read something boring.
And so far it seems to be going pretty well. I’m enjoying it. So almost everything has some AI and, in fact, this Thursday we’re gonna do a whole class on vibe coding.
The other thing I changed this year was I brought in a lot more guests than I usually do. People who have been doing PM and working with AI for a while, and that’s been really powerful as well.
Actually, it’s nice too, ’cause then I can sit down.
Peter: Ha.
Jesse: I wonder about where the taste issue comes into play in terms of the way that we interact with the technology itself. Because, you know, one thing that has come up for me over and over again in looking at what people get excited about and looking at what people are putting out in the world, is that the stuff that’s not very good that gets out, gets out because the people making it couldn’t tell that it wasn’t very good.
And so I find myself wondering about as practice evolves, how does that intuition get built in a practitioner? And how does the process potentially have to evolve to make sure they have the opportunities to leverage that intuition and not have all the decision making being taken away and given over to the technology.
Maintaining vigilance in the face of AI
Christina: Yeah, well, I mean, we’re at war with our own brains. As you guys I’m sure know, our brains have been evolved to be incredibly efficient because they’re huge calorie eating monsters when they’re being used to do thinking. And so there’s always a part of us that really wants to take the easy way out always. And you have to actively fight against that.
I really think that you have to be active in your care and feeding of your brain, so to speak.
So I’ve been referring to the product sense exercises as product sense pushups. So here you’re going to analyze the onboarding flow of these three different companies in three different sectors and write about what you see the differences are, and why do you think the choices they made were made? Those are the kinds of things that you have to do.
I mean, it was the same for me. I started writing a novel and my first instinct was, wow, this is so fast. This is amazing. This is interesting. And then it was like, well, this is sort of crappy. Why is it doing this? I don’t understand. It’s boring…
Oh, okay. Now I wanna go edit it. Oh wait, because I didn’t write it in the first place, I can’t hold the whole plot in my mind. What am I gonna do now? I’m gonna have to rewrite things. It’s been like this ongoing back and forth between, oh my God, it’s so fabulous, Oh my God, I’m not getting smarter.
And that’s part of it, is you have to commit to your own brain. So just like you wanna listen to good podcasts and you wanna read good books because you wanna make sure your brain is in a good shape. You need to do the same thing with AI. You have to be careful what you outsource.
Peter: You’ve used the phrase product sense a few times, which is a phrase I’ve heard since I left UX consulting and went in-house and found myself exposed to product managers who would use this phrase, but no one has ever defined it for me. And I’m assuming since you’re teaching it, you have a definition.
And I would love to hear how you define product sense. Is it a framework, is it a set of tools? Like what, what is that thing?
Defining “product sense” vs “taste”
Christina: So just like when you’re learning an instrument, you’re going to want to listen to lots of music, get a feeling of what makes a good piece, a bad piece, spend a lot of time practicing and doing repetitive activities. It’s all the same.
And what are we really building up? I think we’re building up an intuition about what’s good and bad. Like we often talk about the iPad is really intuitive and all it means is we know how it works and when we pick it up, we recognize the patterns that we’ve learned elsewhere, on our iPhone perhaps.
So I read this book a long time ago called Working Knowledge and they define intuition as compressed experience. And I think that’s what product sense is, is it’s compressed experience. You’ve seen lots and lots of products and you’ve seen lots and lots of data. You’ve seen lots of user tests. You’ve read about the financial changes in a given business.
And out of that body of work that you’ve taken into yourself, your brain has turned it into patterns and you can quickly reach into yourself and plot those patterns when you’re testing something new.
And so the reason I don’t like the word taste as much is because I think taste gets conflated with being a tastemaker, having, I guess, it’s also a feeling for what’s interesting, but that feeling seems to be…
Jesse: fashionability.
Christina: …yeah, it’s, it is too much about fashion and I think taste worries me that way. It’s like, hamburger menus are cool, but do they actually work? And when you look at the stuff that goes down the runway at a fashion show, is that something a real person would wear most of the time, no, but it’s very, very interesting.
So I feel like taste can become experience on steroids where you’ve become so deeply experienced that you’re bored with ordinary things. And I think in our particular business, product management, design, what have you, we can never get bored by the ordinary because ordinary people like ordinary things.
Blurring of roles
Jesse: I’d like to talk about product management, design, what have you, and especially the what have you, and the blurring of these roles that more and more people are speculating about, projecting is going to happen, as a result of this technology. And I’m curious about your point of view as somebody who has been a designer, has been a product manager, has taught design, has taught product management.
Like what do you see as how the interaction between these roles evolves with the introduction of these technologies?
Christina: Yeah, because I have my feet in multiple different spaces, I have sort of a sense of how people are reacting towards them. And the short answer seems everybody thinks their job’s going to go away which I’m not very convinced of. And actually it’s probably engineers who are in the most danger at this particular moment, especially at the, sort of, crappy low level ones. The very senior ones will be needed ’cause they’re the only ones who know when the AI is lying to them.
But with design and product management they’ve become more and more alike over the years. When I first started, the product manager was firmly in the business space and the designer was firmly in the experience space, in the “how does this software behave?” How do we organize the information that it holds? How do we present it to a user in a way that they can understand and find pleasurable?
But it’s been interesting to see how product has shifted much closer to user experience in that they do their own research and they, they do more research than we used to. I talk to any good product manager here in the Valley and they’re like, yeah, we talk to users every single week, period. We do research, we do testing, we do product discussions. We write, and I can’t think of the word, but quick prototyping exercises with them. Participatory design, that’s my word.
And these are product managers doing this. In fact, we should ask what’s going to happen to the user researchers who seem to be the ones who are probably in the most danger right now. And we often forget to talk about them. I saw something interesting, I saw that there was a new survey they looked at all these job listings across all the different job boards.
And it turns out designer is one of the most highly sought after one. And every time I go on LinkedIn, everybody’s like, nobody listens to me and there are no jobs. I’m never gonna get a job. And then I see a survey like this that says people are looking for good designers and I wonder what’s going on here.
This is an interesting little mystery. I think design and product have always had a tumultuous relationship because they both own the same thing, the user’s relationship to the product. And they both think they own it, and they both think that means nobody else gets to own it, which is problematic.
So the way I think of it is, I love Theresa Torres’ product trio, where you have an engineer, you have a designer, and you have a product manager. The problem I have with that when it’s drawn as a Venn is product managers like to put themselves in the middle.
And I disagree with that. I think that everybody wants to be in the middle, right?
I think that they need to remember that they’re representing the business first and foremost, and a happy customer makes for a good business. Unless you’re United. And then making people profoundly unhappy means rising profits, apparently. But we won’t go into enshittification just now, I don’t think.
But I think that you really wanna think about, as a designer, what do I bring to the table? And it should be your years of training. It should be your experience. It should be whatever product sense or designer’s intuition that you’ve been developing over the years. And the product manager needs to respect that the designer does have a body of knowledge that they do not own.
The best people like Marty Cagan and Theresa Torres talk about how much they adore working with designers and how critical working with a good designer matters. But not everybody is a good product manager. Sturgeon’s law applies everywhere. 90% of everything is crap if you don’t know it.
When the team includes AI
Peter: Well, I’m wondering if either as part of your teaching or, your consulting, or maybe going back to when you were more active, like directly leading teams, the concept of team. In fact, I think you wrote a book about teams, and that seems to be relevant to this. Now I’m remembering “The Team That Managed Itself”.
Am I remembering the title right?
Christina: Yes, the book that didn’t want a title.
Peter: Yeah. Well, but that I think reflects on this, right? Because I think something that’s often missing in the development of our practices, design, product, et cetera, is this recognition that you don’t do it alone and that you’re likely going to do it with people who are explicitly not like you, and that you need to figure out how to engage with them better.
And some schools, in some contexts try to do, you know, like, let’s have the business school students work with the I-school students. But whenever I’ve actually been involved in those things, like as a, you know, some type of sage, a graybeard, trying to help them out, it’s clear that no one sat them down and helped them figure out what it means to work together.
With this evolution of these roles, teaming seems even more important because we can’t rely on what we thought we knew about these roles. How do you approach that? How do you help coach or guide people through being better teammates?
Christina: Well, product management is my youngest class. I think this is the fourth year I’ve taught it. And I had to think about a lot of this stuff when I offered to start teaching product management because there were no product management classes for undergrads. I think there still aren’t, at Stanford, you have to be a grad to be able to get into a product management class.
And we were having a bunch of students who were graduating and becoming product– PMs. So the HCI group and I agreed that there should be a product management class and I agreed to teach it. And so I had to ask myself really, what is the unique value within design and what is the unique value of product management and how do I make sure I’m not turning out more product managers who think they’re designers.
I mean, I don’t know how many product management conferences you go to like Mind the Product or Product At Heart or whatever. But they resemble design conferences.
Business is incredibly important, right? You really have to understand pricing, you have to understand marketplaces, you have to understand go-to-market approaches. There’s like some very incredibly important business concepts that you have to understand as a PM.
There’s interpersonal dynamics. I have always taught a little bit about how to effectively team in my classes. Everybody works in teams and I figure if you’re gonna work in teams, in my class, I’m gonna teach you something about how to work in teams. How do you give feedback? How do you hear feedback? How do you create norms around the work you’re doing? And how do you decide how you’re gonna work together?
Just like a bunch of really sort of basic, I’d call it interpersonal hygiene, where you’re trying to figure out how not to get into unnecessary fights so you can save yourself for the really good and juicy ones.
And then the last piece was execution strategies. Like they should know what Agile is and they should know what Scrum is and they should know what the lean startup is and they should understand what it means to bring a product to market.
And it’s very different, ’cause when I went to design, yeah, there’s definitely a lot of user research and some of my students do go on and become user researchers. But starting to learn things like you don’t come up with one idea. You come up with 15, you embrace bad ideas, you generate wildly, you refine them, you think with your hands, like there’s just, once again a bunch of skills that I think are profoundly important for design.
And so that’s pretty much how I approached the differentiation and really tried to focus on the thing that brings your unique value forward.
Peter: It is almost reminiscent of Cooper’s paired design where you have one is the generator and the other is the synthesizer.
And they were, yeah, they were, this locked pair. what you’re talking about in terms of design and product management, designers generating and product managers kind of sifting through and synthesizing and distilling and there being a feedback loop there.
Christina: Yeah, a little bit. I like the product trio because I think that engineers are often overlooked for how creative they are. It’s gotta be one of the most creative professions out there. They literally make something out of nothing. The ability to figure out how to bring things into the world using only ones and zeros, I don’t know how we could not say that’s creative.
So when you have these three people together, you could do almost anything. Somebody who knows how to make enough money to keep the thing going, somebody who knows how to make it desirable enough that people will wanna use it, and somebody who knows how to make it actually work. It’s kind of ma– it is magic. It’s something that’s wonderful.
And I think one of the hardest problems about teaching is it’s really, really hard to have a class that isn’t all the same, like designers take design classes and engineers take engineering classes and product manager take, which means if I put together a team for an activity, I’ve got five PMs.
The very first year I taught it, I tried to get them to pick a role. You know, okay, who’s gonna be the designer and who’s gonna be just, it didn’t work. Everybody wanted to do everything. And that could be maybe, ’cause they’re Stanford students and they’re all good at everything as far as I can tell, which is kind of terrifying when you try to go in and teach them.
But it just doesn’t work. So I’m still trying to figure out how to do interdisciplinary training because I don’t think we have enough of it.
Jesse: I wanna come back to the question of what happens to the magic created by the product trio when they adopt a new robot pal. And the implications for how each of those disciplines continues to deliver against its value proposition with this new player in the mix.
Prototyping
Christina: Yeah, I think we’ve identified the first truly valuable use case, which is prototyping. I mean you know, you hear people say, oh, AI can’t do production code, but just the idea that no one ever has to sit there and manually hook together pages in Figma. So you could have some vaguely clickable prototype where half of it doesn’t work and the data is wrong.
And now you could spend that exact same amount of time and have a completely interactive prototype with good data. And of course everybody’s gonna say, oh, it’s done, let’s ship it, which it isn’t. But that’s always been true for every prototype. That seems to be the nature of prototypes, is the higher fidelity they are, the more people think it’s ready to launch.
I can’t believe how much better a designer it makes me, I’m sitting here, like I said for the last couple days, prototyping a new app in Claude Code. I’m a design… ex-sdesigner. I guess, I’m a guess, ex-everything. But I’m an ex-designer making a prototype in a command line interface. It’s like, what, who am I, what is this world I’m living in?
And I can quickly launch it and play with it and say, okay, I see that doesn’t feel right, I’m gonna change these. I remember reading How Designers Think, which I really love, by Brian Lawson. It’s an HCI classic. But it’s really more architects he is working with.
And he talks about how a designer while drawing is in a conversation with themselves. They’re not drawing to explain things, although sometimes they are, but as they draw out these ideas, and you know this very well, Jesse, with your love of diagrams, me too, you’re diagramming as a way of going, does it fit like this? Or does it fit like this? Is it more like that? More like that.
And so if you’re able to prototype something natively in the material that it will eventually be in, which is code, you can get such a good sense of what works. And I think that’s gotta be the biggest winning use case of all. It’s not as scary as trying to synthesize user data where you have problems with getting it wrong.
Like I said, you don’t make decisions off of hallucinations, but you’re not losing anything either. Like if you never planned to learn to code, which I haven’t coded in well over 20 years, then I’m, didn’t think I was gonna start now, but apparently I was wrong.
So, it’s just so satisfying.
And we’ve seen it, like we have all these coding Lovable and Bolts and everybody talk about vibe coding. When people say vibe coding, just replace it with the word prototyping and you can see the value and it’s more testable. You know, if you have something that has more links and real data, you get significantly better feedback from your customers because that you can have them act naturally.
Jesse: I wonder about the implications of that for the skillset of the person sitting at the prompt and what they need to bring. Because like we’ve talked about what a product person brings, we’ve talked about what a design person brings, and we’ve talked about what an engineer brings. And I wonder what the potential mix of those value props might actually be required to be really good at this thing that you’re talking about, the prototyping work.
Christina: It makes me crazy that everybody on LinkedIn, which is probably a terrible place to figure out what people are talking about. But…
Peter: it’s the best we got.
Christina: Ever since I rage-quit Twitter over Elon Musk, I’m kind of lost in good places to have conversations. So I still held a hope Blue Sky will finally get fun.
Peter: Maybe we can bring mailing lists back. That’s where we all met originally.
Christina: I miss mailing lists. Don’t even get me started. I never feel so old as when I talk about how much I miss mailing lists.
But I think that when people say vibe coding means we don’t need a PRD anymore, or stuff like that, I’m like, no, no, no. Let’s talk about the importance of figuring out who your target audience is, of figuring out what the market is like right now. Figuring out how should it look and how should it feel and how should it behave.
These are things that vibe coding… can’t be done until you get your head around that problem. I think that what’s exciting to me is I can go straight from a piece of paper, which is incredibly fast, to an interactive prototype, without having to work with all these klugy things that we’ve put in between it.
Like, let’s be honest, in a lot of ways Photoshop can be kind of klugy. Figma can be very klugy when you’re trying to do interactive stuff with it. Or you could argue, no, I’m fast with Photoshop, I’m faster with Photoshop than I am with a piece of paper and some colored pencils, and I’d rather do it that way.
But you can do either one and then you can upload it into the vibe code tool of your choice, and then it can make you an interactive prototype without spending hours of putting things together. So I think the thing that made people valuable is not going away even a little bit. I think the way that we realize that value is changing radically.
Team norms with AI
Peter: Jesse asked a question about this robot member of your team, and it kind of builds on what we were talking about earlier in terms of team dynamics and how teams learn to work with one another. And it made me think about norming.
What does it mean to norm with a robot buddy? Which then made me think about something you wrote recently about context engineering.
And I’m wondering if you could talk about that maybe in the frame of norming, like how you set up your robot buddies to better work with you and extrapolating that to what does it mean to work with the team?
Christina: Well, you know, it’s always a little bit dangerous to talk about robot buddies when we’re really talking about something that’s a probability machine, the most amazing, incredible probability machine that anybody’s built so far. But still, it is, it’s not a human who has feelings.
But on the other hand, the context engineering is a great way to talk about norming because when we come together and we talk about norming, norms are, of course, the often unspoken values that we share.
We have an understanding of, do we interrupt each other? Do we not interrupt each other? Do we raise our hands or do we just speak out? Like there’s a million tiny social decisions that different cultures have come up with different ways to solve for them. And so by spending some time when a team is first assembled to write down those norms, what are we gonna do?
What happens when you and I disagree? Do we take it offline? Do we argue with it right in front of the team, right then, there? If I have a problem with you, how do I handle that? With how we’re interacting? Like these are all the norms.
So the question then becomes, when I interact with Claude, which is as I’ve said, my favorite, I wanna tell Claude, how are you gonna interact with me?
I wanna say, please do not blow smoke up my butt. I’m not interested in your sycophantic ways. Stop using all those exclamation points. You sound like a damn cheerleader. If I wanted that, I’d go to ChatGPT.
You know, I really feel passionately about, here’s how I want you to talk to me. So I went ahead and I put in at the system level the norms of how do we talk to each other. I want you to challenge me.
I want you to, I went through a whole period where I was like. I’m very much a one shot prompter. I just like to write something in the chat box and go and hope for the best. So I had it always asking me questions. I said ask me questions before you do something. And then it would just ask me questions endlessly. So I got to ask me questions when you’re about 70%. You think with 70% confidence that you understand what I’m talking about.
And that seemed to solve the problem for me. So now I do one shot and ask me all these really useful questions, and then it just builds something and it’s really pretty perfect.
And in a lot of ways when we’re working with a team, we design our norms, but then we check in on them every week. We say, okay, are they still working? Do we need to come up with a new rule? Do we wanna get rid of one of our rules? Is it silly? And I think that’s the same thing as you have to think about context as sort of something that you’re continually refining as you work with these.
And it requires a lot of metacognition as a human being. You have to spend a lot of time thinking, what works for me? Do I like the blunt honesty or do I like a little compliment here and there? And be self-aware enough to be able to make those choices. And I think a lot of people aren’t. But it makes a huge difference to have that context there.
The robot buddy. Yeah. I think, I feel, I feel more like a cyborg. It’s like an exo-suit. I put it on and I can do all these things I couldn’t do before. Rather than a different individual, even though maybe it’s like Jarvis, you know, if you think about Iron Man, where there’s a voice there, but it doesn’t really belong to a body, it belongs to the suit.
Jesse: That’s quite the image.
Christina: I love it.
I love putting on my Claude suit and going out and conquering the world.
Jesse: Well, so much of the talk is about efficiency, right? And so much of the talk is about using AI to do more with less and being able to kind of get the most out of your teams and stuff. And I wonder about how you see the tension between kind of efficiency and quality, because you were talking about how AI really raised the quality of your students’ work. And at the same time when we talk about AI processes relative to other kinds of processes, it’s not necessarily that they’re making you more efficient, you’re just spreading your energy around in different ways, right.
Christina: Maybe I think that the question we should be asking ourselves is how are we gonna spend that time? You know, it’s just daylight savings time and everybody got an extra hour and there were endless newsletters going, what are you doing with your extra hour? Which is kind of silly because. We still sleep eight hours a day or eight, eight hours a night and or awake 16 hours a day.
But anyway I think with AI we really are getting back hours. Like something that would’ve taken me three days to write, took me three hours to write. I mean, it is, quite a large jump in certain types of tasks and all tasks. So now that I’ve gotten time back, what am I doing with that time?
And I fear that a lot of executives are going to say, “More, just do more.” But I think the real value would come if we did nothing with that time. Like we need to use that time to go for a walk and think about strategy. We need that time to leave the house and touch grass, to be healthier, to spend more time with our family because it’s been shown over and over again that when we go to a four day work week, that productivity goes up, not down, and that’s before AI.
I was talking to one of my students, my students are so frigging smart, it’s just such a delight to have hard conversations with them. And I said, Hey, it looks like, you know, inference is getting cheaper and cheaper. What do you think they’re gonna do with it? You know? And he goes, oh, they’re just gonna spend all that energy that would’ve been used on bigger models. And I feel like that’s the instinct. And capitalism is just to consume every single thing it can as opposed to stop and take a pause.
And it’s a very shortsighted version of capitalism that’s not science-based. It’s mythology based, I think, where if you leave the office, the, I can’t believe 9-9-6 is back. It’s, it’s, it’s nuts. It’s like, if you’re gonna be successful in AI, apparently you have to work from 9:00 AM to 9:00 PM six days a week.
And I’m like, no, no. If you’re successful with AI, you’re working four to six hours a day and you’re spending the rest of the time like reading and thinking and exploring and napping. All of these things have been shown to increase productivity. So I think the question is what are we doing with those hours?
And I would say it’s don’t use your common sense. Use the science instead.
Peter: Big fan of napping.
Christina: Napping is beautiful.
Jesse: He is a big fan of napping.
Peter: I, I am.
Jesse raised again, the work you do in game design and that was a thread I wanted to pursue a little bit, you’ve been working in game design for decades now.
Not necessarily full-time, but you’ve definitely had a foot in it. You’ve attended the Game Developers Conference. That’s a space that you’ve continued to attend to.
Christina: I worked at Zynga. I’m working with some game design companies on and off.
Peter: And I’m wondering, you know, I think Jesse and I have benefited in conversations with folks who are operating in spaces adjacent to the kind of digital product design, UX design, maybe even service design space that we tend to find ourselves in.
We had a great conversation with someone from the consumer packaging goods world and talking about brand and packaging design and those experiences and the challenges that community is facing. And I’m wondering what you are seeing in game design, what trends or realizations are occurring in that space that would be of interest and relevant to, a more kind of UX or digital product design practitioner or leader, like, just breaking us out of our kind of typical modes and, how might we learn from others’ experiences?
Christina: Yeah. Well, I mean, you know, I was saying earlier how AI and actually also VR and AR have been arriving for a very long time and never seemed to actually arrive, but they arrived in game design ages ago, like GDC quickly had an AI summit. Before ChatGPT was launched. And that’s where most ordinary people think of AI as beginning.
And AI has fully taken over all the other tracks as well, there’s a summit beforehand, and then there’s the main conference. And AI is everywhere. Because it’s such a useful tool to allow you to do the onerous work of making a game.
Also games are a hit business. A lot of things fail. People don’t wanna spend much money on creating them because it’s a huge risk every time you make a game. So having AI that can, you know, make your animation go faster, make more sprites more quickly, is valuable.
Then you have things like procedural generation, which is always super interesting. Like how do you automatically create levels or challenges? AI is just very obviously applicable to game design the way it isn’t as clear within UX.
There are many, many winning use cases in game design and it’s often hard to bring that back to UX. Because I think that UX is sort of problem obsessed, but not all opportunities are problems.
And game design is not solving the problem of boredom. It’s instead speaking to a certain longing that’s inherent in us to play. Playing is probably one of the most fundamental human activities that exists. Because through play we develop new technologies. We learn how gravity works and how the world works.
Play is how children learn. And it’s how adults learn too. It’s not just the province of children. So, it is hard to talk about what is game design doing that we should learn from? ‘Cause I did a bunch of that when I was very first, you know, doing my Mechanics of Magic talk, which you were kind enough to host me when you were at Groupon.
Peter: That’s right. That’s right. Well, 15 years ago or something.
Now, one thing I’m picking up on is, and it’s something that we don’t maybe talk enough about in the space of UX and digital product design, which is production.
The three of us have been doing this long enough that UX design, even the production aspects of UX design were innovation. We were creating stuff. We were the first to make shopping carts or checkout flows or, you know, whatever things that are now very beyond rudimentary.
And I think there’s still a little bit of that mindset that like every designer on the team should be doing–like, we need designers to be doing production art and production level stuff.
And we haven’t yet figured out how to gracefully hand this off to a tool the way that it sounds like game designers are like, yes, if you could just like, figure out all the in-betweens between, you know, this version of this character and that version of the character, great. Oh my God, that would be amazing.
Whereas we would be like, no, no man, a human’s gotta draw every single one of those in order for me to feel validated. And, there’s probably some type of production opportunity that’s untapped in the spaces we’re operating in that tools like this could help with.
Jesse: Well, so for our audience, you know, we’ve got a lot of folks who listen to this show are in-house leaders of design teams. They are, in some cases working inside large organizations with a million constraints. They’ve got product partners and engineering partners breathing down their necks, and they’ve got the executives telling them to figure out this AI thing. What would you advise those design leaders as to how to think about and approach this challenge?
The Power of Variations
Christina: One particular thing that I think AI is incredibly useful for, everybody just goes, AI is good for brainstorming, and they kind of wave their hands around.
But what AI is incredibly good at, because it doesn’t get bored, is variations. So if you wanted to see 20 versions of a certain page, all with a slightly different shade of teal, you could do that in 20 seconds, you know? And immediately go, oh, okay, I think this one’s better than this one. Or this is more on brand for us.
I love going, give me 20 ways that x, y, z you know, give me 20 titles for my blog post. Or things like that. So it’s really good at mass producing choices and then you, senior leader with your marvelous taste and product sense and intuition, can pick which of those is exactly the right one to move forward with.
I think it’s really good for that. It’s so fast. I think the other thing is, so something I saw is speed changes your process profoundly. So while I was vibe coding, the fact that I could go from, you know, changing a task from being a dropdown to being a checkbox or, you know, in, 20 seconds and then go, no, I want this other thing. No, I want this other thing, you know, just very quickly, the speed of its response allows you to quickly tune the interface to something that’s much better and much accurate.
And that was something I had never been able to do before because it’s always so slow. Like, think about it, what it would take to switch out a radio button for check boxes.
I can’t think of any way that takes me less than a minute, except maybe if I’m a pretty fast coder already. But if I’m not a coder, it’s gonna be kind of a pain in the ass. But instead it happens in seconds.
And so you have a freedom to explore things that you might feel, oh, I never have the time to do that. I don’t have time to explore. ‘Cause then you, instead of that, you could explore a completely different checkout process that resembled it. It could be a stupid checkout process, but you have the freedom to do stupid stuff.
I think at the disposability of the stuff that’s made by the tool, ’cause it’s made in seconds, you don’t have loss aversion around them to the same degree.
So you can throw out an entire coded prototype and build a new one in completely different code because it’s no big deal. Maybe you were at it for an hour, but you built this huge, massive thing that you can now look at and go, Ooh, no, no. I think that’s the wrong direction. I can see a lot of potential.
The bad side, the thing that people are gonna struggle with, especially designers, is it isn’t very good at not looking like crap. A lot of these prototypes look like poop. And if you have a vision in your head and you know how it should look, it’s really hard to get the AI to prototype something that looks the way you want it to look.
It’s significantly better to go off to one of the earlier tools, your Figma and Photoshops, make it exactly what it looked like and then say, here, go build this. It can do that. But if you wanna say, can you put four pixels of padding around that, good luck.
Peter: As you were talking, I was looking at my bookshelf and on the bookshelf not too far away, was a book called Information Architecture Blueprints for the Web. You might be familiar with this.
Christina: It’s a good little book. Still quite relevant, strangely enough.
Peter: Yeah, well, I’m sure, for those listening it was written by Christina,
Christina: In 2000.
Peter: 2003 according to the copyright. We came up as IAs, IA Summit talking on the sig-IA mailing list, attending IA conferences. I’m curious…
Because we’ve heard about the death of IA for a very long time, but it never quite goes away. And in fact, from what I’m seeing, it feels like in the last two or three years, even like recently, it’s, there’s something, there’s life being breathed back into that again.
And I’m wondering from your perspective, again as a teacher from that perspective in the classroom or, you know, consultant, whatever, like what is the state of information architecture from your point of view now?
Christina: it’s interesting because you know, we’re mostly talking about AI and so when we talk about AI, a lot of people assume we’re talking about generative AI. However, there are other models for creating stuff. And one of ’em, which could be significantly more energy efficient is based on using ontologies as opposed to just massive quantities of data that get squished.
And I, this is not my space, this is a, space that my friend Madonnalisa Chan, she’s like the superhero that nobody knows ’cause she never blogs or anything, but she’s insanely, insanely, ridiculously, stupidly brilliant. She’s just so good. And she’s been working with Salesforce for years, designing useful AI and machine learning based on her ontology work.
She is your classic library sciences style information architect. So it’s clear that there’s a huge potential in understanding how language is structured. So you’re working with the structure instead of the actual language. And that’s pretty interesting.
I never went down that far. I mean, to be honest, I think I’ve always been much more of an interaction designer than an information architect, if you are fussy enough to worry about what things are called. I’ve kind of given up on the naming thing. I wish the words were more respected, but they aren’t.
So once “literally” stopped meaning “literally,” I just kinda went okay, words, they’re just moving around.
But understanding how things are organized and what classification means and how humans use classifications for shortcuts is gonna be clearly vital to making an AI that’s more effective, more powerful, and significantly cheaper and faster. So yeah, some cool people are doing cool shit with AI and IA.
What’s Coming Next
Jesse: So as all of this stuff continues to evolve what are you most curious about?
Christina: I’m most curious about what’s gonna show up tomorrow, because it feels like every day something wild is showing up. Like suddenly there’s an AI only Instagram. I’m like, what the heck? Okay. Didn’t see that one coming. I feel like I say almost every day, didn’t see that one coming, all the time.
So in some ways, I’m incredibly curious what shape it takes. And I’m kind of thriving in a way that I haven’t felt in a long time because everything is so unpredictable and I just thrive on chaos and unpredictability. It makes me incredibly happy to be just delighted to suddenly go, whoa there’s a use case I hadn’t thought of.
So I’m curious what is going to happen next. I’m profoundly curious about what it’s going to mean to our workflows. I’m sure you saw recently the latest study, which is a proper study. Like, you know, the MIT 95% of AI things fail, was not a study. It was like, it’s got huge methodological problems.
But this one, this one was a proper study and it showed that AI can only do tasks as well as human beings, expert human beings, I should say, 2.5% of the time, more or less. Which is really rare. And so it’s clearly not eating our jobs tomorrow except it’s transforming our jobs today.
So how is it going to transform? What are the superpowers that are gonna be unlocked? What are we gonna do with that?
You know what I’m curious about? I’m curious about when we see the first AI-native product. So, you know how cars originally looked like carriages, right? The horseless carriage business and then, you know, TV looked like plays and internet was just brochures online and then something came along. It understood the medium well enough to be something brand new.
I am so curious what that something brand new is gonna be for AI. ‘Cause I don’t think we’ve seen it yet. It’s gonna be so cool. It’s gonna be so interesting.
Jesse: Exciting.
Christina: Yeah, I know. That’s, that’s the struggle, I’m telling you.
These stupid AI companies are dick wads and they’ve got the coolest toy on the block. It’s painful.
Jesse: Yeah. Christina Wodtke, thank you so much for being with us.
Christina: Thank you for having me.
Peter: Yes. Thank you. This was great. We could talk for another hour easily.
Jesse: If people wanna keep up with you and your ideas online, how can they do that?
Christina: Oh. You know, I still blog an Elegant Hack 25 years later, still putting stuff up there. You can follow me on LinkedIn, which apparently I’m on all the time. Gotta put all that random thinking energy elsewhere.
Peter: Thank you, Christina.
Christina: Okay, thanks.
Jesse: For more Finding Our Way, visit findingourway.design for past episodes and transcripts, or follow the show on LinkedIn. Visit petermerholz.com to find Peter’s newsletter, The Merholz Agenda, as well as Design Org Dimensions featuring his latest thinking and the actual tools he uses with clients.
For more about my leadership coaching and strategy consulting. Including my free one hour consultation, visit jessejamesgarrett.com. If you’ve found value in something you’ve heard today, we hope you’ll pass this episode along to someone else who can use it. Thanks for everything you do for others, and thanks so much for listening.


