Content + AI

Larry Swanson
Feb 25, 2024 • 32min

Rebecca Evanhoe: Conversation Design for AI and UX – Episode 19

Rebecca Evanhoe practices, teaches, and writes about conversation design, a key UX practice that is taking on fresh importance in the age of chat-based AI applications. Since the publication of her book Conversations with Things (co-authored with Diana Deibel) three years ago, the tech and media worlds have fundamentally transformed, but the conversation-design principles that she teaches remain as relevant as ever.

We talked about:
- the conversation design and UX writing courses she teaches
- reflections on "Conversations with Things," the book she co-wrote several years ago, and the changes in the conversation-design world since
- how the principles-based framework set out in their book helps designers decide whether, and how, to ascribe personality to a chat agent
- her identification as a UX designer
- how she's incorporating LLMs into her course curricula
- her take on the misappropriation of the term "prompt" in the new practices of "prompting" and "prompt engineering," and their divergence from its traditional use in the conversation design field
- the differences between the conversation designer role in the LLM world and in the NLP world
- the linguistic concept of "conversation repair" and how it manifests in "bot land"
- how to adjust confidence levels in conversation design
- how intent classification in NLU works
- her preference for humans and human conversation
- the importance of including people with a humanities background in conversation design
- the ongoing importance of humans in the content and conversation design process, given our ability to think strategically about how to maximize the success of conversational technology

Rebecca's bio

Rebecca Evanhoe is an author, teacher, and conversation designer. With degrees in chemistry and fiction writing, she's passionate about how interdisciplinary thinking can combine arts, humanities, sciences, and tech.
She teaches conversational UX design as a visiting assistant professor at Pratt Institute, and co-authored Conversations with Things: UX Design for Chat and Voice (Rosenfeld Media, 2021).

Connect with Rebecca online

LinkedIn

Video

Here’s the video version of our conversation: https://youtu.be/xJkB03uH8ek

Podcast intro transcript

This is the Content and AI podcast, episode number 19. We're all talking to computers a lot more these days - telling Alexa to set a timer, asking Midjourney to create an image for a party invitation, or prompting ChatGPT to draft an outline for a slide deck. Rebecca Evanhoe is an expert on the interaction design practices that guide these conversations. Three years ago, her book "Conversations with Things" set out a principles-based approach to conversation design that remains super-relevant in the age of large language models.

Interview transcript

Larry: Hi everyone. Welcome to episode number 19 of the Content and AI podcast. I am really happy today to welcome to the show Rebecca Evanhoe. Rebecca is really well known in the conversation design world. She's a conversation designer. She's the co-author of the really excellent book Conversations with Things that came out a few years ago, and she teaches conversation design and other kinds of design work at Pratt University in New York. So welcome to the show, Rebecca, tell the folks a little bit more about what you're up to these days.

Rebecca: Yeah, hi Larry, it's nice to be back. Yeah, these days I am teaching, I think you said conversation design, and specifically this semester I'm teaching a class in UX writing, which I love because it doesn't matter what kind of writing I'm teaching, it's like a chance to think about language and celebrate how cool language is with my students. And yeah, I've been teaching, I am doing some work at a cool place that I won't get into here. But yeah, it's been a really interesting couple of years.
Larry: Yeah, because we last talked right before your book came out, I think it was maybe a few months before the book came out. And since then, I mean, conversation design had been a thing. I had talked to Phillip Hunter and several other content designers before I had you and Diana on the show, but it seems like, I'm going to guess, that more has happened in the last four years than in the four years before you wrote the book. Is that accurate?

Rebecca: I think that's definitely accurate. Yeah, our book came out in April of 2021, and I think that ChatGPT became publicly available in November of 2022. So our book has been amazingly well received, tons of enthusiasm. It really seems to be sticking around and people are finding it useful. But if you Control-F and search our book, there is not one mention of the term large language model. And I think there's only one mention of natural language generation.

Rebecca: It's been interesting to look at our book through the lens of the fact that technology keeps changing. And I think, and other readers think as well, that it's based enough in principles that it really applies to any conversational technology, or at least that's the hope. And when I think back about the things that have happened in the last few years, when the book came out, I remember people kind of wanting us to put a couple things in the book that we didn't.

Rebecca: People really wanted more information about how you build an Alexa Skill or a Google Action. Those were very visible at the time. People also wanted us to put a list of prototyping tools for conversations into the book. And we didn't, and I think things like that future-proofed it a little bit, because Alexa Skills and Google Actions... Like Google Actions aren't around anymore, Alexa Skills are very much de-emphasized. And a lot of the prototyping tools that we had a few years ago were acquired or were kind of sunset. So yeah, I think we made some lucky decisions to future-proof it.
But certainly it doesn't have a mention of LLMs, which is-

Larry: Well that's really... I got to say, this is super interesting because I remember from, we talked on the Content Strategy Insights podcast about this, that you and Diana both emphasized principles. And I don't know that you specifically stated that, but in retrospect it's like, yeah, that's way better than focusing on any specific technology or practice. Can you talk... I remember you covered those really well in the book. But is it possible to do a quick overview of some of the guiding principles? And maybe more to the point, how are they helping you through the arrival of LLMs and generative pre-trained transformers and all that stuff?

Rebecca: Absolutely. I think that one of the concepts from the book that has become even more important today is the idea... And in our book we call it level of personification, and it's in the personality chapter. So I think a lot of people are thinking more about personality design, but also specifically how much of a character, how much of a mind the AI is sort of presenting itself as.

Rebecca: So is it presenting itself as a fully realized character that's your friend and refers to itself as I, or is it behaving more like a machine? So the example that I always give is if you have a remote control where your voice is the input, it doesn't need to be named Sandy, and it loves... Thanksgiving is its favorite holiday, and... It doesn't need to be a character with a mind. So thinking through that spectrum for really any AI experience you're creating, I think, is really important. How much of a person should it present itself as? I think that becomes a lot more visible.

Rebecca: And an example that I would give for the LLM world: if you talk to ChatGPT or Claude, those bots use I.
And you can ask them a little bit about themselves and they'll tell you, they'll generally clarify like, oh, I'm an AI so I don't have feelings, but I can describe feelings or talk to you about feelings, stuff like that. But then there are other LLMs that don't have any personification at all.

Rebecca: So for example, Perplexity AI is a platform that is an LLM and you can talk to it and it does all the LLM-ey stuff, meaning you could ask it to summarize, you can ask it to give you bulleted lists, you can ask it to imitate a turn-taking conversation with you, but it doesn't really present itself as a character at all. And I think those kinds of decisions are still very much ones that conversation designers should be involved in, because that level of personification really impacts user expectations, how users are going to behave toward it, and then how successful those interactions are going to be.

Larry: Yeah, that's really interesting. How do you make that decision? Because I can picture making the wrong decision for good reasons. Oh, we like our customers, we want to be close to them, so we're going to act like their friend, where it would probably be more appropriate in a business setting not to be that way. Are there sort of guidelines around that, how you decide the kind of personality?

Rebecca: Yeah, I mean in our book there's sort of a framework that walks through a lot of the facets of it. But generally I would say I think people over-personify these interactions. They think that having a character must be more interesting and fun, and they forget that people want their thing fixed, they want their task completed, they want their problem solved. And people also forget that lots of people are very happily solving these problems already through an app, through a website. People do like to solve their own problems, and conversations are not necessarily easier and more efficient unless they're designed to be so.
Rebecca: So yeah, I think one of the things that we think through in the framework is first defining interaction goals that are independent of the conversation. So an example that I always use is, if you're making a voice bot that takes orders for a drive-through,
Feb 18, 2024 • 30min

Andy Crestodina: Using AI to Improve Marketing Content Quality – Episode 18

Andy Crestodina discusses how AI enhances content quality at Orbit Media, focusing on audience research, persona development, and gap analysis. He emphasizes quality over quantity, efficient prompt management, and effective collaboration with AI tools in marketing content creation.
Feb 11, 2024 • 35min

Markus Edgar Hormess: Teaming with AI in Service Design – Episode 17

Markus Edgar Hormess Markus Edgar Hormess offers this advice: "Never prompt alone." Markus was working with AI long before the current wave of excitement. He experimented with early versions of ChatGPT and quickly identified new opportunities to collaborate with both his human colleagues and his new AI coworkers. He's currently building a community - Teaming with AI - to study and share these new practices and to explore the future of teamwork in the age of AI. We talked about: his background in strategic prototyping and how he's applying it in his Teaming with AI initiative his first exploration of AI, in 1986 one his first applications of current AI tech, a use of ChatGPT-2 to accelerate service design prototyping activities his work and experimentation on ways to engage AI tools as collaborators on design teams how to consume research on AI, but also the importance of getting out in the field since research develops more slowly than professional craft his insight that you should "never prompt alone" so that you and your collaborators can eliminate bias and get better answers some of the opportunities that AI creates for real-time research and accelerated implementation of research insights how important it is "to put people in the center of this" the benefits for design practitioners of diving in and experimenting with AI tools, always with collaborators Markus's bio Markus Edgar Hormeß is a well-known consultant, practitioner and educator in the field of service design and design thinking. In his daily work, Markus helps organizations tackle complex business problems and make team cultures more agile and human-centered. The focal point of his work is strategic prototyping, where he constantly pushes the boundaries of what a dedicated team can achieve with limited resources. 
Markus is a strong believer that we should break down the perceived boundaries between technology, design and business – and that cheap experiments and prototypes are efficient tools to move your company, your strategy, your team, or your project forward. Based on this mindset, he has shaped multi-year programmes to help multinationals shift towards a more hands-on, pragmatic and effective approach to customer experience and innovation. Markus has a passion for good design, human technology, practical experiments, authentic services, and playfulness in all things. He is co-founder of WorkPlayExperience, a service innovation consultancy which helps organizations worldwide change how their staff, partners, and customers work together – and how they can strategically discover and create new products and services. His practice builds on his experience of service design and business consulting, and on his background in theoretical physics.

In 2010, Markus co-initiated the world’s biggest service innovation event: the award-winning Global Service Jam. This was soon followed by the Global Sustainability Jam and the Global GovJam, and Markus has been a leading figure in establishing the culture of experimentation and prototyping which Jammers worldwide call “DoingNotTalking”.

Markus co-wrote “This is Service Design Doing” and “This is Service Design Methods”, top-selling books which have become the standard reference books for many practitioners and academics. He teaches service design, innovation, and sustainability at various universities globally, and is adjunct professor for service design thinking at IE Business School in Madrid. In 2023 he co-initiated the Teaming with AI conference and community. His growing interest centers on how AI influences our approach to teamwork and collaboration, as well as the broader impacts on innovation and the development of strategies that are resilient in the face of future challenges.
Connect with Markus online

LinkedIn

Teaming with AI website

Video

Here’s the video version of our conversation: https://youtu.be/HlHhpsr2lW4

Podcast intro transcript

This is the Content and AI podcast, episode number 17. As AI tools arrive in our workplaces, we're discovering that this isn't just another technology adoption cycle. The generative nature of tools like ChatGPT permits rapid iteration on ideas and quicker learning about their impact. For a prototyping strategist like Markus Edgar Hormess, adding these AI agents to his service-design teams has been a boon, letting him and his colleagues collaborate and experiment in ways they couldn't have imagined just a few years ago.

Interview transcript

Larry: Hi, everyone. Welcome to episode number 17 of the Content + AI podcast. I am really happy today to welcome to the show Markus Edgar Hormess. I first met Markus a year ago at a service design workshop in Amsterdam, and we've been talking ever since about getting him on the show. So it's great to finally have you here, Markus.

Larry: Markus, he's one of the co-authors of the book This is Service Design Doing. He's real active in the service design community, and in that world he's really focused on a strategic approach to prototyping, which is what we first wanted to talk about. And then AI came along. So we're on the Content + AI podcast. So anyhow, welcome, Markus. Tell the folks a little bit more about what you're up to these days.

Markus: Hey, Larry. Thank you for having me. Yeah. So you mentioned it, so I'm super interested in strategic prototyping and prototyping in all kinds of aspects. And when this whole wave of AI came about, we thought, "There are no books, there are no papers that tell you how to do this, so we need to prototype our way into this new world," and that's why we set up an initiative, which is called Teaming with AI, where we focus on the impact AI tools have on the way we collaborate in teams.
So a small group of people that have a common goal, that trust each other, hopefully, and try to make something happen in the world. Might be nonprofit, for-profit, wherever you are.

Markus: And so we set up a couple of events, a little Unconference early last year and one in the middle of the year. Then we started writing a white paper about this. This is about to be published soon, so hopefully we get some conversation about this. But all of this is really about giving a space, a play space, for people who are interested to explore what is happening there. Only a few people actually focus on that team aspect. That's why we have a strong focus on it, because, you know, in service design, what we always say is: what is the key skill that you have to have in service design? That's facilitation. That's working with a group, whether you're part of that group or you're facilitating a different group. And now one part of that group is AI, and how does it change things? It changes some things, and it doesn't change others, but certainly there's a lot of shift going on.

Larry: Yeah. There are two things in there that are really interesting to me. One is that we're all still humans and we're going to be throughout this, whatever this AI thing turns out to be. But also the fact that, I feel like, you're living in the future a little bit, because when I met you a year ago, you were already deep into this and really exploring it. And now you're way into this collaborative paper and you've given it a lot of thought and you're going to be providing these materials that you just said didn't exist yet. So thank you for that. But tell me a little bit about when and how did you first get interested in AI? And how does it fit in specifically with your... Because you first came to my attention, or you really stood out, in that workshop as the prototyping guy. And so talk a little bit more about that. Yeah.

Markus: Yeah, sure.
I gave this a bit of thought, and then I remembered something: back in 1986, I think I was in seventh or eighth year in school, I did a big presentation about the state of AI at the time. That was during one of these first waves, big promises in AI, "We're going to fix this by the end of the decade," and it never happened. But that was still when there was this kind of, "Oh, we can maybe do this." So this was the time of programming languages like LISP and stuff. I think that was where I got curious. Then I forgot about it for a long time. And then just after I finished university, I started to work at the Bavarian Research Center for Knowledge-Based Systems, which basically was a spinoff of the chair of AI at the LMU University. But that was, again, during a time when we were in niche use cases. The machines weren't fast enough to do the big stuff that we can do today. But that's where I learned that, yeah, niche use cases can be useful, and they still are to this day.

Markus: And then fast-forward, me getting into service design and innovation. And three or four years ago, no, three years ago now, when GPT-2 came out, it was accompanied by a wave of tools that would allow you to come up with better marketing texts. And that's where we picked them up and used them in prototyping. Because in service design, if you design a new customer experience or service, how do you make it tangible, right? And one super simple way is to create a little advertisement for a new idea that doesn't exist yet. It's easy to test because people know the format. So it's a really good way to test the waters, to see if people like or value what you're trying to sell.

Markus: And using these tools, there's this little, "Oh, give me 10 variations of a Facebook advertisement or a Google ad." And then the teams would just use these tools within our workshops. We get these 10 and then curate the ones where it's, "Oh, yeah, that fits what we thought."
And they could go faster, which is, I'm not obsessed by faster. There is a caveat there, but within the design process, being able to get something faster means you can iterate more, and that means you can learn more. So you can reflect on, "Oh, what does this do?
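The prototyping loop Markus describes - ask a model for ten ad variations, then let the team curate the ones that fit - can be sketched in a few lines of Python. The prompt wording, the `build_ad_prompt` helper, the sample service concept, and the OpenAI client call shown in the comment are all illustrative assumptions, not tools named in the episode:

```python
# A minimal sketch of the "give me 10 variations" prototyping step.
# Helper name, prompt wording, and model are assumptions for illustration.

def build_ad_prompt(concept: str, n: int = 10,
                    fmt: str = "Facebook advertisement") -> str:
    """Build a prompt asking an LLM for n short ad variations of a concept."""
    return (
        f"Write {n} different {fmt}s (one or two sentences each) "
        f"for this service concept:\n\n{concept}\n\n"
        "Number each variation, and vary the tone and the benefit emphasized."
    )

prompt = build_ad_prompt(
    "A grocery service that pre-bags a week of dinner ingredients for you"
)
print(prompt)

# With an API key available, the prompt could go to a hosted model, e.g.:
#
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   resp = client.chat.completions.create(
#       model="gpt-4o-mini",
#       messages=[{"role": "user", "content": prompt}],
#   )
#   print(resp.choices[0].message.content)
#
# The team then curates the variations that fit what they had in mind.
```

The curation step stays with the humans, which is Markus's point: the model supplies cheap variation, and the design team supplies the judgment about which variations are worth testing.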
Feb 5, 2024 • 30min

Dan Porder: From Poetry Teaching to Python Programming for AI – Episode 16

A few years ago, Dan Porder was teaching poetry to university students. Now he's at IKEA training large language models to generate useful, usable content for user experiences. He's picked up new skills along the way, like Python programming, but much of his work still relies on well-established content and design crafts like content strategy and inclusive design.

We talked about:
- his role as a senior content designer at IKEA, where he focuses on AI
- some of his early experiments in composing and evaluating poetry
- his longstanding interest in AI and the development of his tech skills
- how content designers can leverage their skills to work in AI
- his perception that there is currently more opportunity than threat to content professionals in the AI world
- the make-up of the cross-functional teams he works with: data scientists, engineers, developers, content people, designers, subject matter experts
- how to brief and guide generative AI to get the outputs your users need
- how writing abilities prepare content designers to do prompt engineering
- the stack of data and technology that underlies AI and the orchestration mechanisms that connect them
- some of the tools he uses in his AI design practice
- the role of data in content design for generative AI
- the importance of staying aware of bias in training data and always wearing your inclusive design hat
- the role of explainability in AI ethics
- the importance of knowing how to ask data scientists and engineers questions that reveal as much as possible about the inner workings of the "black box" in which AI content is generated
- his take on the democratization opportunities that arise with the arrival of AI tech

Dan's bio

Dan Porder is a Senior Content Designer and Content Engineer at IKEA. His recent work focuses on the intersection of AI, structured knowledge, and experience design. Outside of work, he runs an international writing community.
Connect with Dan online

LinkedIn

Video

Here’s the video version of our conversation: https://youtu.be/VFXLG4h6ylE

Podcast intro transcript

This is the Content and AI podcast, episode number 16. AI is quickly changing the way content designers work. New content duties are emerging that require fresh skills, but at the same time traditional skills like content strategy are becoming more important. In his work as a content designer at IKEA, Dan Porder has developed new skills, like Python programming, and has applied the writing skills he perfected as a poetry teacher as well as the inclusive design practices he developed earlier in his content design career.

Interview transcript

Larry: Hey everyone. Welcome to episode number 16 of the Content and AI podcast. I'm really happy today to welcome to the show Dan Porder. Dan is a senior content designer at IKEA, where he's currently focusing on AI stuff, and his title is content designer, but he is really more of a content architect. So welcome to the show, Dan. Tell me a little bit more about your AI and content adventures.

Dan: Hey Larry. Thanks for having me on. Yeah, maybe I could just start by giving a little bit of background. I think at heart, despite what I'm doing now, I think of myself as a writer, and that's been my life's focus since I was young. Writing poetry, writing fiction. I did my bachelor's in English literature and later did a master's, a master of fine arts, actually, in poetry. Some of it was more on the conceptual side, thinking of language as data. So there were some unusual experiments in the tech world even then for me. Using Google data to create poems. So imagining Google queries as a representation of the collective zeitgeist, and how can we leverage that data to create meaning in poetry? Or using NLP to find meaningful relationships in texts where you didn't know they were there.
But all of that then led me into copywriting, so like brand copywriting, product copywriting, ads, copywriting as creative direction.

Dan: And then eventually back to the Google data, so SEO copywriting and SEO strategy. And I focused for a while on optimization, research, data analysis for SEO, some technical SEO. And then, yeah, my recent journey has been more in the design world. Content design, content strategy, user experience design. And I'd always been interested in AI, and the question was always, how do you do that as a job? Particularly from the position I was coming from as a former student and teacher of poetry and writing. Of course, when ChatGPT came out, like for many people, the connection became clear to me and I started incorporating it immediately into all my work.

Dan: I realized that I also needed to brush up on my coding skills, and particularly get more invested in Python. And I took some courses specifically on generative AI and machine learning for that purpose, just to make sure I was prepared. But now I think I'm leaning more into the world of knowledge, thinking about the data that we need for AI. The data structures that create meaning for these systems to ingest or to retrieve or to do with what they need. And in the case of generative AI, this is content. This is a task that requires a content designer, content strategist. It's going to be primarily images, text, audio. So that's what I've been up to lately. And yeah, I'm excited to talk to you about it.

Larry: Well, that's great. I got to say, it's hard to imagine anyone better prepared for this stuff, because to go from playing with Google and poetry stuff, the notion of vectorized word embeddings was just like, "Oh, cool, that's another way to do that." I can almost picture this evolution going pretty smoothly for you. But a lot of content people are not as technically curious as you are, or haven't had the same technical opportunities.
And you have a lot of colleagues who are more like conventional content design kind of folks. Have you thought about how people who are less natively technically inclined can jump more into AI stuff?

Dan: Yeah. I think it's about leaning on their expertise, especially abstracting that expertise. So for a content designer who maybe imagines themselves more as a UX writer or comes from a copywriting background, it's an understanding of information, of messaging, of what content works best for people in what scenarios. And that kind of knowledge, that's less of the craft side and more of the wisdom of content, is incredibly valuable to data scientists and to engineers working on AI.

Dan: This is some of the expertise that's needed: subject matter expertise, including on content. So generative AI consumes data and puts out data. That data is content. You need a content person to figure out what it will be, what the use case is, and what content you want these models to produce on the other end, either for a system or for an end user. So you're giving up a bit of control on the craft side, but on the strategic side, if you're willing to have those conversations with the technical people, you are actually asserting control in a way.

Larry: Right. That's so interesting because, as you're saying that, I'm picturing... it's sort of like the way, with a lot of these models, there are attempts to capture subject matter expertise and incorporate it in there. But you also need that subject matter expertise to train the models. To write the prompts, do all the other stuff as well. Can you talk a little bit about that relationship between... and this gets at people's concerns of AI replacing them, because if we capture all that subject matter expertise, then all of a sudden it's like, "Oh, we don't need content designers." I personally don't think that's coming, but what do you think about that idea?

Dan: Yeah. People have talked a lot about this.
I think some of the concerns are overblown. Of course there's a grain of truth in this. Theoretically, if we were all to give all of our best data, most of which is just in our minds as experts, so it doesn't exist in the right data format, but if we were to, and we were to train models that somehow are still usable and not unwieldy as a result of that, you would start to replace people.

Dan: That's not what's going on right now. Those aren't the technical capabilities. Anyone who's using these tools or working with them can see that. And there's also just the actual process of properly curating the data and testing and iterating on methods of fine-tuning and reward functions, and getting the right feedback from the right experts. That's a lot of work. That's a lot of resources, even for small use cases. So I don't think that's the worry. I think it's more like an opportunity. This is an exciting opportunity to make your work more scalable, faster. I think, especially from the content design perspective, it's also an opportunity to assert governance over content creation through the consistency of machines, which doesn't necessarily always exist in people.

Larry: Right. And with what you just said, I realized that my question was sort of like I'm projecting the alarm that I feel in a lot of circles. But I think more often the answers are like what you just said. It's much more hopeful and optimistic in that, at every juncture, there's going to be more need for our expertise, which probably won't be codified in machines for the next couple of decades, anyway. So that kind of leads me back to one of the things that I wanted to talk about a little earlier, actually. You've done both conventional content design for regular, old digital products, and now you're working more on the AI side. Can you talk a little bit about the evolution of the practice as you go from one realm to another?

Dan: Yeah.
Well, I think, as we were just talking about, one thing to notice is the importance of cross-functional teams. So having not just the tech people in there, the data scientists and engineers and developers, but also the content people,
Jan 28, 2024 • 31min

Rebecca Nguyen: Collaborative Content Design Leadership at Indeed.com – Episode 15

In her work as a content designer at Indeed.com, Rebecca Nguyen is finding new opportunities to assume a leadership role on teams working with generative AI. Rebecca feels fortunate to work with teams that recognize the value of writing and design skills. She's also finding that generative AI is the perfect place for content design to take the lead.

We talked about:
- her work as a senior UX content designer at Indeed and her recent shift to focus on product teams using generative AI
- how well-suited content designers are to AI products
- the unique challenges of working with non-deterministic large language models
- their process for designing prompts and how they evaluate them
- her learning curve around the loss of some of the language control that you get in conventional content design
- the main differences between prompt engineering (the how) and content design (the what)
- her ability as a content designer to lead more in the AI space than in prior design roles
- how they balance the use of outsourced LLM solutions like OpenAI's versus developing their own models
- the lack of genuine intelligence in LLMs
- how her fear and concern about AI is eased the more she works in the LLM world
- how the evaluation component of designing content for AI creates more work for content folks
- one of the main benefits of LLMs: their ability to take on tedious, rote content work
- the child-like nature of LLMs
- the surprising liberating effects of simply not worrying about whether or not you have a seat at the proverbial table

Rebecca's bio

Rebecca Nguyen (she/her/hers) is a Senior UX Content Designer at Indeed. She’s been part of marketing, UX, and product design teams at Bankrate, Northwestern Mutual, and LPL Financial, where she established the content strategy practice. A Confab speaker and workshop instructor, Rebecca is also an award-winning memoirist.
Connect with Rebecca online LinkedIn RebeccaAnneNguyen.com Video Here’s the video version of our conversation: https://youtu.be/8WnxlXXKxeY Podcast intro transcript This is the Content and AI podcast, episode number 15. Just as content design was emerging as its own craft and profession, along came generative AI. At first it looked like ChatGPT and large language models might displace content designers (unfortunately, it appears from recent layoffs that some executives may still think this is the case), but at Indeed.com, Rebecca Nguyen has found that working with LLMs has given her more work, not less, and that her content design efforts are now more interesting, rewarding, and impactful. Interview transcript Larry: Hi everyone. Welcome to episode number 15 of the Content + AI podcast. I'm really happy today to welcome to the show Rebecca Nguyen. Rebecca is a senior UX content designer at Indeed. Welcome, Rebecca. Tell the folks a little bit more about what you do at Indeed. Rebecca: Hey, thank you so much, Larry. Great to be here. Yeah, I'm a senior UX content designer at Indeed. I've been there for a couple of years now, going on two years, and I work on product teams to make sure their content is useful and accessible and inclusive and all those goodies that we're used to. And in the past six months or so, my role has really shifted and I've been almost exclusively focused on working with product teams who are using generative AI in their products. Larry: And that's why I wanted to have you on the show is we talked about this a while back. And that's one way to think... One way I think about that is all of a sudden we have new collaborators in two senses. One, we have these new, we're talking to machines in our work because they're generating some of the language we work with, but there's also a lot of other new collaborators. Tell me a little bit about how the people around you have changed over the last six months. 
Rebecca: Yeah, that's such a great point. So we're probably, if we're working in product content, we're used to working with product managers, we're used to working with UX designers, engineers. And that has shifted in that the team that I am partnering with now is made up of engineers and product managers, but we're also working really, really closely with data scientists and we do not have a UX designer or UX researcher on the team right now. So UX content design is really the entire voice of UX in this group, which is really cool. Larry: That's really interesting because often we're the last one in. How does that feel going in there as a sole UX person? Rebecca: It's exciting. It's been a little bit intimidating, but I haven't found myself feeling completely lost or anything. I think it's been great. As we were chatting earlier and you said we're really... We're creating a content product when we're working with these language models. The output is text and language, and so who better could be suited to drive and design the language when working with one of these models? It's been a really natural fit. And then the activities and tasks and approach have been different from anything I've done before, but it's well-suited to a content designer's skills, I would say. Larry: Well, that's it. So what has that transition been like? You said the activities and the tasks differ. It sounds like it kind of rhymes with your old conventional product work, but how is it different now? Rebecca: Like that. Yeah, I sort of talk about it as if we think of a sandwich and in that content creation moment, that's the meat. That's sort of like when we're going through a design thinking process, we're doing some discovery or research or we're deciding on the problem that we want to solve, and then we get to that moment where we make the thing, we design the thing and we might be writing words. 
And after that we are iterating and getting feedback and seeing how it performs and measuring and iterating more, et cetera. Rebecca: The difference for me with generative AI has been spreading my focus out and becoming more of the bread. So instead of the meat, that creation moment, when you're working with a language model, the model takes on that task. They're the ones creating the content. And your focus as a human is all of that stuff on the periphery of that, so the prepping, which we would be sort of the prompt engineering and design where we're telling the model what we want it to do, and then the evaluation piece where we're looking at what the model did and saying, "Okay, was it successful? Did it follow directions? Could we do it better?" Rebecca: So it's almost like you become a teacher of content design instead of a content designer where you're actually making the thing yourself. Larry: Interesting. I have not heard it articulated that way, but that makes perfect sense because... Well, they're called learning models and you're the teacher. That's great. And you mentioned both prompts and one of the things you just said made me think that people always talk about prompt engineering, and you talked about engineering and designing prompts. Do you go into prompt creation with a designer hat on because you're working with engineers? Do you think more as a designer in that world? Rebecca: Yeah, I definitely think so. Particularly as a content designer, thinking about how does the language inside the prompt impact the output and to make sure that content design considerations are represented in the prompt as much as possible to make sure that we're getting the output where we want it to be. We're sort of preemptively correcting mistakes or anticipating mistakes that could happen. 
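The prompt-design approach Rebecca describes — front-loading content-design rules so mistakes are anticipated before the model generates anything — can be sketched as code. This is a purely illustrative example; the guideline text, the `build_messages` function, and the task are hypothetical stand-ins, not Indeed's actual prompts.

```python
# Hypothetical sketch: encode content-design constraints in the prompt
# so the model's output is steered before generation, not corrected after.

TONE_GUIDELINES = [
    "Use a calm, professional tone; avoid exclamation points.",
    "Do not congratulate the user for routine actions.",
    "Keep responses under 80 words.",
]

def build_messages(task: str, user_input: str) -> list[dict]:
    """Assemble a chat-style payload whose system prompt carries the
    content-design rules, anticipating mistakes up front."""
    system_prompt = (
        "You are a product assistant.\n"
        "Follow these writing rules:\n"
        + "\n".join(f"- {rule}" for rule in TONE_GUIDELINES)
        + f"\nTask: {task}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

messages = build_messages(
    task="Summarize the job posting for a candidate.",
    user_input="Senior UX Content Designer, remote, 5+ years experience.",
)
```

Because the model is non-deterministic, the rules live in the prompt rather than in post-hoc editing: every generation passes through the same constraints, which is the "bread" around the model's "meat" in Rebecca's sandwich analogy.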
Rebecca: For example, when you get familiar with a model like ChatGPT, you can see, and we all can see as content designers sort of that out-of-the-box tone that the model assumes, the model that's been trained on the internet. So it's a very casual tone. It is, in my opinion, it's overly friendly in a way that can be kind of annoying. There's lots of exclamation points, there's a lot of celebration for small things that may not require such celebration. It helped you with a task and it's like, "You're so welcome. Awesome." And you're like, "Calm down." Rebecca: That tone and that voice isn't always appropriate for a product. And so when you're getting in there and designing prompts, you have this opportunity to modify as best you can. And the cool thing about prompt engineering is that you can do a lot of playing around and you can see how different instructions impact the outputs and then tweak and adjust from there. But that was surprising to me because I think that on my team and at my organization, at first we were sort of thinking about this on the other end, once we see the output, then let's evaluate it and give feedback. But the problem is that once the output has happened, it's too late. It's not like working with a human where you can revise it and create this static thing. It's always going to be different every time. It's that non-deterministic nature of a large language model. And so really we want to get in at the prompt stage to try and drive and direct before the output happens. Larry: That's so interesting. But you're still getting some feedback from it, too. You mentioned earlier how one of the pieces of bread is about iteration in your sandwich. And then, as you're talking there, I'm also reminded back when you said that you don't have UX researchers on the team. Are there more automated ways of getting feedback? Like for you, because you're still looking at it after the fact to see compliance with... 
Not compliance, but sort of alignment with voice and tone and that kind of thing. I guess tell me a little bit about that loop. Rebecca: Yes. So we have content design at the beginning, which would be the... Actually, even before we do prompt design,
Jan 21, 2024 • 33min

May Habib: Pioneering AI Innovator and CEO of Writer.com – Episode 14

May Habib May Habib is the CEO at Writer.com, a generative-AI platform that has been helping enterprises use AI since 2020. Her company builds its own award-winning large language models and is pioneering approaches like "headless AI" to help employees across an enterprise use AI to be more creative and productive. We talked about: her work as CEO at Writer.com, a "full-stack generative-AI platform," for the past four years her decade-long work in the AI and NLP space, beginning with translation solutions her take on the "over-chat-ification" of AI products, the reliance on chat interfaces as opposed to other ways to access AI capabilities her prediction that 2024 will be the "get real" year for AI the use of fine-tuning and/or RAG to connect learning models the inadequacies of vector databases for knowledge retrieval and their exploration of knowledge graphs to fill the gap a new role, the "AI ontologist" another new role, the "AI program director" which includes a mix of left- and right-brain thinking and technical skills some of the use cases for "headless" AI their approach to securing and protecting the various kinds of data used in their LLM how she sees the role of data scientists in AI their tactical approach to building knowledge graphs for specific business use cases their work at Writer on no-code and low-code tooling to help their customers build solutions and tooling on the platform new content job roles that are emerging as AI takes hold in enterprises May's bio May Habib is CEO and co-founder of Writer, the only fully-integrated generative AI platform built for enterprises. Leading companies, including Vanguard, Intuit, L’Oreal, Accenture, Spotify, Uber, and more, choose Writer to help them deploy generative AI across their businesses, allowing them to automate and augment key operational activities and increase employee creativity and productivity. 
Writer’s family of large language models (LLMs) is state-of-the-art, topping leaderboards for natural language understanding and generation. The company’s security-first approach means that Writer’s large language models and generative AI platform are deployed inside an enterprise’s own computing infrastructure. Launched in 2020, Writer has seen immense success with customer adoption, has grown revenues by 10x in the last two years, and has over 150% net revenue retention. May and the Writer team have successfully raised over $126M in funding from notable investors, including ICONIQ Growth, Balderton Capital, and Insight Partners. May began her entrepreneurial journey as a teenager, and founded her first language startup, Qordoba, a localization software company, 10 years ago. May is an expert in AI-driven language generation, AI-related organizational change, and the evolving ways we use language online. She has been recognized for many different awards, including the recent 2023 Forbes AI 50 and Inc.'s 2023 Female Founder Award. She is a MELI Fellow with the Aspen Institute. She graduated from Harvard University and spends her time between San Francisco, where Writer is based, and London, where her two children live. Connect with May online LinkedIn email may at writer dot com Video Here’s the video version of our conversation: https://youtu.be/lFTfA4X8CkA Podcast intro transcript This is the Content and AI podcast, episode number 14. Over the past year and a half, innovative artificial intelligence startups have taken the tech and content worlds by storm. In her position as the CEO of the generative AI platform Writer.com, May Habib has been right in the middle of the excitement, and out in front of it. Writer and their clients were deploying LLM-driven generative AI programs inside of large enterprises long before OpenAI's ChatGPT 3 captured the headlines and launched the current wave of AI disruption. Interview transcript Larry: Hey everyone. 
Welcome to episode number 14 of the Content and AI podcast. I am really delighted today to welcome to the show May Habib. May is the CEO and co-founder at Writer, an app many of you are familiar with. They're just having a great year and I was excited to get her on the show towards the end of 2023 to talk about topping the MMLU leaderboard with Palmyra, their LLM, and closing a nice funding round. Sounds like things are going well at Writer, May. May: Oh, thanks Larry. It's so nice to come back and chat with you. Yeah, we've had a great year, thank goodness. I've got our last all-hands of the year after this conversation, and so it was definitely nice to look back. We do these weekly updates to the whole company. I write them, and I went back and looked at week one and compared it to week 52, and then one's like, "Oh, let's go back a little further." I went 2022, week 52, and then 2021, week 52, and yeah, it's awesome to see things build and all the progress. Larry: Yeah. Well and one thing, and you've been part of that progress. ChatGPT, which is where the current kerfuffle is all about, that's barely a year old, but Writer's older, and Qordoba was even older than that, right? May: Yeah, well we've been in the NLP space for a decade, me and Waseem, and starting in machine translation. I think we were able to come to the world of transformers with maybe two distinct advantages, I think, over the folks that are in the space now, OpenAI and others as well included in that. May: One is, we were very much less a technology in search of a problem. Because we saw so many content challenges in the enterprise that could be solved with AI, having come from translation. So, it allowed us to really take a solution-based, outcome-based approach to thinking about how to productize this cool technology versus not. 
May: Now, obviously a general-purpose AI-based chat has captured the imagination, and has been an incredible thing that the OpenAI team has introduced that we obviously didn't think of, but in a lot of ways what it's done is open up what people thought could be possible with AI, and it's made room for solutions like ours to really explode, because we really serve that enterprise need, that very solution-specific application that is enterprise-ready, is secure. So anyway, it has been a fun road and a really fun four years with Writer. Larry: One of the implications, and you know as much about this stuff as anybody, in fact you're right up there with OpenAI in terms of your accomplishments and the power, the service you offer. I'm curious, what are you like... And I think there's a couple of things in this question and that I'm hoping to get out of this conversation. One is just the general state of the AI market. A lot of what you just said, I think it's going to help people ground themselves and feel it. But I think one of my questions is, for example, is this just another SaaS app that the software in the background is an LLM, or will there be fundamentally different things you think that content folks have to consider as they go into both working with these tools and working on these tools? May: Yeah, I think maybe taking that question a couple of ways. One, the user-experience cut of the market, and then the where-are-dollars-being-spent cut of the market. I think it'll allow you to see the gap that we see and that we feel, actually, looking at it in these two ways. I think from an end-user perspective, that cut of the market, there is this over-chatification of what AI can do, and everything is a fricking dialogue to get stuff out of AI, and it's just so early, and the interfaces obviously shouldn't all be chat UIs, but that's kind of the case right now. 
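May's critique of over-chat-ification, and Writer's "headless AI" framing, point at generative calls embedded directly in a workflow with no chat UI at all. Here is a minimal, generic sketch of that pattern; the `summarize_ticket` pipeline and the `generate` stand-in are hypothetical illustrations, not Writer's actual API.

```python
# Generic sketch of "headless" AI: the model is invoked inside a workflow
# with structured input and output, and the end user never sees a chat box.
import json
from typing import Callable

def summarize_ticket(ticket: dict, generate: Callable[[str], str]) -> dict:
    """Enrich a support-ticket record with a model-written summary
    and urgency label, as one step in an automated pipeline."""
    prompt = (
        "Summarize this support ticket in one sentence and label its "
        "urgency as low, medium, or high. Reply as JSON with keys "
        "'summary' and 'urgency'.\n" + json.dumps(ticket)
    )
    reply = generate(prompt)
    # Merge the model's structured output back into the record.
    return {**ticket, **json.loads(reply)}

# Canned stand-in for a real model so the pipeline runs offline.
def fake_generate(prompt: str) -> str:
    return '{"summary": "Customer cannot log in.", "urgency": "high"}'

ticket = {"id": 42, "body": "I can't log in since yesterday!"}
result = summarize_ticket(ticket, fake_generate)
```

The point of the design is that the interface is the workflow itself: swap `fake_generate` for a real model call and the enrichment happens invisibly, rather than forcing every user through a dialogue.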
Whether somebody gave you a Copilot license or you're personally paying for ChatGPT Enterprise, I think most people aren't getting the value they thought they would, given all of the headlines. May: That adoption gap isn't because the capabilities aren't there because we are building the capabilities. They are fricking crazy magical, and I think when we chatted last, I said probably something along the lines, if this was 18 months ago, I probably said, "Larry generative AI is like giving everybody an assistant and a chief of staff." I mean, that's not what it's like anymore. It's giving you the best version of yourself, 20-years expert into the future. There is so much, even in 18 months, so much that the models can do. May: Anyway, all to say that the end-user experience cut of the market is super under-optimized and today, despite all of the hubbub, I can't go into my sales force and say, "I'm in London in January, who should I see of our deals that are closing in Q1?" So even the AI that's supposed to get built into all of our systems of record, isn't really doing the things that we want it to do. Folks who are trying to connect Copilot to their Microsoft data aren't seeing the kind of answers they would like. And I think power users who have figured out how to get a lot of value from ChatGPT are, but your median user really isn't. So, that's that cut of the market. May: In terms of where there are real dollars being spent here, I think the enterprise is probably over-investing in the infrastructure and the utility model layer, and are trying to rebuild from scratch every use case, and there are a lot of things that are breaking about that experience. And the total cost of ownership, I think, isn't making sense for a lot of companies. The accuracy and business impact of some of these pilots and POCs isn't materializing. May: So, next year is going to be get-real year, which is exciting. 
I think we'll see a lot more exciting end-user interfaces and experience that build on the toy making and piloting of tools this year. And then I think enterprises are going to be really looking for just more comprehensive solutions to filling generative AI needs internally. Larry: Yeah, a couple of follow-ups. First thing,
Jan 14, 2024 • 30min

Laura Costantino: Scaling Content Design to Work with LLMs – Episode 12

Laura Costantino, a content designer at Google, shares insights on training Large Language Models (LLMs) and navigating the challenges of designing content at scale. She discusses the importance of collaboration, adapting design systems for AI integration, and embracing new technologies with a proactive and curious approach.
Jan 7, 2024 • 29min

Chris Cameron: UX Writing for a Travel-Planning App – Episode 11

Chris Cameron At Booking.com, they've been helping travelers with their trip planning for many years. The arrival of generative AI has given them new ways to help travelers with this business-critical task. Over the past year, Chris Cameron has applied his UX writing and content strategy skills in ways both familiar and new to help build a new AI-powered Trip Planner tool that integrates with Booking.com's travel-booking app. We talked about: his work as a principal UX writer at Booking.com on their "writing system," which is sort of like their version of a design system for UX writers his recruitment to a "tiger team" at Booking to develop a new travel-planning AI chatbot for their travel-booking app the key differences between his prior product work and his work on this AI product the new kinds of collaboration that have arisen in his work on a generative AI product, in particular his work with machine-learning engineers the transition from the prototype of the app to its current position as an established product the product-feedback mechanisms that are built into the Booking "Trip Planner" how to jump start your learning if you're new to working on generative-AI tools how they were able to leverage components in their current design system to build the new Trip Planner app the prompt engineering skills he developed by creating an AI "story robot" for his three-year-old son his optimism about the employment prospects for UX writers how traditional content strategy practices like establishing voice and tone and consistent terminology manifest in AI product design how new AI practices are just as likely to show up as enterprise productivity improvements as in customer-facing products and features Chris's bio Chris Cameron has over 13 years of professional writing experience across journalism, marketing, and UX. 
As a Principal UX Writer at Booking.com, Chris oversees UX Writing Systems, managing the tools and workflows that enable over 80 UX writers to efficiently create high-quality content localised into over 45 languages and dialects. Born in Boston and raised in Phoenix, Chris now lives in Amsterdam with his wife and son. Connect with Chris online LinkedIn Video Here’s the video version of our conversation: https://youtu.be/bptOvimY4uU Podcast intro transcript This is the Content and AI podcast, episode number 11. As generative-AI tools are introduced into consumer products and enterprise workflows, the core work of content designers and UX writers still feels familiar, but the context for the work and many of its details are evolving. Over the past year, at Booking.com, where he has been working on an AI-powered travel-planning app, Chris Cameron has seen first-hand how the traditional concerns of content strategy and UX writing manifest in the world of generative AI. Interview transcript Larry: Hi, everyone. Welcome to Episode #11 of the Content + AI Podcast. I'm really happy today to welcome to the show Chris Cameron. Chris is a principal UX writer at Booking.com, the big travel booking agency based in Amsterdam. Welcome to the show, Chris. Tell the folks a little bit more about what you do there at Booking. Chris: Well, thanks, Larry, for having me. Yeah, I'll give a bit of my background as well. Like yourself, I started in journalism and then got into copywriting. And after moving to Amsterdam from the US at a very young age, 25, I guess, I eventually joined Booking in 2016, a little over seven years ago. And back then, the role was actually called copywriting. There was about 25 of us. And over the years we sort of discovered that we were actually UX writers, and we've become now this community of over 80 UX writers. And now, I am a principal UX writer, and the area I look after we call writing systems. 
And what that is is sort of like the writing version of design systems, but it's not so much a system, it's more like the tools and the workflows that we use to get our jobs done. So my role is to work on those tools and work on those workflows and make sure it's easy for our writers to get their jobs done in an efficient and easy way so they can create high quality content. And more recently, one of the areas I've been interested in looking into is GenAI and how we might use that to improve our workflows. Larry: Yeah, that's why I wanted to have you on the show. You told me about this product you developed, the Trip Planner, that's based on AI. Can you tell us a little bit about how that project arose and how you got involved with it? Chris: Yeah, definitely. So my involvement with AI and GenAI in general started when ChatGPT came out. I think a lot of people took notice back then. That was late last year, 2022. And I started playing around with it. I'm always a bit of a nerd and early adopter of technology, so I started using it for different things. I have a toddler at home, so I was actually using it to create bedtime stories for him. I would say, "Let's ask the story robot what kind of story you want to read tonight", and he would just generate a story idea, and ChatGPT would help us with the rest. It was a lot of fun. Chris: But professionally, I started thinking, "Okay, how could this be useful for our writers at Booking or how Booking as a company could use it?" And early this year, 2023, the company was seriously looking at GenAI and thinking, "Okay, what are we going to do with this?" And because I was already exploring it, I got pulled into some early discussions, and I thought, "Okay, we're going to have some brainstorms, some chats about how GenAI did," but actually the company was already like, "Let's go build a GenAI chatbot and put it in the app, and this is going to be the only thing you focus on for the next couple of months." 
And I'm like, "Okay, let's do it. Let's roll." Chris: And so basically, a task force was formed within the company, sometimes called a tiger team, we called it sometime, and it was representatives, multiple people from writing, design, research, product, and then also machine learning, our iOS and Android engineers, of course, data science, and marketing and legal. It was a big team. In the end, it was almost like having a little startup within the company, it was about 70 people. And the UX work stream was sort of one half of it, and the other half was all the machine learning and the engineering that was going on. Chris: And this sort of kicked off in mid-April when we started this, and two months later, we were able to launch the AI Trip Planner in June. Just so people understand what it is we built, we built basically an AI chatbot into the Booking app, and people can chat with it and ask their travel questions, and it can help them get inspiration for where to go or what hotel to stay at or build an itinerary, these sorts of things. And it integrates some of the traditional booking experience, like with carousels and images and property ratings and things like that, right into the chat so it feels a bit more natural. And then if they tap on a property, they can go straight into the booking process and make a reservation. Chris: And so a lot of it uses some of our existing machine learning knowledge we've built up at the company over the years and then relies a bit on OpenAI ChatGPT to do that generative AI piece and really create a nice conversation. So if people want to try it out, if they're in the US or they can VPN to the US, they can sign into the Booking app on iOS and Android and make sure their language is set to English, and they should see the AI Trip Planner right on the home screen. Larry: That sort of gets at some of the complexity around this, because I know you localize into 50 languages and cultures. Chris: Yeah, I think 45. Yeah. 
Larry: And so right now it's just English only and in the US, so that's interesting. And really, as you talk about that, I'm wondering from a user perspective, it's almost like just a UI thing. For an end user, you could almost perceive it that way. "Oh, another way to interact with this thing and do my trip planning." But on the backend, like you said, it's 70 people on this tiger team that put that together. How similar was it to other products, because as a principal you've worked on a lot of different projects probably at this scale, how much of it was familiar and similar and how much of it was new? Tell me a little bit about that. Chris: Yeah, definitely. There was a lot that was familiar to just a normal building a product, but there were some key differences. For example, for working with a GenAI product specifically, it's such a new thing that there's not a lot of existing research. So if you're going to go to your researcher and say, "Okay, what do we know about GenAI?" It's like, well, they're still learning too. So a lot of that was involved, looking at what is out there in the market, what competitors are doing, but then we were also able to combine that with the existing understanding of user needs, because essentially this is a search experience that we've been dealing with for a long time at Booking. So we know a lot about what the user's looking for in that moment when they come to the app. So those needs didn't change, but the way they were expressing those needs is the whole new thing. Chris: And in the early stages, when we were trying to test something, it's not that easy to build a GenAI prototype. If you're building a prototype in Figma, you can't really insert the AI part in there very easily. Maybe soon that will be a thing. So we had to wait until we actually had a working build of the tool where we could play with it internally, and that's when we started actually doing a lot of the understanding of, "Okay, what's working, what's not?" 
that sort of thing. So there was that challenge. Chris: But from a content and writing perspective,
Dec 17, 2023 • 34min

Lance Cummings: AI Content Operations and Structured Content – Episode 10

Lance Cummings discusses AI content operations, structured content, and the impact of technology on content creation. Topics include collaborative prompting, value in community interactions, and how structured content can enhance creativity. The podcast explores the integration of AI in workflows, authenticity in the creator economy, and the creative potential of AI in content development.
Dec 10, 2023 • 35min

Dave Birss: LinkedIn Learning’s Most Popular AI Instructor – Episode 9

Dave Birss (AI-generated) Dave Birss has had a busy 2023. Since developing his first AI course for LinkedIn Learning early in the year, he has produced five more courses and has become the learning platform's most popular AI instructor. We talked about: his experimental approach to teaching AI how he helps companies understand the true benefits of AI the importance of using AI to augment people's skills rather than just to try and save money the elements of his AI manifesto: use AI responsibly, be ethical, support your employees, assign leaders, keep learning, always add a human layer to AI output the importance of critically consuming advice from anyone who proclaims to be an AI expert the importance of companies learning for themselves because there are few reliable consultants available now how unlocking the true benefits of AI can change companies' perspectives and help them see new opportunities the crucial task of understanding people and addressing their needs as AI is adopted his observation that it "cannot be AI or human, which is the way that a lot of companies are seeing it, it's got to be AI plus human" how the adoption of AI supports his point of view that generalists have an equally important role in the modern workforce as specialists Dave's bio Dave Birss combines the analytical mind of an AI geek with the butterfly mind of a former advertising creative director. This helps him make the ever-changing world of AI approachable, relevant, and occasionally entertaining. At the start of 2023, he launched his first LinkedIn Learning course on Generative AI. Since then, he’s released another five courses, all of which have gained fantastic ratings and reviews. In July, LinkedIn announced that he’s now the most popular AI instructor on the platform. But Dave isn’t just about online courses. He’s also a globe-trotting educator and public speaker, helping companies and individuals get more value out of Generative AI. 
He’s also a best-selling author with several books on creativity and innovation. And a former broadcaster and film-maker. As a sought-after keynote speaker, Dave speaks about AI, innovation, and creative thinking with a blend of science and dad-jokes. He’s a Scotsman who lives in London with his Haitian-American wife and two delightfully confused children. Connect with Dave online LinkedIn DaveBirss.com Video Here’s the video version of our conversation: https://youtu.be/2QL01qN6uzY Podcast intro transcript This is the Content and AI podcast, episode number 9. Over the past year, we've all been getting up to speed on AI. Over that time span, Dave Birss has become the most popular AI instructor on LinkedIn Learning. Dave would be the first to tell you that he's not an expert on artificial intelligence. But he's a very experienced technology professional who has witnessed several major earlier tech revolutions, and he's an experienced teacher and consultant, so he brings a very pragmatic approach to incorporating AI in your work life. Interview transcript Larry: Hi, everyone. Welcome to episode number nine of the Content and AI podcast. I am really delighted today to welcome to the show Dave Birss. Dave is an educator, author, and consultant currently focusing on AI and AI education. He's the most popular AI instructor at LinkedIn Learning. Welcome, Dave. It's great to have you here. Tell the folks a little bit more about what's going on these days. Dave: Thanks, Larry. Yeah, I've been creating courses on AI this year, really. And I can't really call myself an AI expert. I guess I'm an enthusiast and I am an experimenter. I guess I do research to find out what works best, and then I share that knowledge with people. Dave: If you told me a year ago that I was going to be doing AI as my main thing, I wouldn't have believed you because OpenAI only released ChatGPT on, I think it was the 30th of November last year, so it's still less than a year old. 
And when they launched it, I just threw myself in, absorbed as much as I could, created some frameworks, easy ways of being able to teach people, and then I just released these as courses.

Dave: I've now got six courses on the platform. Just released another one last week. And I'm about to release some courses on my own website as well. Yes, that's my life these days: doing courses and then helping companies get onto their AI journey in the best possible way, because I think a lot of them have got the wrong attitude. They're not looking at AI in the right way.

Larry: Yeah, interesting. Tell me more about that, because I think we all have opinions about AI. What are you discovering?

Dave: Well, of course, companies, as you know, will tend to have, "Here's our quarterly target, here's our quarterly goal. Can we make more money and spend less money in this quarter?" That's what they do. It feels as if that's the responsibility of a company, to do that.

Dave: Now, if that's your attitude towards AI, and you're only interested in using it to save money, so really, it's all about productivity, then you're really missing out on 90% of the benefits of AI. Because the real benefit of AI is not just to help you do less work and do work faster, it's to help you do better work. And when you do better work, that gives you an advantage in the marketplace. If you think that you've got this line across here, that this is the profit and loss of a company, you can only nibble away at that by starting to replace humans and tasks with AI. And what you do in return is a deal with the devil: you embrace the fact that AI is fantastic at adequacy, which means you're going to get stuff that's all right, maybe just about average, from AI. And if you're replacing humans with AI, you will save money, but you get adequacy in return.
Dave: When you use AI, on the other hand, to make humans more capable of doing phenomenal work, of stretching further than they were able to stretch before, then really, the sky's the limit. At that point, you're gaining profit rather than trying to save cost. And the problem is that most companies are so focused on saving costs and this incremental growth that they're missing out on the real potential of embedding AI into your company, into your system, which is to do better work, to reach higher, to achieve more.

Larry: I love the way you're contextualizing that, because going from that... I love that they're fantastic at adequacy. And it's like...

Dave: They excel at it.

Larry: They excel at adequacy. But the real potential here is in unleashing way more human potential on this work. And this speaks to the need for... Because that quarterly focus of enterprises is just notorious in any number of circumstances. Have you had any success, or do you see ways that companies might get past that and start to think more strategically about how to embed the benefits of AI in their orgs?

Dave: Well, when I talk to companies about it, they get it. But there are so many companies whose form of motivation for people in senior leadership is "keep cutting costs, we're only focused on this quarter." And that kind of short-termism, I think, will really come round and bite you in the butt when it comes to business.

Dave: One of the things that I've been doing, from conversations with businesses over the last, well, this year since I've really been doing this AI thing, is that I've developed a manifesto that I'll shortly be releasing. Let me see if I can find my cursor on here and I can maybe bring up my manifesto. There we go. This manifesto is all about helping companies understand what they need to do to embrace AI properly.

Dave: Zoom is doing funny things for me here. If I go back here, I can then share what I've got.
I've created this manifesto, and I'll quickly take you through some of its points. I think this first thing is what I was talking about: it's important that we use AI to augment people's skills rather than just to try to save money. I think that's the main focus for companies that want to get real success and value out of AI.

Dave: Obviously, using data responsibly is an important thing to do. And that's something you have to communicate to your staff: what we mean by using data responsibly. You've got to be ethical, because you've got to be guided by your head or your heart. Companies are used to being guided by laws, but we don't have those laws here yet. There will be lots of court cases that are going to generate some laws over the next few years, but you do not want to become a legal precedent. Because of that, you should make sure that you're thinking properly and are guided by your heart and ethical responsibilities when you're making your decisions.

Dave: You need to support your employees. That means give them training, give them guidance, let them know. If there are employees who are worried about this, you need to give them emotional support as well to help them on this journey. I think it's important to assign leaders: you need a butt to kick and you need a back to slap. And it's important to keep learning, because this stuff's changing all the time.

Dave: Just in the last few days, OpenAI has pretty much exploded as a company internally when Sam Altman was fired. And it looked as if he might be joining again, but now he's joining Microsoft. And everything's changing so fast. And then two weeks ago at OpenAI's Dev Day, they introduced so many things that really, really changed the whole world of AI. And then you've got some-

Larry: I've got to interject real quickly. We're recording this on November 20th. And by the time this airs in a couple of weeks, it'll be completely out of date.
But I think the elements of your manifesto are timeless. Yeah, sorry.

Dave: Yeah. Yeah,
