Content + AI

Larry Swanson
Jun 6, 2024

Lisa Welchman: Content, AI, and Digital Governance – Episode 29

Lisa Welchman

Over the past 25 years, Lisa Welchman has established and codified the field of digital governance. With an enterprise consulting career that spans the emergence of the web, the arrival of social media, and the rise of mobile computing, she is uniquely positioned to help digital practitioners, managers, and executives understand and manage the governance issues that arise with the arrival of generative AI. Lisa is the author of the leading book in her field, Managing Chaos: Digital Governance by Design.

We talked about:
- her career in enterprise digital governance
- her concern about the lack of transparency in the existing governance practices at AI companies
- an analogy she sees between WYSIWYG and AI tools
- the contrast between more mature governance models, like those the UX field has developed, and newer digital practices, like the adoption of GPTs
- governance lessons that new tech implementers can always learn from prior tech eras
- her call to action for technical experts to alert executives to possible harms in the adoption of new technology
- the elements of her digital governance framework:
  - understanding team composition and the organizational landscape in which digital practitioners operate
  - having a strategic intent
  - articulating governance policies
  - establishing practice standards
- the range of digital makers she gets to interact with in her work
- the importance of accounting for the total business and organizational environment when jockeying for a seat at the table
- the responsibility of experienced digital makers and managers to call out potentially troublesome patterns in the adoption of new tech
- the importance for digital practitioners of staying aware of how much agency they have right now

Lisa's bio
Lisa Welchman is a digital governance trailblazer with over two decades of experience. She's passionate about helping organizations manage their digital presence effectively and sustainably. Known for her practical approach, Lisa has worked with a variety of clients, from global corporations to non-profits. She's also a popular speaker and the author of "Managing Chaos: Digital Governance by Design." A mentor and educator at heart, Lisa is dedicated to helping leaders make the digital world a safer and kinder place for everyone.

Connect with Lisa online
- LinkedIn

Video
Here's the video version of our conversation: https://youtu.be/-UIj0YWxLaI

Podcast intro transcript
This is the Content and AI podcast, episode number 29. Whenever new technology like generative AI emerges, organizations have to deal with both the opportunities and the challenges that arrive with it. It often falls to practitioners like content strategists and designers to alert the C-suite to potential governance concerns that arise with the adoption of new tech. Lisa Welchman sees in this situation an opportunity for digital makers to take the lead in educating their organizations about these important issues.

Interview transcript
Larry: Hi everyone. Welcome to episode number 29 of the Content + AI Podcast. I am really happy today to welcome to the show Lisa Welchman. Lisa is a true legend in the field of digital governance. She pretty much established the discipline, I think it's safe to say, over the past 25 years. She wrote what I would argue is the leading book on it, Managing Chaos: Digital Governance by Design. But welcome, Lisa. The reason I wanted to talk to you this week is that we're right in the middle of Rosenfeld Media's conference on design and AI, and it seems like AI is an area that's really ripe for a conversation about governance. Does that make sense?

Lisa: Yeah, it does. I will contextualize myself a little bit by saying that digital governance is a really broad term, and my focus is really around enterprise digital governance: how digital governance manifests inside of an organization that's making and putting things online. There are a lot of other governances in the internet and web space that are equally interesting, but that's not where I specialize.

Larry: That idea of enterprise. What's interesting about that is that the big companies that are doing this stuff, that are most prominent in the field, it's all Google and Anthropic and Microsoft and OpenAI and huge organizations like that. Do you have any feel for what governance is happening inside those orgs?

Lisa: I don't actually have any kind of feel. I think the types of organizations that you describe have, in some capacity, mature governance inside of the organization because of the nature of the types of products and services that they offer online, and just from evidence. Now, whether or not we like the decisions that are being made within that governing framework that they have, that's an entirely different concern. I am concerned about those larger organizations married with the newness of this version of AI. The AI iceberg is finally poking its head out of the water, and we're paying attention to it now, and there's a lot of stuff underwater that these organizations have been doing for years that we're not really aware of. I'm a little nervous about the lack of transparency around the preamble governance that may have happened. Concerned about that.

Lisa: But I'm not concerned that they aren't governing. For many organizations, enterprise organizations and B2Bs who are coming into this technology afresh just as it's emerging to them, I'm more concerned, because they're more likely to take ChatGPT, and I know it's not a great analogy, but ChatGPT feels to me like a WYSIWYG AI tool. You don't really need to know what you're doing. It's like those of us back in the day who learned HTML: we actually had to learn HTML to make things work. And then these what-you-see-is-what-you-get tools, these WYSIWYG tools, came out of the framework, and anybody could code a page. It made really sloppy, nasty code on the backend, but it didn't matter, because the browser served it up.

Lisa: And I see some of these new tools, particularly around generative AI, as WYSIWYG tools for AI. And it makes me nervous, because not a lot of people are asking, "What's in that black box, and what's happening, and who made the decisions about it?" Which is really what governance is about: "Who was considered, what are the policies, what's the value system around making this technology?" And I don't see a lot of people asking that in the enterprise.

Larry: I think a couple of things about what you just said. One, the notion that these things are black boxes: with the LLMs, in fact, even the engineers who build them often say they can't explain what's going on underneath them. But you contrast that with, I spend a lot of time with conversation designers and other UX designers, and in that world it's so clear that transparency and explainability are crucial to consumer acceptance, adoption, and safety. It seems like reconciling that should be on the governance agenda someplace. Is that reconciliation of intent with customer expectations something that governance can help with?

Lisa: It is, but I would also argue that you're comparing apples to oranges, because one of the things that I like to talk about a lot, that a lot of people talk about, are maturity models. And the maturity model for a new technology, or a new anything, is that it comes out of the chute hot and heavy. People don't really know what they're doing with it. They try new things. There's a lot of craziness and organic growth. We make a big mess, a lot of harm and lack of safety come into play, and somebody screams and says, "We need to govern this," or, "We need to write policy around this." Or, if you're more on the operational side, "We need to write standards around this. We need to become more transparent." All of these things happen. Then there's some struggle, and then things mature, and then you have a more sustainable model.

Lisa: You're comparing a UX model that's fairly mature with one that's just coming out of the gate. And it's not entirely fair, because UX has not always been that way. Experience development, the development of an online experience, has been quite chaotic, and a lot of the harm that we see has been a result of UX not thinking through problems early on, or implementing things without understanding the foundational functionality of what they're asking for, not understanding that certain types of online interactions will create certain data pools that can be exploited by the organization. That all happened in the UX world. It didn't come out clean.

Larry: I want to follow up. There are two things about that. One, you alluded a minute ago to the tip of the AI iceberg. AI has been around forever, since the seventies and eighties, and it's just now, with the arrival of the GPTs and in particular ChatGPT 3.5 almost a year and a half ago, that people perceive the start of this to be. And that's where it does lag far behind UX practice, but in fact it's been around for a while. Is this a common pattern, I guess, with new technology?

Lisa: Yeah, it's just how it flows. This is just how things work. There's a presentation that I give about the history of automobile safety. Things come out of the gate very hard. Usually in the US, and in other parts of the world, people are trying to make money or trying to figure out how to exploit this new technology that's become mature enough that it can actually be used to make money and to build product. We all know there's a huge preamble to every technology where people fail, and fail hard, and fail sometimes for 50 to a hundred years or more. They're failing, failing, failing. Finally, somebody comes up with something that's actually viable, and it comes into the marketplace, and then people think, "It's new." And of course it's not new,
May 27, 2024

Rob Hoeijmakers: Using AI to Transform Blogging Workflows – Episode 28

Rob Hoeijmakers

LLM-based conversational tools are revolutionizing all parts of the content ecosystem, including blogs by independent professionals. Rob Hoeijmakers is an independent web strategist based in Amsterdam. He's using AI tools like Whisper and Perplexity to streamline and improve his research and writing workflows. This lets him spend more time on his websites' information architecture and improves the business results he gets from his blog.

We talked about:
- his work as a web strategist and his multiple blogs
- his happiness with being able to delegate tasks to his LLM colleagues
- the freedom that AI tools like Whisper give him to research, think, and ideate as he walks
- how the abundance of content that AI tools provide helped him abandon his old scarcity mindset around information
- the huge time savings he realizes from using AI-generated summaries of transcripts of interviews
- how he uses AI tools to draft his blog content
- his insight that the real value in his blog is in its information architecture
- his preference for using his own images over AI-generated ones
- the details of his content "knitting," which stitches together his current and prior content
- the analytics tools he uses to track traffic to his blog
- how he uses his blog as a conversation starter

Rob's bio
Rob Hoeijmakers is a passionate web strategist with over 30 years of experience. Known for his curiosity and love for recognising patterns, he excels in crafting engaging content and innovative web solutions. Rob writes insightful blogs and is a hands-on builder of content, chat, and messaging platforms. A dynamic public speaker, he frequently discusses web strategy, digital marketing, and AI, always focusing on enhancing user experiences and client success.

Connect with Rob online
- LinkedIn
- Instagram
- Twitter
- Web Strategies
- Web Strategies (Netherlands version)
- Chat voor Bedrijven (Chat for Business)

Video
Here's the video version of our conversation: https://youtu.be/FRaHqLRWT9k

Podcast intro transcript
This is the Content and AI podcast, episode number 28. Many of the stories you read in the media about the adoption of AI tools cover enterprise workflows and other uses in large organizations. It turns out that LLM-based applications can also help tiny, one-person companies. Rob Hoeijmakers is an independent web strategist based in Amsterdam. AI tools like Whisper and Perplexity have revolutionized his research and writing workflows, letting him focus on his websites' information architecture and the business of blogging.

Interview transcript
Larry: Hi everyone. Welcome to episode number 28 of the Content and AI podcast. I am really happy today to welcome to the show Rob Hoeijmakers. Rob is a web strategist based in, are you in Amsterdam? I forgot.

Rob: Yes. Amsterdam.

Larry: Amsterdam. Yeah, in Amsterdam here in the Netherlands. I'm also here in the Netherlands. And, as with any web professional nowadays, he blogs a lot. We were talking at an event a few weeks ago about his blogging, and I said, "Oh, tell me more." And then I'm like, "Wait, I have a podcast. Let's talk about it on the podcast." So anyhow, welcome, Rob. Tell the folks a little bit more about what you're up to these days.

Rob: Yeah. My name is Rob Hoeijmakers. I'm a web strategist, and for content marketing I blog a lot. It's not only marketing, it's also a way of learning and keeping up. I'm into LLM-driven chatbots. I did one with ReSViNET, which is an RS virus network. So that's something I'm working on currently. And then of course, for my blogging, I write a blog in English, I write a blog in Dutch, and I have another one in Dutch on chat for companies. That's what I do.

Larry: Oh, nice. You do a lot, like all of us these days, but what I'm hoping we can focus the conversation around is the way AI has helped you in your blogging workflow. Because when you think about blogging, it's like the old saying that the power of the press belongs to the person who has one. We all have a printing press now. We have our own blogs, but we don't have the whole editorial staffs that giant publishers do. Is that how it feels with AI? Does it feel like you have a team now?

Rob: Yeah. Absolutely. Absolutely. I feel like the manager of a rather big team, and it's really a joy, because there are so many chores I've been able to delegate, and I've been able to be more productive. I've been able to go deeper into things, because I can have conversations, I can do research, things you used to have to do through Google, and I'm basically doing all those things alone. I don't have a big group of people. I don't have a big office with all sorts of colleagues. Of course, I have friends and colleagues who are into this as well, but they have busy lives.

Rob: So I have loads of conversations with the LLMs to deepen my knowledge, to brainstorm, to get creative, to see relations, to see patterns in different sorts of developments in society and especially in the digital world.

Larry: Hey, one of your most recent blog posts was about a kind of epiphany you had, because we're recording this just when the Scarlett Johansson thing around OpenAI came out. Tell me about that blog post. I thought that was hilarious.

Rob: That was extremely funny, because I do a lot of research, but the thing that really helped me is Whisper, and Whisper is the speech recognition within ChatGPT. So it makes me completely hands free. I put on my AirPods and I go for a walk and I have conversations, long conversations, on certain topics, which is fun. They can be topics for work, or also I have a lot of private things I like to figure out.

Rob: Anyway, so the voice I had was Sky, and I thought it was a really, really nice, gentle voice. And then just yesterday, suddenly there was a completely different sound, and it really gave me goosebumps, because I thought, "Hey, what's happening here?" I really felt like stepping under a cold shower. It was really a shocker.

Rob: Which is funny in itself, but it also worried me a little bit, because I already noticed how attached I got to the voice. I was talking in the first person to it, but I'm talking to OpenAI. It's a company, they make a buck, they make money out of this tool, and I'm just a consumer, I'm a customer.

Rob: So that was really, really a good wake-up call for me.

Larry: Well, that's really interesting. As a web strategist, the time cycle of figuring that out was like, "Oh, wait a minute. This is kind of creepy." But it also gets at the power of the conversational. A lot of people have pointed out that these LLMs aren't that much fancier, or they are in some ways, but the thing that really may have made them come to the fore is the conversational interface, and especially the personalities associated with that, it sounds like.

Rob: Yeah, definitely. Because it frees you up from your desk. I used to do research in front of my computer, in front of my desk, and that limits your thoughts, that limits your possibilities.

Rob: Because if you go for a walk, you give your eyes freedom and they can wander around, and you're less time-pressed. And for me, that was really a change of my life, a change of my daily life I mean, and by that, of course, also my bigger life. These are hours and hours I can do that.

Rob: As a strategist, you need to try and think a little bit deeper, or not just choose from the possibilities that are already there, but come up with new things, a certain creativity. And it involves a lot of societal developments, but also, of course, people, and people you also need to study. How do they function? How do they work? How do they work as a team? What sort of infrastructure do we need to cooperate successfully? It could be very practical things like Canva. I do a lot with Canva for social media things. And these are always complete worlds nowadays. So they're big.

Rob: And then you need to figure out how to cooperate with an external team. Do they have to have a subscription? And if they have a subscription, can you share all your assets? So many things I needed to research that I used to do in front of my desk, I now can do on a walk. But then I also noticed that you get a little bit attached to it emotionally as well, and I don't think that's a good thing.

Larry: Yeah, that's really interesting. Well, first of all, I've actually done a fair amount of research around walking and creativity and ideation. It's one of the best things. If you're stuck, you just get up from your desk, go out, and take a walk. But now you're out walking and you have this creative companion that you can chat with as you walk.

Larry: That seems really powerful. But it's also like we talked about before we went on the air: something you just said reminded me of this observation you made that some creative people feel like they're cheating when they use AI, but it's really more like delegating. And that kind of gets to the people stuff you were just talking about. Can you talk about-

Rob: You have to be inherently lazy, shamelessly lazy, because laziness of course has a very bad name, and we need to be productive all the time. And I've noticed that's okay. If you live in a world of scarcity, you have to be productive, you have to work hard, and you make sure to survive, etc.

Rob: But when there's so much, when there's abundance, then a certain laziness and certain things are actually tools for survival as well, because you can optimize. And then you get so much information, there's so much available, that yeah, I think actually it's really, really good to change your mindset with the changing tools,
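For readers who want to try the workflow Rob describes, here is a minimal sketch in Python: transcribe a recorded conversation with Whisper, then summarize the transcript into blog-ready notes. It assumes the `openai` package and an `OPENAI_API_KEY` environment variable; the model names, prompt, and file name are illustrative, not Rob's actual setup.

```python
# Sketch of a transcribe-then-summarize workflow like the one Rob
# describes: record a walking conversation, transcribe it with Whisper,
# then ask a chat model to condense it into notes for a blog draft.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# model names and the prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

def summarize_recording(audio_path: str) -> str:
    # Step 1: speech-to-text with Whisper.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    # Step 2: condense the transcript into blog-ready notes.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize this interview transcript as "
                        "bullet-point notes for a blog post."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_recording("walk-notes.m4a"))  # hypothetical file
```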
May 16, 2024 • 31min

Chelsea Larsson: Building an AI Learning Machine at Expedia – Episode 27

Chelsea Larsson

The arrival of generative AI gives content designers a whole new toolkit. As with any new set of gear, there's some learning that comes with the new capabilities that the tools afford. At Expedia, Chelsea Larsson is leading her team of content designers into the AI design future with fresh takes on the planning, design, and evaluation skills that designers have always relied on.

We talked about:
- her work as a senior director of experience design at Expedia
- how she is facilitating with her teams the shift from product development design to AI design
- how she has identified new capabilities that AI brings and is incorporating them into product road maps
- how content strategists and architects help them decide whether to use generative AI or structured-content methods
- their shift from front-end content design to working with back-end engineers and architects
- how new LLM-driven applications of conventional content-evaluation criteria permit them to scale up their content design work
- their goal of creating good-quality content at scale
- how content designers are shaping the future of conversational ecosystems
- how AI lets content designers do more strategic thinking, in particular about how to apply their insights at scale
- her take on the recent rounds of tech layoffs
- one of the new roles that are emerging for which content professionals are well-suited, like the new position of model designer
- the origins of their AI program in a simple application of gen AI to partner content creation
- how to bootstrap the implementation of AI content practices in your org
- how to identify opportunities to help your customers by matching their content use cases with your AI capabilities
- her message to content designers: "don't be afraid" and keep learning

Chelsea's bio
Chelsea Larsson is a Sr. Director of Experience Design at Expedia Group where she leads the B2B Content Design team, partners on strategic design initiatives, and builds AI travel tools. Chelsea loves to chat about content design in gen AI and UX design for travel. She shares her thoughts on both topics via the Smallish Book newsletter and conference stages around the world. Her favorite book to gift loved ones is the delightful Chirri and Chirra series. Her favorite sandwich is a turkey club.

Connect with Chelsea online
- LinkedIn
- Smallish Book

Video
Here's the video version of our conversation: https://youtu.be/qKr7o5aKQrM

Podcast intro transcript
This is the Content and AI podcast, episode number 27. The arrival of generative AI tools gives content professionals a whole new palette of design capabilities. Learning how to take advantage of these new opportunities, so that they can shift from product-development design into content-driven AI experience design, challenges many content folks. Chelsea Larsson sees these challenges as a chance for both her and her team at Expedia Group to stretch and grow, and to scale their impact as design professionals.

Interview transcript
Larry: Hi everyone. Welcome to episode number 27 of the Content and AI Podcast. I am really delighted today to welcome to the show Chelsea Larsson. Chelsea is a senior director of experience design at Expedia Group. Welcome, Chelsea. Tell the folks a little bit more about what you're up to these days.

Chelsea: Thanks for having me, Larry. As you said, I'm a senior director of experience design. I lead the B2B content design team at Expedia Group. We call that the partner content design team, because we work with Expedia partners. I also lead the Generative AI Experience Design Program, which we'll get into later, and lean in on a couple of strategic initiatives at Expedia.

Larry: Cool. When we were talking before we went on the air, we talked about the idea of an AI learning machine, and that seemed to resonate with you as a way to describe what you're up to. Can you tell me about the machine you're building there?

Chelsea: Yeah. When I first started getting into AI, which I think was around a year ago, and we're talking about generative AI here, of course, I saw a kind of paradigm shift in how content designers specifically could work in AI fields, and it led me to create what you called the learning machine. When you're working with AI features, the planning is different, the designing is different, and the evaluating is different. It's not fundamentally different, but there are new layers to consider.

Chelsea: And those layers led to a lot of questions. How do we plan for the right AI opportunities in our product roadmap? How do we design these AI interactions? When do we disclose that AI is being used? How do we signify that AI technology is being used without words, so what kind of iconography do we use? And then, how do we evaluate the output differently than we would if humans had generated the content? So, when you think about those three different pillars of work, planning, designing, and evaluating, we were led to create, and I spearheaded this, a program of critiques, guidelines, leadership forums, and ways of working, which has created this learning machine, as you called it, which I love, and I can get into that a little bit.

Larry: Yeah, and I love that. They sound like familiar practices, but talk a little bit about how each of them manifests differently in the AI world.

Chelsea: Yeah. They're absolutely familiar processes, and it's what we've been doing as product development designers for a long time, but there are new considerations to take into account. So when you think about the planning, let's start there. Your company's not going to just put AI into the experience. AI is not a solution. It is an avenue to get to the outcome that you want as a business. But you do have new capabilities now. You have text generation, you have text classification, summarization, you have multimodal content generation. You can create photos, you can create videos, you can pull out sentiment analysis. So with these new capabilities, you can matrix those against the outcomes that you already want to have, or the user problems that you have in place. And by matrixing those with the new AI capabilities, it results in a change to the roadmap.

Chelsea: You can plan for new outcomes because of the capabilities that you have with AI. Without understanding those capabilities, that is a hard conversation to have. So the change that needed to happen was educating our designers, our content designers, and our product folks on what these new capabilities are. And that education at Expedia has come about in these forums that I spoke about, where I have taken machine learning scientists, product people, and designers and, for the first time, put them in a shared space and critique where we are sharing with each other ideas, capabilities, and user problems. And those are coalescing in newly road-mapped opportunities. So that's a different way that we've started approaching planning these AI opportunities.
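One way to picture the "matrixing" exercise Chelsea describes is as a literal cross of AI capabilities against user problems. A toy sketch in Python; the capability list follows her examples, but the user problems and pairings are invented for illustration, not Expedia's actual roadmap.

```python
# Toy version of the planning exercise Chelsea describes: cross new AI
# capabilities against known user problems and flag plausible pairings
# as roadmap candidates. Problems and pairings are invented examples.
CAPABILITIES = [
    "text generation",
    "text classification",
    "summarization",
    "multimodal content generation",
    "sentiment analysis",
]

# Each (hypothetical) user problem maps to the capabilities that might
# plausibly address it; the intersections become roadmap conversations.
USER_PROBLEMS = {
    "partners write inconsistent property descriptions": {"text generation"},
    "review pages are too long to scan": {"summarization", "sentiment analysis"},
    "support tickets are routed by hand": {"text classification"},
}

def roadmap_candidates():
    # Yield (capability, problem) pairs worth bringing to a critique.
    for problem, useful in USER_PROBLEMS.items():
        for capability in CAPABILITIES:
            if capability in useful:
                yield capability, problem

for capability, problem in roadmap_candidates():
    print(f"{capability!r} -> {problem!r}")
```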
Larry: Right. And I'm wondering, for each of those parties you mentioned, the ML folks, the design folks, and the product folks: do they each bring different perceptions of those capabilities, and is the mix different than it was before in those kinds of relationships?

Chelsea: Yeah, and there's also a fourth person, a fourth role, who I've partnered with quite a lot in the past year, which I guess I'll call a content architect. At Expedia they're called content strategists, which I know is going to be super confusing for this community. They're people who are really highly skilled in structured and unstructured content. They're the NLP experts of the world. They understand BERT, which is a bidirectional language model. They understand structured content in a way that makes it really easy and valuable to have them on your team, to let you know, as a content designer, whether your solution should be generative AI or structured content. That partner brings that knowledge to the table. They let you know what the content landscape is and what the best content tool is to use.

Chelsea: I think in the future our roles will probably become one, because they also typically have a writing background, a taxonomy background, a library science background, but they also have a data and engineering understanding. If content designers could lean more into that content modeling, content architecture side of things, these roles would basically overlap. But right now, those are two different roles where I work, and they are very helpful partners.

Chelsea: The machine learning scientists bring all of the LLM knowledge, so they're helping us understand the base model, the behavior of the base model, and what we can expect. They're helping us fine-tune the model based on our prompts, based on our system instructions, and the definition of good that the content designers are creating. They also help us understand the cost of scaling out some of the proposals that we have. We have to pay for the tokenization of the outputs, so how expensive is it going to be to generate this type of content?

Chelsea: So they're the system and scaling experts, and we work with them really closely on the behavior and the output. We work with the content architects, who I talked about before, on the inputs: what does this content need to be, and how does it need to be structured as an input? And with the machine learning experts: how can we fine-tune this output to get to the place that we want? All of these people understand the input and output, but they all have different levels of expertise where I work. I think it's different at different companies.

Chelsea: And then the product folks are still doing what they've always done,
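Chelsea's scaling-cost question ("how expensive is it going to be to generate this type of content?") reduces to token arithmetic. A rough sketch using the `tiktoken` tokenizer; the per-token prices are placeholders, not real rates, so check your provider's current pricing before relying on the numbers.

```python
# Back-of-envelope cost estimate for generating content at scale, the
# kind of question Chelsea says the ML scientists help answer.
# Prices below are placeholders, not real rates; check your provider.
import tiktoken

PRICE_PER_1K_INPUT = 0.0005   # USD per 1k prompt tokens, hypothetical
PRICE_PER_1K_OUTPUT = 0.0015  # USD per 1k output tokens, hypothetical

def estimate_cost(prompt: str, expected_output_tokens: int,
                  n_items: int) -> float:
    # Count the prompt's tokens with the encoding used by recent models.
    enc = tiktoken.get_encoding("cl100k_base")
    input_tokens = len(enc.encode(prompt))
    input_cost = input_tokens / 1000 * PRICE_PER_1K_INPUT
    output_cost = expected_output_tokens / 1000 * PRICE_PER_1K_OUTPUT
    return (input_cost + output_cost) * n_items

# e.g. 100,000 property descriptions at ~300 output tokens each
prompt = "Write a 3-sentence description of this hotel: ..."
print(f"${estimate_cost(prompt, 300, 100_000):.2f}")
```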
Apr 28, 2024 • 38min

Patrick Stafford: The Future of AI and Content Design – Episode 26

Patrick Stafford

Like most tech professionals, content designers are extremely interested in how AI might affect their work and employment prospects in the future. Patrick Stafford and his colleagues at the UX Content Collective recently conducted research to explore the impact of AI on the future of the profession, as well as the attitudes and opinions of content designers about new AI tools and practices.

We talked about:
- his work as the co-founder and CEO of the UX Content Collective
- the high-level findings of his recent research on the impacts of AI on content design
- the coincidental timing of the release of GPT-3 and the wave of layoffs in content design and other tech professions
- his take on the current content design job market, that it's now a more typical market
- comparisons of the job market in 2016-18, 2019-21, and from 2022 through now
- the recent decline in corporate training budgets
- his take on working "with" AI as well as "for" AI products
- the emerging critical role of content designers in ensuring the ethical use of AI
- his observation that most of the new AI jobs being created are being staffed from within companies, not by hiring outside talent
- the growing importance, stated in many job postings, of familiarity with AI tools
- the main benefit of AI for content designers: the ability to scale
- the important role of content designers in applying best practices and design sensibility to gen AI output
- how the UX Content Collective curriculum has evolved in response to the arrival of AI
- the surprising finding in their research that 80% of people feel either the same or more hopeful about the industry after the introduction of LLMs and AI
- the upcoming revival of his podcast, Writers of Silicon Valley

Patrick's bio
Patrick Stafford is the CEO and cofounder of the UX Writers Collective. He is a former Lead Digital Copywriter for MYOB, the largest accounting software provider in Australia, and has consulted with several businesses on UX content strategy.

Connect with Patrick online
- LinkedIn
- UX Content Collective
- The Future of AI and Content Design research report
- Writers of Silicon Valley podcast (reboot coming soon)

Video
Here's the video version of our conversation: https://youtu.be/ijMMmsWQZKo

Podcast intro transcript
This is the Content and AI podcast, episode number 26. The arrival of GPT-3 and the explosion of interest in generative AI caught many in the content-design profession by surprise. Its arrival around the same time that mass layoffs hit the tech industry compounded the anxiety around this new tech. Patrick Stafford and his colleagues at the UX Content Collective recently conducted research to explore the true impact of AI on the profession, as well as the attitudes and opinions of content designers about new AI tools.

Interview transcript
Larry: Hi everyone. Welcome to episode number 26 of the Content and AI podcast. I'm really happy today to welcome to the show Patrick Stafford. Patrick is the co-founder and CEO at the UX Content Collective, which I hope you've heard of. Anyhow, welcome, Patrick. Tell the folks a little bit more about what you're up to these days.

Patrick: Thanks, Larry. I'm really glad to be talking to you today. It's always a pleasure to speak to you. So yes, as Larry said, I'm the co-founder and CEO of the UX Content Collective. We started in 2019, and we offer a range of courses and workshops related to UX content. That could be anything from a broad beginning in UX writing fundamentals to more specialist skills like content ops, or even things like systems thinking, which is a workshop we have coming up, and a range of different courses in writing skills, accessibility, localization, a variety of different skills that content designers or content-adjacent professionals may get something out of. So that's what we're doing, and of course we have a very big interest in AI at the minute, given everything that's going on, and we're starting to delve into that as well. So that's me.

Larry: That was sort of the trigger. I always like talking to you, but the trigger for this specific conversation was that you all just recently did a study, I can't remember the exact title of it, about the impact of AI on our work. And I'd love to go through it. I know there's more to it, but you share, in the report, five insights and discoveries that you made. Maybe walk through the top-level findings of that survey.

Patrick: Yeah, sure. And I have to say, Larry, a big round of applause has to go to you for championing this topic, because for a lot of content designers, or even just people in content in general, when generative AI came along, people felt very lost, and they didn't really have an anchor to ground them in the future possibilities of what's going on. And so I think your podcast is a great foundation for people who are trying to understand what's happening. So kudos to you. I just want to start off by saying that.

Larry: Thanks.

Patrick: So I'll back up a little bit to give the context for why we wanted to do this. We had been watching generative AI for quite a while. We were publishing blog posts on this in 2019 and 2020, before OpenAI made their model accessible to the public. And at that point, we were really just keeping an eye on it, saying, "What's going to happen when this releases? It's still a way off, but we need to start thinking about it. When AI can do some of the work for you, it really forces you to think: what is the, quote-unquote, work that you are actually doing?" And we've encouraged people to take a very strategic mindset about their work for a while.

Patrick: And so when OpenAI released GPT to the world, we were like, "Okay, well, this is something we've been talking about for a while. We kind of have the context for it. We're not taken by surprise here." But we are constantly asking our students, "What are your thoughts on everything that's happening in the industry?" And a lot of people felt very uneasy about AI, and that is partly due to the fact that OpenAI announced the ChatGPT model, or excuse me, the GPT-3 model, the first one they announced to the public, when all the layoffs were happening in content. And so you have this mix of things happening at the same time, and there's just a lot of unease.

Patrick: But over time, maybe over the next year, we began speaking with people who are interested in generative AI within the content design community and started getting their perspectives, people who are actually working with it. And we just wanted to hear from a lot of people in our student community what they think about AI now that it's been out for a while, now that they've become familiar with it, now that they've probably messed around with it a little bit, now that we've seen a lot of headlines about it. Now that things have sort of cooled down after the initial rush and burst of energy, we wanted to ask people: what do you actually see in this, day to day? And we asked about 150 people, all content designers or content-design adjacent.

Patrick: So we do have some people in there who are in long-form content writing, but the vast majority are content designers, plus a few technical writers. Some people focus purely on information architecture, which I know will make you very happy, Larry. But the vast majority describe themselves as UX writers and content designers. And we found that nearly 100% have tried language models: they've tried working with them, playing with them, and just experimenting with them. What was more interesting, though, is that 4/5 of people now use them for work. So 4/5 of content designers now use language models in their work, day to day.

Patrick: Now, that doesn't necessarily mean they all find them useful, so we drilled down into that and asked them how useful they find them. And that was one of the other major findings: 21% find them very useful, and 40% find them somewhat useful. We can talk about some of the other findings as well, and we will, but already, off the bat, that says a lot about where we are in the content design community. 82% of content designers are using them for work, and about 60% find them either very useful or somewhat useful. That actually struck me, because I thought that the number of people who would find them very useful would be lower than that. I'm not sure what your reaction to that was, but that was certainly mine. I was quite surprised.

Larry: Yeah, no, it's interesting to me, because I'm trying to remember a comparably rapid adoption and appreciation of a technology. I don't know if there's been anything like it. I'm old. I think of going from typewriters to word processors, and from word processors to the web, and all these things. Those took more than a year to get to the kinds of numbers you're talking about, where 80% are using it, and 60% find it very or somewhat useful.

Larry: So that's interesting to me. And I don't know, maybe everything is accelerating, and we just happen to be seeing this acceleration. But, hey, I wanted to go back to one thing you said in there, because I want to tease something out and see if you developed any insight around it from your research. There were already other dynamics going on that were resulting in layoffs in the UX and content design world when ChatGPT arrived. Is it just a coincidence? I think, whether they're related or not, they're conflated in people's heads. Do you have any thoughts about that?

Patrick: I do have thoughts about that, because we ...
Apr 21, 2024 • 31min

Wouter Sligter: Authenticity in the Age of AI – Episode 25

Wouter Sligter

Figuring out how to best adopt new technology is difficult at any time for any organization. AI tech ratchets up this challenge to new heights. Wouter Sligter helps companies understand the capabilities and limitations of LLMs and related technologies to create trustworthy experience-delivery platforms. Transparency is a key element in implementing solutions that evoke and support the authentic human experiences that underlie these systems.

We talked about:
- his background as a UX-focused designer and his shift to conversation and AI design
- the growing number of business use cases that his work supports, as well as the growing palette of tech tools that he has to work with
- how he creates authentic and trustworthy experiences with LLMs and adjacent tech
- the benefits of RAG (retrieval-augmented generation)
- the growing number of platforms that support building AI experiences
- the huge failure rate of conversational AI implementations, and how better design might improve the success rate
- the importance of being genuinely customer-centric when implementing AI projects
- how his background in language and music helps his AI design work, in particular the benefits of "being comfortable with the uncomfortable"
- the importance of companies being transparent about their AI implementations
- how localization manifests in the AI world
- the growing acceptance of chatbots by consumers
- his advice to jump into AI now, beginning with due diligence about how you'll implement it in your organization

Wouter's bio
Wouter Sligter is a Senior Conversation Designer and Generative AI Engineer. He has been a committed team lead and has consulted for a large number of conversational AI implementations, most notably in finance, healthcare, and logistics. He has an innovative mindset and a sharp sense for understanding user needs. Wouter always looks to improve the conversational user experience by following iterative design patterns and verifying outcomes through data analysis and user research. Both predictive NLU and generative LLMs and SLMs are part of Wouter's toolkit. Wouter has a background in ESL and IELTS teaching at language centres and universities in Vietnam. He has developed a strong awareness of language and cultural peculiarities, with native fluency in English and Dutch and good conversational skills in Vietnamese, German, and French.

Connect with Wouter online
- LinkedIn
- YouandAI.global

Video
Here's the video version of our conversation: https://youtu.be/Ak0liSLR8_0

Podcast intro transcript
This is the Content and AI podcast, episode number 25. One of the main reasons that people have taken so quickly to AI tools like ChatGPT is their conversational nature. People like talking to each other, and to computers. In human conversation, we've developed skills and instincts that help us determine the trustworthiness of the person we're talking with. In tech-driven conversations, we often have reason to mistrust. Wouter Sligter helps companies build conversational systems that express the authentic humanity of their creators.

Interview transcript
Larry: Hi everyone, welcome to episode number 25 of the Content and AI Podcast. I'm really delighted today to welcome to the show Wouter Sligter. I met him in Utrecht in the Netherlands, in the co-working space we both work out of. He's a conversational AI consultant: he does conversation design and he's a generative AI engineer. He has his own company called You and AI. Welcome, Wouter. Tell the folks a little bit more about what you're up to these days.

Wouter: Hi Larry. Very good to be here. Thank you for inviting me. What am I up to? I think you mentioned the three things that I like doing most and that I do most often. I come from being a self-employed freelance designer, really. In 2018, Facebook started with their chatbots on Messenger, and I jumped in, quickly caught on, and got a lot of clients worldwide, building chatbots for them. At that time, I was mostly working on the content side with what-you-see-is-what-you-get kinds of flow builders, and I slowly got pulled into the tech side as well.

Wouter: I worked for an enterprise as a consultant for a few years in the Netherlands, and then I decided last year to go back to being a freelancer, and that eventually culminated in having my own company, You and AI, with which I'm doing all kinds of outsourcing work from Vietnam. Of course, lately a lot of the work involves generative AI and LLMs, like RAG implementations and fine-tuning. In my bones I'm still a UXer, so I'm always looking to build stuff that actually works for people, rather than only playing around with tech that no one uses. That's really my strong point, I think.

Larry: I love the way you say that. I have many engineer friends, but they're really prone to just building stuff because they can. We're both designers, and I love human-centered design and human-driven design decision making. One of the things you said in there reminded me of your heritage, because you come out of conversation design, kind of UX, and conversation design specifically within that. That field has evolved. All these new generative AI tools have a conversational or chatty kind of interface, and you've been working with that kind of interface, but the bones underneath these interfaces are way different now. Five years ago, it was all NLP and flow-building tooling. Can you talk a little bit about the transition in your skillset, and the demand for your kind of talent, over the last five years?

Wouter: Right. Yeah, so I think in the beginning, because most of my work involved Facebook, there was a lot of demand for the marketing use case, the sales use case, like getting cold leads to convert, and to some extent also customer service. Then when the enterprise-level companies jumped in, the customer service field became much bigger. I think today that's still the major use case for most conversational AI. But now, with the LLMs and all the generative AI functionality that we have, the possibilities have become so much bigger. There are so many more use cases that can be implemented or used successfully, or let's say at an acceptable level of quality.

Wouter: Right now, I'm getting all kinds of stuff in. It can be fine-tuning for reading Excel sheets, fine-tuning for creating posts on LinkedIn, but also still the follow-up on the fallbacks of the traditional NLP bots, where traditionally the bot would say, "Oh, sorry, I don't know that." We now often use LLMs to fill in those gaps and pull from the company website or company knowledge base to answer even those questions better than they ever could before.

Larry: That's right. You're just reminding me, I kind of phrased it as an either/or, an evolution in development, but we haven't left the old stuff behind. Like you just said, there's still the fallback if an LLM or another agent fails. You just have a bigger palette of conversational tools to work with, it sounds like. Is that accurate?

Wouter: Yeah, definitely. Definitely. And that makes my job so interesting, because we started with the rule-based stuff, and then NLP came in, and we thought, well, now it's getting really interesting and a little bit difficult. Now we're at a stage where we have these LLMs that produce, or don't produce, the output that we expect, with all kinds of hallucinations and technical challenges, which I think makes my job so much more interesting, but also more challenging in a way, because you need to explain to everyone what every bit of tech does and make sure that the clients who are actually using it understand why we're using that tech, so that they can also explain to their stakeholders why things work or why they don't work.

Larry: When we talked a couple of weeks ago... Oh, I'm sorry. You were going to say something?

Wouter: Yeah, yeah, no, go ahead. I can keep talking for ages about this stuff. I'm actually trying-

Larry: I'd love to circle back to something we talked about a couple of weeks ago when we were preparing for this. One of the implications of this evolution of the tooling: you go from rules-based, where it's all guardrails all the time, to NLP, with intent understanding and all that utterance magic, to these crazy hallucinating LLMs. I'm exaggerating, of course, but there's been an evolution in that practice. One of the big things that comes up in every conversation and every conference and event I go to is the importance of trustworthiness and authenticity, because these things are conversational. They sound like a human, but they're not always authentic sounding. So there's this combination of things, at least I conflated them in my mind, this notion of authenticity and trustworthiness. Can you talk about how you instill those? How do you help people trust these experiences as you're navigating them through?

Wouter: There are a lot of levels to that question. Let me just pick one first. I think that when a business, an organization, chooses to work with the kind of AI that we have now, they need to decide if they're comfortable with the level of risk that they're allowing in their applications, because we know that LLMs are not perfect. They do hallucinate, even if we put the guardrails in place. Actually, you have to decide for each use case and each implementation which level of risk you are comfortable with as an organization. For an internal use case, it might be okay if 85% of the answers are correct, but for a customer-facing use case, you might want to see 90 or 95 or even 100%, depending on the context. I think that's one important thing to note.

Wouter: With that extra level of quality really, of output quality,
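A minimal sketch of the fallback pattern Wouter describes: a traditional intent matcher answers what it can, and an LLM grounded in retrieved knowledge-base passages handles what used to be a "Sorry, I don't know that." The retrieval step here is a naive keyword stub standing in for a real vector search, and the intents, documents, and model name are invented for illustration.

```python
# Sketch of the pattern Wouter describes: a rule/NLP bot handles known
# intents, and a RAG-backed LLM fills the gaps that used to hit the
# "Sorry, I don't know that" fallback. Retrieval is a naive keyword
# stub standing in for a real vector store; model name is illustrative.
from openai import OpenAI

client = OpenAI()

INTENTS = {  # hypothetical canned intents
    "opening hours": "We're open weekdays 9:00-17:00.",
    "track order": "You can track your order at example.com/track.",
}

KNOWLEDGE_BASE = [  # hypothetical company knowledge-base snippets
    "Returns are accepted within 30 days with a receipt.",
    "Shipping to Belgium takes 2-4 business days.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank documents by word overlap; production would use embeddings.
    words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: -len(words.intersection(doc.lower().split())),
    )
    return scored[:k]

def answer(question: str) -> str:
    # First try cheap, predictable intent matching.
    for phrase, reply in INTENTS.items():
        if phrase in question.lower():
            return reply
    # Fall back to an LLM grounded in retrieved passages.
    context = "\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from the context below. If the "
                        "answer isn't there, say you don't know.\n\n"
                        f"Context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("how long does shipping to Belgium take?"))
```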
Apr 9, 2024 • 31min

Lasse Rindom: Lying Robots, Chaotic Code, and Other AI Issues – Episode 24

Lasse Rindom

Lasse Rindom both consults with enterprises on AI projects and talks with business and technology experts about their thoughts and discoveries. In both his consulting practice and his podcast conversations, Lasse has discovered both tremendous opportunities and potential pitfalls in adopting enterprise-scale AI solutions.

We talked about:
- his work as an AI leader at Basico
- the origins of his AI-focused podcast, The Only Constant
- the unexpected opportunities that arise from the new ability to work with unstructured content that AI affords
- his quest for use cases that will help identify new governance structures and operational frameworks
- some examples of AI workflows that enable new business capabilities, like the ability for non-coders to query an agent that can write SQL queries for them
- his candor in his consulting practice about the possible pitfalls of AI tech, in particular the consequences of LLM hallucinations
- how current LLMs fall short of natural language, acting more like "chaotic code"
- the unfortunately common belief that generative AI can be applied one way
- one way that he is addressing the "lying robot" problem: using multiple AI agents to correct each other (instead of fine-tuning the models)
- the current strategic AI deficit in the market, resulting in consultants pushing untested engineering solutions
- the differences between how enterprises and SMBs consume tech solutions
- the importance of holistic thinking and staying focused on core problems as you explore AI solutions

Lasse's bio
Lasse Rindom is AI Lead at Basico and a leading expert on AI and automation. He has previously been global technology manager at facility management giant ISS and CDO of Baker Tilly Denmark. Lasse is a frequent debater on LinkedIn, a Gartner Peer Community ambassador, and host of the podcast "The Only Constant," in which he has deep discussions with global thought leaders on what AI and technology mean for us as humans and as businesses.

Connect with Lasse online
- LinkedIn

Video
Here's the video version of our conversation: https://youtu.be/_fdAweq3Wuw

Podcast intro transcript
This is the Content and AI podcast, episode number 24. I generally focus these interviews on content practices, but I'll zoom out now and then to explore the broader strategy and technology landscape. Today I'm talking with Lasse Rindom, a thoughtful and knowledgeable consultant who works with enterprises on big AI projects. He's also a podcaster who talks with business leaders around the world about AI and tech. In his conversations and consulting work, he has discovered a world of lying robots, chaotic code, and strategic deficits.

Interview transcript
Larry: Hi, everyone. Welcome to episode number 24 of the Content and AI podcast. I'm really delighted today to welcome to the show Lasse Rindom. I'll have him pronounce his name correctly in just a minute. I don't speak Danish, apologies. But Lasse is the AI lead at Basico, a Danish consultancy that works with big enterprises in Denmark, and I'm assuming other places as well. Welcome to the show, Lasse. Tell the folks a little bit more about what you're doing there at Basico.

Lasse: Hi, Larry, and thank you for having me on the show today. I'm really thrilled to be here. So my name is Lasse, Lasse Rindom. That's how you say it in Danish, so people know that. I always say it's okay to say Lassie. Everyone knows Lassie, that's the dog.

Lasse: I am the AI lead at Basico, which means I'm defining our go-to-market strategy and our products in the AI space, and we focus very much on the back-office function. So that's your legal, facility management, finance, HR, payroll, and finance IT systems. I'm defining how we want to approach the AI market in that space, primarily in Denmark. Prior to that, I had a stint at an analyst firm, a very short stint, and I've been a chief digital officer and head of digital at an SMB and an SMB consultancy. Plus, I have also previously been very heavy in the automation space, especially around RPA, where I built the framework and the technical setup for ISS globally some years back. So I come from an automation background, but actually my major is in history. So I'm not necessarily a born tech guy, but I think I cover a lot of ground. I draw a lot of long lines, and I try to make sense of everything I know all the time.

Larry: You just mentioned that you're a history major and you're always trying to make sense of things, which leads to how I first discovered you: through your podcast called The Only Constant. I just love the evocative name, like we're in an age where the only constant is constant change. Can you tell me a little bit about where the podcast came from and how it fits in your practice now?

Lasse: Where it came from was basically two things, or three things maybe. I had wanted to do a podcast for a while. That's one thing, something I had as an aspiration. The second thing is that I think the market needs an explorer room, some place where we can explore, because we don't know what's going on right now.

Lasse: The thing with the generative AI explosion, especially since November 2022, is that no one really knew what this was. We didn't have it before. It's something that took everyone by surprise. Even Gartner, even the big GSIs, everyone was taken by surprise by this. This means there's no one who can really explain what it does.

Lasse: Every time, you've heard someone try to explain it. If you go back and look at podcasts from a year ago until now, with people explaining what it does and how it works, you'll see that they've been corrected so quickly. Everything gets dated very quickly if you do an "explain" podcast.

Lasse: But if you do an "explore" podcast, where you explore the business problems, the things you should focus on instead, the big-picture things, the ethics things, and all those things, then you might get to something that has a little bit more longevity, and that's what I'm trying to aim for with my podcast. So it's explore more than explain, I'd say.

Lasse: And thirdly, over the years I have become sort of a LinkedInfluencer. Is that what we call it, LinkedInfluencer? I call it that. I've gotten to know a lot of people, especially in the automation space and also in the tech and business space in general. I've had a lot of coffee chats with these guys, and I thought, "Why not turn these coffee chats, which are really engaging and interesting, into a format that other people could listen in on?"

Lasse: I think that was primarily where I thought I could get my content and my good people from. So that's why I wanted to do it: turn that into something that I could share with other people.

Larry: Nice. And you've had some really great people on, and you're doing this in a practice where you're working with big leaders in big enterprises in Denmark. What are emerging as some of the top-level concerns of folks? I love that so much of our world in the content world is about generative AI, around generating content and working with, and in, content workflows.

Larry: You're at sort of another level, trying to figure out how to help with what a lot of people would regard as mundane back-office stuff, the HR and all that. But even there, you're finding a lot of opportunities for AI, right?

Lasse: Well, the funny thing is that there are a lot of opportunities there, but they're not the typical opportunities you'd expect, I think. People have been approaching this for a year as a chatbot feature, basically something that can generate stuff, because that's what amazed us. But as Kurt Cagle said on my podcast, "This is a machine that lies." He was actually very fascinated by that. We've made a machine that lies. I think that's an awesome saying. Just go back 500 years and say, "Hey, someday we'll make a machine that lies," and people are like, "What?" We have that now. But it also makes it very difficult to work with in almost any area.

Lasse: It's also difficult in the typical areas we thought of immediately, marketing or customer service, because you can't have something that can be jailbroken, or that can be lying, or that can be bland. That's not what you want. You want something that's cutting edge when you connect with your customers, right? And in the back office, you need accuracy. It just needs to work.

Lasse: As you said, it's very transactional. It needs to work all the time, it needs to have no hiccups. This is something that makes the business do what it does best by supporting it. So if you have an AI that messes that up, or lies, or something, then you're having problems there as well.

Lasse: So where does this really fit? I don't know, maybe just telling short stories to your kids or something. That's where it works out of the box, or as an assistant, where you can chat with it and get ideas from it. That's also a place where I think the chatbot works: if you use it in concert with yourself and your own ideas, you use it as a sparring partner you have at hand all the time.

Lasse: But I think what's emerging right now is that people are realizing that this is not just about blank-canvas generation, but also about restructuring, interpreting, translating things into structures that we didn't have before. So basically taking something unstructured, getting some data from it, and then creating something structured that we can analyze on top of.

Lasse: This means that we're also doing something we haven't been able to do for years in technology. We've never been able to work with unstructured data, but suddenly we have a means to do that. I think that's what people will realize over the next couple of years, that this is actually something that's very,
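The episode list above mentions Lasse's approach to the "lying robot" problem: multiple AI agents correcting each other. This passage describes unstructured-to-structured extraction. Here is a hedged sketch combining the two, one model extracting a structured record from free text and a second pass verifying it against the source. The prompts, model name, and invoice example are invented for illustration, not Lasse's actual setup.

```python
# Sketch of the "agents correcting each other" idea: a generator
# extracts a structured record from unstructured text, and a separate
# verifier pass checks the claims against the source before anything
# downstream trusts them. Prompts and model name are illustrative.
import json
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative model choice

def extract(invoice_text: str) -> dict:
    # Generator: unstructured text in, structured record out.
    response = client.chat.completions.create(
        model=MODEL,
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract supplier, total, and currency from the "
                        "invoice as a JSON object."},
            {"role": "user", "content": invoice_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

def verify(invoice_text: str, record: dict) -> str:
    # Verifier: a second pass that checks the record against the source.
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Does every field in this JSON match the invoice "
                        "text? Reply PASS or list the mismatches."},
            {"role": "user",
             "content": f"Invoice:\n{invoice_text}\n\n"
                        f"Extracted:\n{json.dumps(record)}"},
        ],
    )
    return response.choices[0].message.content

invoice = "Invoice #88 from Acme ApS. Amount due: 12,500 DKK."  # toy input
record = extract(invoice)
print(record, verify(invoice, record))
```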
Apr 2, 2024 • 35min

Gerry McGovern: The Environmental Impacts of AI – Episode 23

Gerry McGovern As we navigate our paperless offices and admire our sleek compact computing devices, it can be hard to imagine the impact that our digital experiences are having on our communities and the planet. Gerry McGovern studies the environmental impact of the digital industry. He has uncovered an alarming story of unsustainable growth, toxic side effects, and human misery, which he shares in his book, World Wide Waste. We talked about: how he became an environmental activist focused on the impacts of digital the phenomenal pace of growth of digital infrastructure the impact on local communities of the big data centers that house cloud infrastructure how the compute-intensive nature of AI exacerbates these impacts his observation of the long-standing lack of transparency in the AI industry the "snake oil sales" aspects of AI the troubling use of "forever chemicals" by the semiconductor industry the material impact of computer chip manufacturing how human over-consumption and the environmental impacts of AI overlap his advice for actions you can take to mitigate your personal impact: slow down and use your brain more think local - local foods, local computer storage, etc. prefer text over images and other high-bandwidth communications Gerry's bio Gerry’s latest book, World Wide Waste, examines the impact data waste and e-waste are having on the environment and what to do about it. Gerry also developed Top Tasks, a research method used by hundreds of organizations to help identify what truly matters. Connect with Gerry online Mastodon LinkedIn GerryMcGovern.com Video Here’s the video version of our conversation: https://youtu.be/W5-BMTTEUik Podcast intro transcript This is the Content and AI podcast, episode number 23. It's easy to think of digital media and experiences - including our new AI explorations - as ethereal things that magically traverse the computing cloud to enlighten and entertain us. Gerry McGovern is here to remind you that that's far from the case, that "digital is physical." The data centers that power cloud computing are lapping up water and consuming electricity at an alarming pace, and the arrival of AI is accelerating these troubling patterns of overconsumption. Interview transcript Larry: Hi, everyone. Welcome to episode number 23 of the Content + AI podcast. I am really delighted today to welcome to the show Gerry McGovern. Gerry is the author of the book World Wide Waste: How Digital Is Killing the Planet and What to Do About It. He's also probably better known ... and I originally met him almost 15, 20 years ago when he was talking about customer carewords, and subsequently out of that arose, I think, his Top Tasks methodology. So anyhow, Gerry's a well-established figure in the discipline, and he has a lot of important stuff to tell us about the environmental costs of AI. But welcome, Gerry. Tell the folks a little bit more about what you're up to these days. Gerry: Thank you, Larry. It's lovely to be speaking to you again. I suppose what I'm up to mainly is ... In a sense, I never thought it would happen, but I've become a type of environmental activist focused on the impacts of digital and how to use digital in a better way, in a less damaging way. I don't think digital can be green in any sense, but I think it can be used to help our environment more and at least to reduce the damage it causes to our environment. So, that's the main stuff I'm focused on. Larry: Yeah.
Well, I got to say, I love the idea that you're an environmental activist now, because we need plenty of that. But one of the things about your work is that it has really driven home the point that we think of digital as this ephemeral thing happening out there in the ether, like it has no consequence. You can just throw stuff on a hard drive or share something. But this is still connected to the physical world, right? Gerry: Absolutely. And the first sentence in World Wide Waste says, "Digital is physical," and basically, the cloud ... It's on the ground in these mega data centers. They say that between now and 2027, data centers will add the equivalent of the electricity demand of a Germany, or perhaps a Japan, to the global electricity network. So it's growing at a phenomenal pace, the quantity of infrastructure that's out there. It's very, very much physical. Larry: That's just amazing. And one of the things the people building those giant server farms are good at is that you don't really hear that much about it. They're almost doing reverse PR or something. Gerry: Oh, yeah. It's one of the most secretive, least transparent industries on Earth, and deliberately so. It's all part of the plan. They will never reply to a press call, or very, very rarely. They've become a little bit more open lately, but it's secrecy, secrecy, secrecy, doubled down on secrecy. When they buy, you don't even know who owns the data center until the very last minute of the process. So it's all super, super secretive stuff, because they know they don't have a good story to tell to the local community or the local area, because data centers are absolutely horrible for a local community. There are little or no jobs, a couple of security jobs. There might be 20 people, maybe 40 people maximum running a mega, mega data center, so they bring little or nothing to the local community. They might bring some tax, but behind the scenes, they're often getting more in tax breaks than what they're bringing. So there's not a good story to tell, and therefore they try and stay as secretive as possible. Larry: Interesting. And as you were talking, I'm reminded that these are often in small communities out in the boondocks, because a key driver in these things is the need for water for cooling. Can you talk a little bit about that, the types of communities that are affected by this, and that thing you said, that the local governments are giving tax breaks but getting almost nothing back? Gerry: Yeah. Certainly a large data center, and a lot of them are these big ones, these super data centers ... There are these massive, big warehouses, and they can be quite noisy as well, so you don't want them close to homes. You don't want them very close to homes, and they need a huge electrical infrastructure, so you need utilities and backup. A lot of them have these mega backup diesel generators so that they have all sorts of redundancies. And then they have a massive water demand, hundreds and hundreds of thousands of liters, of gallons, of water a day. And with AI, that's going to grow maybe fivefold or tenfold, because artificial intelligence is much more processing-driven, and the more processing there is, the more heat there is in the environment. The more heat there is, the more need for cooling. So Microsoft's water demand, I think, went up 20% in a year in 2022.
Gerry: So we're talking about mega water demands, and ironically, still, you find them in places like Phoenix or whatever, which is strange ... Phoenix, Arizona in the United States, which is undergoing a hundred-year drought and is essentially close to a desert. But water is really cheap, or certainly historically has been really cheap there, because they've got this massive underground aquifer that has built up over millions of years and that they're essentially draining dry. It's not just the data centers. It's the industrial farming. And now the chip manufacturers, who are incredibly water-intensive as well, are coming there for political reasons, because of the US-China conflicts. So, you've got a lot of incredible material intensity behind the scenes of this stuff. I saw one study that said that by 2030, an average European would be using as much water for their digital activities as they drink on a daily basis. Larry: Wow. Gerry: And that's just the water. Larry: I have to tell you, I lived in Phoenix. Before I moved to Europe, I was living in Phoenix, Arizona, and on a flight back to Phoenix from someplace ... I can't remember where ... I was seated next to a guy who worked for one of those big chip manufacturers, and I said, "What are you doing in Arizona? There's no water here." And he goes, "Oh, there's plenty of water." So like you just said, the chip manufacturers think that, and those are irreplaceable aquifers. Is there data about ... For example, you can probably compute when Phoenix, Arizona will run out of water, or any number of other places in the world. Are there people looking at that? Gerry: There are. I think in the US, and it's not the only place, something like 80% of US aquifers are stressed, so watersheds are stressed. Arizona has a weird plan at the moment. They're looking to send a pipe down to the ... I don't know if it's the Gulf of Mexico, the ocean in Mexico ... and pipe water from the ocean, which is going to be very expensive, because it's much more expensive to desalinate water than it is to use fresh water. Because these data centers need very clean water for all sorts of reasons. You can't get dust or pollutants in the pipes, et cetera, in the process. Gerry: So there are some plans there, but generally speaking, there was a study recently in the New York Times that says the East Coast of the US is sagging because they've extracted so much water. And I think if you drove around Arizona a bit, you would find quite a bit of collapsed land, because when these aquifers empty and the ground subsides and collapses, they'll never fill again, even if it rains, because there's no space for them to actually fill, or at least it would certainly take them a million years to fill. Gerry: So, yes and no. The scientists are saying yes,
Mar 25, 2024 • 34min

Mike Atherton: Serious AI Insights from a Whimsical News Show – Episode 22

Mike Atherton Mike Atherton is well-known in the content world for his work at institutions like the BBC and Facebook and for his co-authorship of the influential book Designing Connected Content. His latest content project appears at first to be less serious. Newsbang is a daily AI-produced satirical news show. Its content is based on real historical news but delivered by AI-created stereotypical newscasters. The result is fun, but the process of creating the show has added real-world technical skills to Mike's professional toolkit. We talked about: his work as a UX writer and content designer his experiments with AI tools, including the suite of generative tools he's using to create Newsbang, a completely artificial daily news program how he accomplished his goal of creating an ensemble sketch comedy vibe his workflow for the daily production of the "news" show some of the surprising traits of his news characters that emerged as AI generated them lessons learned about the cost of producing AI programming, like the costs of prompting the variety of models he uses to build the show, including open-source models that have more lenient guardrails to permit more edgy comedic content how he creates his own guardrails to achieve the effect he's looking for while still keeping the show family-friendly how he developed the technical skills it takes to create Newsbang how his work with Newsbang helps in his day job his hope that more content professionals will follow him into the AI playground Mike's bio Mike Atherton brings years of experience to the UX, IA, and Content Design field, having tackled content challenges at big names like Meta and the BBC. Now, he's focused on developing UX writing systems, exploring the use of AI to do big things with tiny teams. As well as the day job, Mike is the creative mind behind Newsbang, a daily satirical news podcast that's both written and produced using AI technology. With Carrie Hane, he also wrote the book ‘Designing Connected Content’, sharing strategies for seamless digital experiences. Mike lives in the British countryside and loves working from home. Connect with Mike online LinkedIn Newsbang Video Here’s the video version of our conversation: https://youtu.be/lpDa8szujWo Podcast intro transcript This is the Content and AI podcast, episode number 22. Most of the news coverage and social-media conversations around AI and content feel urgent and important. This is serious business, but you can have fun with this technology, too. Mike Atherton has done content work at places like the BBC and Facebook, and he still does proper content design in his day job. Newsbang, his daily, AI-produced satirical news show, has given him both an outlet for his inner comedian and a venue in which to hone important new work skills. Interview transcript Larry: Hi, everyone. Welcome to episode number 22 of the Content and AI podcast. I am really delighted today to welcome to the show Mike Atherton. You might know Mike, he's probably best known as the... Well, he's best known for a lot of things, but he's worked at the BBC and done a lot of other interesting stuff. He co-wrote the book Designing Connected Content with Carrie Hane, which a lot of people in my world appreciate. But he's now a content designer and creative technologist based in the UK. Welcome, Mike. Tell the folks a little bit more about what you're up to these days. Mike: Well, hey, Larry, thanks for having me on. It's great to be back. Yeah, I'm a UX writer and content designer by day.
I work with various product teams in different kinds of companies to write everything from the microcopy, the words on the buttons, through to taxonomy and controlled vocabulary and all the good stuff that we UX writers like to do. And as part of that, for the last few years, I've been dabbling with these wicked AI tools that have come our way and seeing what I could do with them, trying to make them generate content in a particular voice and tone, or in a particular way, to fit in with a brand voice or a product voice. And that's really got me interested in the styles of writing and the styles of content that models can generate if you give them the right push. Larry: Yeah, well that's why, I mean, I'm always looking for an excuse to talk to you. But most recently, in December, you launched this news site called Newsbang, which is entirely AI generated. And I mean, there's a number of taglines I've heard in it, but one of them is "a taste of truth served with a side of satire," and it seems like there's a lot of... But anyhow, coming back to what you were just saying, there's so much about this project I want to ask you about. But one of the first things is that there's a distinctive tone to it throughout. There's a bunch of different personalities in there, a bunch of different topics covered, but you know you're listening to Newsbang, so maybe is that a good place to start with the- Mike: Yeah, absolutely. I mean, the characters are perhaps my favorite part of it. So Newsbang, for the uninitiated, is a daily news podcast, very much modeled around a kind of nightly news bulletin. But in the kind of silly way that you might find in Saturday Night Live's weekend update or old sketch shows, like Not the Nine O'Clock News or particularly my favorite one, The Day Today, which was a BBC comedy show that just turned 30 years old this year. Mike: And in a lot of these shows, the joke is the kind of bombastic, self-important reportage, if you like. And that happens on our show through different archetypes: one emulates a 1980s BBC science presenter, another a kind of hard-hitting investigative journalist, and another a kind of self-satisfied middle-aged sports presenter. And together that gives the show a kind of ensemble sketch comedy vibe, which was really sort of what I was going for, influenced, I think, a lot by these kinds of news parodies, the shows I mentioned previously. But what if they were actually a real show and had to sustain their length and had to really go out every day like the news does? Larry: And the setup of it is, like you just said, it sort of has these conventions around it. And if you just listened to it and weren't really paying close attention, you'd still get that it's a parody site. But the tone, the flow, the structure, everything about it evokes that. How did that come together? Because we were talking a little bit before we went on, and you said, nowadays if you can imagine it, you can do it with AI. Talk me through the steps from imagining it to the first episode. Mike: Well, sure. I mean, like many men of a certain age, I started to fancy having a podcast. I even bought this big stupid microphone, but never really got around to doing it, kind of put that away. And then I was getting into AI, which was my latest all-consuming hobby. And through that I found a now sadly abandoned project called Crowdcast.
It was a piping script on GitHub that would take a feed of Reddit posts, turn them into some chatty podcast segments, and then send them to the ElevenLabs API to turn the text into audio. I go, "Oh, this is kind of interesting." This sort of scratches that podcast itch, but with the added bonus of not actually having to talk to anyone or do anything. Mike: So I started to kind of play around with it. Now, I mean, six or seven months ago when this started, I was in no way any kind of developer. I couldn't make head or tail of Python and what have you. But about the same time, GPT-4 came out and ChatGPT was running the kind of full-fat GPT-4. And it was fantastic as a coding co-pilot, to be able to make sense of these scripts that I found and didn't really understand and get them running. And whenever I had an error message, I could paste that into ChatGPT and it would debug the error. And then through a lot of baby steps and trial and error, I managed to get a prototype together. It was called Relationships on Reddit. And don't look for it, it's not there anymore, because it's kind of embarrassing now. Mike: But basically I cloned a voice, a celebrity voice, Stephen Fry, and I was pulling in real, actual, live Reddit posts from a subreddit, and then it would generate a sort of Agony Aunt segment about these problems, in the Stephen Fry voice. And I ran that for about three days, and I'd listen to it in the morning and try to figure out, am I even interested in this? Is this content worth hearing? And it was on day three that I realized that there was a bug in my code that was stopping the Reddit posts from passing through to the LLM. And so it was happily making up its own stories, unbeknownst to me. And that was a really strange feeling, that a bug in code, rather than just crashing the script or the computer, would fail in less noticeable ways. Mike: So it also sort of brought me to my senses a bit, and I realized that deepfaking a celebrity voice to offer artificial advice on real people's problems was not okay at all. But it did, I don't know, give me that aha moment that you could basically turn code into a piece of media with no intervention, no recording steps, no editing steps, or anything really. End-to-end, you could run a script or a set of scripts that would take information from some external data feed and at the other end spit out an MP3 of a radio show. I mean, what a time to be alive. Larry: You make that sound very simple, and it probably is quite doable and easy nowadays, like you said, with ChatGPT. Can you just walk me through your tech stack for that? How deep does it go? Are you training the model with these Python scripts, or is this for prompt generation? Mike: Sure. Well,
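Here is a minimal sketch of the end-to-end shape Mike describes, from an external data feed to an MP3, assuming the OpenAI chat completions API for the script and the ElevenLabs text-to-speech endpoint for the audio. The model name, voice ID, prompt, and sample headlines are illustrative placeholders, not details of the actual Newsbang stack.

```python
# A sketch of the pipeline: feed items in, an MP3 "radio show" out,
# with no recording or editing steps in between.
import os
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
ELEVENLABS_KEY = os.environ["ELEVENLABS_API_KEY"]
VOICE_ID = "your-voice-id"  # placeholder: any voice from your ElevenLabs account

def write_segment(headlines: list[str]) -> str:
    """Have an LLM turn raw feed items into a spoken-word news bulletin."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Write a 150-word satirical news bulletin from these items."},
            {"role": "user", "content": "\n".join(headlines)},
        ],
    )
    return response.choices[0].message.content

def speak(text: str, out_path: str = "bulletin.mp3") -> None:
    """Send the script to ElevenLabs text-to-speech and save the audio bytes."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": ELEVENLABS_KEY},
        json={"text": text},
        timeout=120,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

if __name__ == "__main__":
    # In a real pipeline these would come from an RSS or Reddit feed.
    items = ["Local man discovers fire, again", "Council votes to rename Tuesday"]
    speak(write_segment(items))
```

The design point Mike makes holds here too: if the feed step silently fails and `items` is empty, the LLM will happily improvise, so the failure mode is made-up content rather than a crash.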
Mar 11, 2024 • 33min

Elizabeth Beasley: A Financial-Industry “Risk Nerd” Navigates AI Adoption – Episode 21

Elizabeth Beasley As AI is storming into content design and operations, Elizabeth Beasley is taking a patient and deliberate approach to adopting it in her practice. Elizabeth works on security and identity products at Intuit, so the experiences she designs have to be reliable and trustworthy, hence her identification as a "risk nerd." She has also navigated big business changes before, like the shift from cable broadcasting to video streaming, and saw in those transitions the benefits of being a cautious and curious adopter of new technology. We talked about: her role as a content designer working on security, identity, and fraud at Intuit how her background in media and technology has made her a slower adopter of new technology like AI how being a "risk nerd" informs her concern around reliability and trustworthiness in AI how her cautious approach to AI adoption may actually put her in a better position to develop trustworthy AI experiences the new collaborators she is working with as AI arrives on the scene her work on an industry standards body around new security technology the utility of having troops back at the fort to keep the old operations running as your org explores new tech like gen AI how her interest in history informs her approach to change the inherent risks in being first to adopt new technologies her "peaceful Wednesday" practice for preventing and coping with stress and burnout how times of rapid change like this can prompt useful career reflections the recent evolution of her thinking on the "seat at the table" issue Elizabeth's bio Elizabeth Beasley is a Senior Content Designer with Intuit’s Identity team. She approaches life with a healthy balance of optimism and skepticism. Because everything is going to be okay, maybe. She used to have hobbies like performing improv comedy and ballroom dancing. Now she enjoys watching other people doing their hobbies on YouTube. Connect with Elizabeth online LinkedIn Video Here’s the video version of our conversation: https://youtu.be/Ny2l_mZgLXQ Podcast intro transcript This is the Content and AI podcast, episode number 21. It's easy to get caught up in the frenetic pace of generative AI technology adoption - unless you have already created rituals to help slow your life down. Elizabeth Beasley created her "peaceful Wednesday" ritual ten years ago to bring some calm to her increasingly fast-paced work life. That practice is serving her well now as she and her colleagues at Intuit develop their approach to incorporating AI tools while continuing to deliver trustworthy experiences. Interview transcript Larry: Hi, everyone. Welcome to episode number 21 of the Content and AI podcast. I am really happy today to welcome to the show Elizabeth Beasley. Elizabeth is a Senior Content Designer at Intuit, the big financial software company. Welcome, Elizabeth. Tell the folks a little bit more about what you're up to these days. Elizabeth: Hey, it's so fun to be here. Yes, I'm at Intuit. Financial services is my life lately, and I work in a fun space. I think it's fun: security, identity. I always describe it to my mom or my friends like, I do the part where you create your account, you sign back into your account, you manage your account, and I make that easy for you with content design. They still don't quite understand that, but that's the space I work in, and I have really, surprisingly, enjoyed it.
I worked in banking previously and got into security, and now I'm sort of obsessed with security and identity and fraud. It's a fun, exciting space to work in, and I also love it because everyone uses it, so it's very relatable and it affects many, many people. So it has a lot of impact. Larry: You can't do anything until you get past that experience that you're designing. Elizabeth: Yeah. Larry: Then you're in and then you can start doing stuff. But you've sort of established your cred. You're not some kind of Luddite about technology. Clearly, you're deep in it every day doing that, and yet the reason we connected and the reason I wanted to have you on the show, I think we connected on LinkedIn, I can't remember exactly how it started, is that you're sort of a slower adopter of AI technologies. And I was like, perfect, I want to get her on the show, because every one of the 20 episodes before this was, and I'm as into it as my guests, just deep into the technophilia and all the new work things around AI. And you're more like, yeah, it's great, and you're studying it, you're staying on top of it, but you're not just diving in with both feet, fangirling about it. Tell me a little bit about how that perspective arose. Elizabeth: Yeah. Sometimes I feel like I'm behind, but then I'm like, I'm just a late adopter. It's okay. I'm a late bloomer. And I think it's partly because I've seen technology changes before. I worked in television for the first 20 years of my career and watched changes, even basically from tape to digital, and that really changed people's jobs. The biggest one, though, that makes me think of the way AI is going, is streaming video. I worked at TBS, and we made that transition from "we are a cable network" to panicking because everything was streaming, and there was a whole TV Everywhere initiative where the cable networks were trying to get you to watch their stuff on multiple devices, and that was kind of the beginning. And we were trying to figure out, what does that mean? What does that mean for our jobs? Elizabeth: We had to produce things everywhere. And it was intense and stressful and scary, and then fast-forward 20 years and I'm looking at it thinking, "That didn't turn out like I thought." It evolved. Streaming is now actually a lot like cable television again. I was telling someone, this is funny, because now you go to Hulu and you can add channels and build your own cable service. So I think the thing that I've been taking away is, it's a long game, and if you get stressed at the beginning, you can burn yourself out and create panic, and you don't really need that in your life. So I'm trying to relax into it. You want to be aware and learn, but I'm also like, you know what? I want to see how other people are using it. How is this going to turn out? Elizabeth: What's the best thing for us? And particularly with AI, which is, to me, radically different, because there are these moral and ethical parts of it that I haven't had to wrestle with in technology before. Before, it was more like, is this helpful? But this is more like, oh no, is this going to be bad? So there's a little bit more weight as well if you adopt early and kind of get in there. So I like to play the kind of watch-and-see game: where is this going, and where do I need to jump in? Larry: Yeah, and I love that you have the credibility of having been through this kind of thing before.
And as you were talking about it, I was thinking about 20, 25 years ago. I remember just fighting constantly with marketing people who wanted to violate people's privacy. And Seth Godin had come along and said, you know permission-based marketing? That's the way to do it. And that's convention today, and all the laws and regulations follow that, but we don't have that now. AI is still like the Wild West. It's still unfolding really quickly. When you look at this, especially with that lens of your TV history, and I love that perspective on this, are you starting to see any things that you're really paying attention to, like, this might be the thing that we look back on and go, boy, that was the wrong thing to worry about? Elizabeth: That's a really good question. Gen AI is just curious to me, because I was talking to a teammate yesterday and she's like, "I just don't want to release it until it's reliable," and I was like, "Yeah, that's the name of the game, right?" Getting reliable results. And so, a lot of times I'm just wondering, I know we're excited about it and we sometimes want to just, let's use it. It's akin to, I got this new chainsaw and I need to paint the house. I'll use the chainsaw. And it's like, well, no, that's not the right tool. So really examining, what's the right tool for this job? It might be a different form of AI than gen AI, and that's something I'm really conscious of, because we get really excited about it. Let's consider the other ways to solve this problem and find the right solution. Elizabeth: Now certainly, we have this new toy, let's see if the chainsaw can work, and we might innovate and find a special way to do it, but I'm really into the use cases lately. I was like, let's look at the use cases and how we solve this problem, and then really examine whether this is the right method. Sometimes you have to go down the rabbit hole of trying it and be like, that wasn't the right method. Larry: Yeah, and as you described it, I love that. I'm going to totally steal the chainsaw-for-painting-the-house analogy, because that kind of gets at it. You feel like there's some of that going on right now, but I think more the point is just backing away from, if all you have is a hammer, everything looks like a nail, with AI technology, and thinking back to the fundamentals. And in your work there's an interesting confluence between what you were just saying about reliability, your teammate's concern about that, and working in financial services and security: you've got a double load of the need for reliability and trustworthiness. Is that part of your concern about this? Are you concerned about the trustworthiness of the experiences you're creating? Elizabeth: Yeah, absolutely. And you just reminded me, I am a bit of a risk nerd.
Mar 4, 2024 • 32min

Maaike Groenewege: From Technical Writing to Prompt Design Leadership – Episode 20

Maaike Groenewege Maaike Groenewege began her content career in technical communication. She is now a leading voice in conversation design for AI. Maaike draws on her technical writing background in her conversational AI practice, having observed that whether you're writing for humans or designing prompts for LLMs, you have to truly understand your audience and consistently provide clear and specific instructions. We talked about: her work over the past couple of years as a prompt designer how the instruction design principles from her days in technical writing and technical communication prepared her for her current role how her early exposure to help desk duties prepared her for the many question-answering responsibilities in her current role how her writing skills, her critical approach to generative AI, and her love of technology combine to give her a unique perspective on conversational gen AI content how retrieval-augmented generation drawing on high-quality content datasets can help set a base level of knowledge for LLMs her opinion that conversational chatbots are a transitory stage on the way to transactional chatbots that can provide self-service problem-solving the workflow for incorporating retrieval-augmented generation into LLMs the similar meaning of the concept of "chunking" in technical communication and LLMs the differences between how LLMs process language and how humans read - and the implications of this for prompt design and engineering the emerging structure for prompts: assigning a role, describing the task, providing a context the differences between conversational prompting, prompt design, and prompt engineering how she works with her engineering partners the difference between the logical inference that knowledge graphs do and the statistical inference that LLMs use how she keeps up with the rapidly changing developments in her field her invention: ALIs, application language interfaces how she uses ChatGPT in voice mode to capture and summarize her thoughts when she's out for a walk her prediction that "the future is bright for those who know how to write" Maaike's bio Maaike Groenewege is a conversation design lead, linguist and prompt designer with her boutique consultancy firm Convocat BV. She coaches both starting and more experienced conversational teams in optimising their conversation design practice, NLU analyses and team communication. Her main focus right now is on how LLMs can benefit enterprise conversational AI. Maaike is the founder of www.convo.club, an online community for more than 700 conversation designers. Connect with Maaike online Convoclub LinkedIn Connect with Maaike at these events European Chatbot and Conversational AI Summit, Edinburgh, March 12-14, 2024 UX Copenhagen, March 20-21, 2024 Unparsed Conference London, June 17-19, 2024 Video Here’s the video version of our conversation: https://youtu.be/3qxxb18BqFM Podcast intro transcript This is the Content and AI podcast, episode number 20. A false dichotomy has arisen in the AI world between conversational prompting in chatbot interfaces and prompt engineering under the hood. Maaike Groenewege works in the middle ground, in a role she calls "prompt design." She also draws on practices from her background in technical communication, after observing that whether you're writing for humans or designing prompts for LLMs, you have to truly understand your audience and always provide clear and specific instructions. Interview transcript Larry: Hey, everyone.
Welcome to episode number 20 of the Content + AI podcast. I am super delighted today to welcome to the show Maaike Groenewege. Maaike is ... Well, she's a principal at Convocat, her company, and she's an actual, genuine prompt engineer. So, Maaike, welcome. Tell the folks more about what it's like being a prompt engineer at Convocat. Maaike: Thank you so much for having me, Larry, and it's such a pleasure to be here with you. Yes, I guess I can say that for the last two years, I've been working as a prompt engineer, or perhaps rather a prompt designer. When I tell people that, they all go, "Oh, that must be really sexy," and, "It's the job of the future," whereas, in reality, I basically write instructions for large language models. Maaike: I guess I wouldn't really associate it with being sexy, because most of this is very much getting-your-feet-in-the-dirt kind of work: Excel sheets, lots of analysis, lots of document analysis and content analysis. I guess it's basically a job for ... Well, can I call them language nerds like you and me? Maaike: So, yeah, right now I'm working for a large Dutch publisher. I help them find out what kind of work we can automate with prompting. It's really interesting. But I've also worked in situations like hyper-automation, where the prompts are not the prompts that you write in ChatGPT, but they are part of a larger workflow. For instance, a workflow where you receive an email, you want to have a first suggestion for an answer, you generate that text, and you put it in an email again, or perhaps in a phone call. So it's not really visible, but it's definitely there in the background. Larry: Oh, interesting. Yeah. Well, you've done so much. I guess, first, let me ... Can you do a quick job description for what a prompt engineer does? Maaike: Absolutely. Larry: You just outlined the main duties, like... Maaike: Yeah, yeah. Larry: But what does that look like day-to-day? What's your new job? Maaike: My new job ... And it's so funny, because it's actually my old job. I feel I'm back to technical writing and technical communication again, because in order to write a good prompt for a machine, I actually apply the same principles that I use for prompting humans: instruction design. So basically ... Maaike: Let's take a situation, like a real-life situation. This is not from my actual client, but it was an assignment I once got. It came from a developer. He's quite well-known in the space. He's like, "Maaike, listen. I need to prompt a newsletter or a one-pager for three different audiences, and it should be based on two or three articles from the news, from actual news, about LLMs and machine learning." He's like, "Well, I prompted it, and I must say that the output is meh." I looked at his prompt, and it was literally like, "Hey, generate a newsletter." Maaike: I'm like, well, if I told a junior editor to create a newsletter, I would give him more instructions. I would tell him something about my target audience. Who is it for? What are the information needs? What is their level of expertise? Because, technical writing 101, don't write for your own level of understanding, but make sure you understand your target audience. Maaike: So it was funny, because prompt engineering is sometimes positioned as a rather technical job or as a very marketing-oriented job. But my job is right in the middle. So what I do is a lot of domain analysis.
I need to know who I'm working for, and in order to determine the information needs of my target audience, I, of course, need to know a little bit about what they're doing. Maaike: I do target audience analysis, user task analysis, all kinds of stuff. Then when I write my prompt, I find I write it in the form of a traditional instruction, and that, of course, is something I've been doing for 25 years. So it's almost like coming full circle, back to the place I never left in the first place, because even as a conversation designer, I still feel very much ... At heart, I'm always a technical writer, because I always support users in completing their tasks successfully or answering questions, solving problems, and these are just new incarnations, I guess, of that job. Larry: As you talk about your background and where you are now, I'm reminded that everybody in content design, conversation design, technical writing ... I mean, technical writing seems to be the most straightforward one. People often study that in college and then go actually do it. But everybody else, the conversation designers and content designers and UX writers I know, they all come from some crazy amalgam of careers and backgrounds. In your case, it just seems like the perfect storm of technical writing plus conversation design. Larry: I guess, tell me, because, well, you first came to my attention a few years ago as a really prominent, well-known, well-regarded, and super-helpful content and conversation designer, and these new agents we're working with, the chatbots and GPTs and all that stuff, are all conversationally based things. Does my idea that you're perfectly positioned for that resonate? Did you feel like this has just come really naturally, or- Maaike: Yes, and it's interesting, because when I started as a technical writer, I wasn't even aware that that was a formal job. It was in the early 2000s. So, a little trip down memory lane, it was the time of a newsgroup called TECHWR-L, where people like me gathered. I was what we call the lone technical writer at the company. So I basically invented my own job. Maaike: I found that I just got really fascinated, especially by how people ask their questions. I also spent some time at the help desk at the same company where I started as a tech writer. The way people ask questions can be so completely different from what you think they want to do, and that's what started to fascinate me throughout my career. Maaike: With conversation design, 20 years later, I felt like I had won the lottery, because, for the first time, we got our user questions handed to us on a silver platter, because especially if you create a chatbot with natural language understanding, you get the literal questions from your users. This, of course, as a technical writer. Maaike: Well,
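Maaike's newsletter example maps neatly onto the role, task, context prompt structure mentioned in the episode notes. Here is a minimal sketch of that structure in Python; the audience details, helper name, and article placeholders are illustrative assumptions, not her actual prompts.

```python
# A sketch of the role / task / context prompt structure, applied to the
# "generate a newsletter" assignment: brief the model the way a technical
# writer would brief a junior editor, instead of "Hey, generate a newsletter."
def build_newsletter_prompt(articles: list[str], audience: str,
                            expertise: str, info_needs: str) -> str:
    """Compose a prompt from a role, a task, and an audience context."""
    role = "You are an experienced newsletter editor."
    task = ("Write a one-page newsletter summarizing the articles below "
            "in three short sections, each with its own headline.")
    context = (
        f"Audience: {audience}. Level of expertise: {expertise}. "
        f"Information needs: {info_needs}. "
        "Do not write for your own level of understanding; write for theirs."
    )
    sources = "\n\n---\n\n".join(articles)
    return f"{role}\n\n{task}\n\n{context}\n\nArticles:\n{sources}"

prompt = build_newsletter_prompt(
    articles=["First LLM news article text...", "Second ML news item text..."],
    audience="business managers new to machine learning",
    expertise="non-technical",
    info_needs="what changed this week and why it matters to them",
)
print(prompt)
```

Swapping in a different audience, expertise level, and set of information needs yields the "three different audiences" variant of the assignment from the same two or three source articles.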
