

Content + AI
Larry Swanson
Content + AI has two missions: to demystify the family of technologies and practices known as artificial intelligence and to democratize the use of AI across the span of content practice.
Episodes

Oct 24, 2024 • 34min
Colleen Jones: AI and The Content Advantage – Episode 39
Colleen Jones
Now in its third edition, Colleen Jones's book "The Content Advantage" has become a classic in the content-profession literature.
The new edition of the book continues to highlight content intelligence and content effectiveness and adds a new focus on the impact and use of AI in content programs. It also takes a fresh look at the enduring concepts of digital disruption and digital transformation, both of which have been accelerated by the arrival of new AI technology.
We talked about:
her work at Content Science and how it informs the forthcoming third edition of her book, "The Content Advantage"
her take on the concepts of "digital disruption" and digital transformation, both of which have been accelerated by the arrival of AI
the title she'd give a movie about the pace of organizational adoption of AI: "Slow and Slower"
how elevating content concerns to the C-suite has garnered better results for companies like the pharma giant Pfizer
how AI can accelerate the implementation of content visions, strategies, and experiences
how AI can improve content intelligence and aid in the assessment of content effectiveness
how the structure, framework, and methodology for assessing content effectiveness remains the same in the age of AI
her push to get organizations to use digital transformation as the lever to take an end-to-end view of their content
how she consciously crafts the language she uses to talk about her consulting services - for example, using the term "end-to-end" instead of "omnichannel"
a correlation that she's identified between operational maturity and AI implementation
how AI might streamline the process of improving content performance
her optimism about the prospects for content professionals in the new AI-dominated tech world
Colleen's bio
A content expert and Star Wars fan, Colleen Jones is the founder of Content Science, an award-winning content firm where she has advised or trained hundreds of the world's leading organizations to become content Jedis. She has worked with many of the Fortune 50, the largest U.S. web properties, the largest nonprofits, and several U.S. government agencies. She also served as the fractional head of content at Mailchimp during its high-growth period before its $12 billion acquisition by Intuit.
A member of Mensa, Colleen shares insights about content, AI, and business by writing for Entrepreneur, MediaPost, Forbes, and Content Science Review and by speaking at events around the world. She has earned recognition as a top content change agent by publications such as Technical Communication and a top voice for content strategy and artificial intelligence by LinkedIn. As a top instructor on LinkedIn Learning, Colleen's courses have reached hundreds of thousands of professionals.
Connect with Colleen online
LinkedIn
Resources mentioned in this interview
Content Science
The Content Advantage, third edition November 2024
Video
Here’s the video version of our conversation:
https://youtu.be/lumGk_5EH6Q
Podcast intro transcript
This is the Content and AI podcast, episode number 39. The arrival of generative AI has upended many corners of the content world. As a long-time content consultant and researcher, Colleen Jones is very aware of this phenomenon. But Colleen is equally aware of the enduring value of intelligent, effective content, and the fact that all content efforts must ultimately engage and motivate actual human beings. When applied thoughtfully and strategically at an organizational level, AI can help achieve all of these goals.
Interview transcript
Larry:
Hi, everyone. Welcome to episode number 39 of the Content + AI Podcast. I am really delighted today to welcome to the show Colleen Jones. Colleen is the president of Content Science, and also the author of the forthcoming book, The Content Advantage, in its third edition. It's been out for quite a while and the new edition has a lot of new additions about AI. Welcome, Colleen. Tell the folks a little bit more about what you're up to these days.
Colleen:
Thank you so much, Larry. It is great to be here, and fantastic to connect with you again. Content Science, we have been doing all kinds of interesting things in and around content. We do a lot of professional services as part of that. We do a lot of research and analysis, and we get the opportunity to do it for clients, but also independently, just delve into things that are of interest to us or that relate to trends that we're seeing. We've been continuing that over the past several years, and I'm really excited with the third edition of the book to bring some of those updated insights, facts, stats, all that kind of good stuff into our current, very interesting situation with AI and content.
Larry:
Yeah, and I think that very interesting in air quotes is appropriate. And one of the things, I can't remember, I read the second edition of your book maybe five years ago, so I can't remember if digital disruption figured as prominently then, but that's how you open the third edition of the book, is with this notion of digital disruption, which I think is really apt in the age of AI, but I think it's also just in general, it's related to digital transformation and a number of other phenomena that are going on. Can you talk a little bit just about what your concept of digital disruption, and how it applies especially to content practice?
Colleen:
Yeah, absolutely. You know what? I mentioned it in the second edition without really having any clue of just how much disruption would happen between the second edition and the third edition, so I made it much more prominent in this third edition. And really, what that is about is the pace, the acceleration of change driven by technology. And right now, what's really driving that is artificial intelligence. At a macro level, big picture view, when disruption happens, that really drives the need for change. Business models might need to change, or just the way a current business model is executed might need to change, with all kinds of implications.
Colleen:
That really is what digital transformation is trying to address. And I know a lot of people see that as jargon, but in the business world it is taken kind of seriously, a lot of big budget around it. And with my book, The Content Advantage, I am really trying to tie in content to business decisions. I thought it was important to mention both digital disruption and digital transformation, and really kind of make the case for how important content is to both of those concepts.
Larry:
It just occurred to me literally, as we were talking that in both the case of digital transformation and the adoption of AI, you get the sense that there's a lot of director and VP and management level people who are getting the charter from on high, "We have to do digital transformation, we got to do AI." And I think that's the level that most of us operate at. Is that a correct assumption on my part? Because you're way more in the management consulting side of this than I am, I think.
Colleen:
Yeah, I think that there's certainly that reactive stance of, "Hey, there's a lot going on here. We really need to take this seriously. Do we really need to get into implementing AI and so on?" AI, in some ways is a shiny object. It is getting a whole lot of attention. There's that kind of reactive stance, but then we're also seeing a little bit more of a strategic approach. Something that I think is interesting is that individual adoption of AI, what we've seen over the past couple of years, especially generative AI, that can be fast. Someone creates their own account and they can start generating content, refining their prompts and so on. But organization-wide AI adoption, it has been slow and it's getting slower. If I gave it a movie title, I think I'd call it, Slow and Slower.
Colleen:
And I think that's a good thing because there's a little bit of pause around all of the potential pitfalls that come with AI. I think there's more realization of just how much impact generative AI can have, how much it affects an organization because content supports just about every business function. So it's far reaching in terms of implications, and so it's an opportunity to get more strategic. I think the slowness isn't necessarily bad. It's an opportunity for organizations who are really looking at potentially implementing AI at a larger scale to think about doing that strategically. And it's a big opportunity for content leaders, professionals or allies of content leaders and professionals to be a big part of that conversation. That's what I'd really love to see more of.
Larry:
Yeah, I've talked to a couple of people on the podcast too, especially in the content design world, in that product content world where they're often in the perennial fight for a seat at the table on these product teams. And when they demonstrate their AI chops, they often not only get a seat at the table, they have the C-suite calling them for advice about stuff. Are there any examples of that in your practice? I'm going to keep in mind throughout the conversation that you do all this kind of independent, curiosity-driven research of your own. Are you finding any places where content people have sort of an edge, where AI gives them a competitive edge in terms of that seat at the table or influence in an organization?
Colleen:
Yeah, absolutely. We've worked with, over the past year, director-level content leaders and above who really are trying to update strategy and operations to factor in AI in the right way, which I think is super exciting and that's the right way to do it, doing everything from a series of AI readiness workshops to really kind of dig into, where are the opportunities, what are the gaps we have to be able to make the most of those opportunities? That type of thing.

Sep 17, 2024 • 32min
Bill Rogers: AI-Powered Assistants, Chat, and Search for Content Platforms – Episode 38
Bill Rogers
Bill Rogers is an experienced AI entrepreneur whose latest venture, ai12z, gives web content platform owners tools to build digital assistants and chatbots and to run gen-AI-powered searches.
We talked about:
his work at his latest startup, ai12z, which builds copilots designed to power content experiences
his use of the term "copilot" as a generic AI capability, to distinguish it from branded uses of the word
the two main capabilities of their copilot: question answering and ReAct (reasoning and action)
his take on RAG architectures and how ReAct fits into them
how integrating copilots into content and commerce architectures can guide users through complex interaction flows that are connected to third-party services
how to ensure that users have confidence in AI systems and that the systems are technically secure
the technical architecture that underlies their copilot platform
how copilots help write queries to search utilities and other information and knowledge sources to help with tasks like complex product comparisons
the variety of UIs their platform provides: search boxes, knowledge panels, etc.
how interactions with copilots can inform an organization's content planning
the importance of including image AI in this kind of platform, to both better understand the content and create more robust ALT text
Bill's bio
Bill Rogers is a visionary entrepreneur with a deep technologist background in AI and digital technologies. Recognized for significantly influencing the evolution of online experiences, Bill founded Ektron and served as its CEO. Under his leadership, Ektron emerged as a pioneering SaaS web content management platform, serving thousands of organizations globally. After Bill sold Ektron to Accel KKR, it merged with Episerver and became part of Optimizely. Bill then co-founded and led Orbita as its CEO, driving innovation in advanced conversational AI. Beyond these startups, Bill co-founded several other ventures and has had an expansive career in digital signal processing and robotics engineering. Bill holds a Bachelor of Science in Electrical Engineering from Boston University.
Connect with Bill online
ai12z
bill at ai12z dot com
Video
Here’s the video version of our conversation:
https://youtu.be/hJPnAvWXBlA
Podcast intro transcript
This is the Content and AI podcast, episode number 38. You wouldn't try to operate an airliner without a copilot, and you shouldn't operate a modern web architecture one function at a time either. That's the case that Bill Rogers makes for his latest AI startup, ai12z. His company builds AI copilots - in the generic, non-branded sense of that term - that enable robust search and discovery, streamline complex tasks like multifaceted product comparisons, improve accessibility, and even help with content planning.
Interview transcript
Larry:
Hi, everyone. Welcome to episode number 38 of the Content and AI podcast. I am really delighted today to welcome to the show Bill Rogers. Bill is a longtime veteran in the content management and technology world. He founded a company called Ektron years ago, which was acquired by Episerver, which is now known as Optimizely. He ran a conversational AI platform long before ChatGPT came out called Orbita, and he's currently the CEO and founder at ai12z. So, welcome, Bill. Tell the folks a little bit more about what you're up to these days.
Bill:
Thank you, Larry. Yes. So, at ai12z, what we're doing is we're focused on building essentially a copilot, giving websites and mobile applications the ability to take advantage of AI to help drive experiences.
Larry:
Nice. And that's a nice, succinct description of what you do, but a lot of websites have chatbots or things like that. How does a copilot... Well actually, first let me back up because copilot is an interesting term. I first became aware of it when GitHub did their coding assistant thing, and then Microsoft has a whole suite of branded products called Copilots. But we're talking about a generic capability. Is that correct?
Bill:
That's correct. I think the term copilot, Microsoft has used quite a bit, but it is a generic term. We actually like to refer to it as a website AI assistant. And if you think about it, in the days of Ektron, we had this phrase, "What do you want your website to do?" And now we are talking about, "What do you want your AI to do for your website?"
Larry:
Interesting. Human needs haven't changed that much, but we have all these new capabilities. I guess what are one or two use cases that have jumped out early in your journey that are really helping people?
Bill:
So, when you think about, "What does a copilot need to do?" So, one of the obvious things is this ability to be able to answer questions. And so when you talk about years back, when people were building chatbots, the challenge was that creating the knowledge for that question answering took a tremendous amount of work, because you'd have to curate each piece of content that you're going to answer a question with. You had to create an intent model. Just an awful lot of work.
Bill:
Today, we have a CMS connector, we ingest the data and we can answer any question that your content actually has. You don't have to redo anything with your content in order to make it usable for question and answering. So, that's the first step, just question and answer.
Bill:
Then there's this concept of ReAct, which is reasoning and action. You enable these agents to do things. It can talk to backend systems like CRM systems or it could talk to any system that you have in your system. You just make a REST API available for it, and all of a sudden we can now use this data to create workflow to accomplish tasks that used to take an awful long time to go do and create, and it doesn't need to be that way at all anymore.
Larry:
Yeah, I know a lot of conversational designers and I've watched them work in Voiceflow and tools like that and hand crafting all those query... all the questions and answers basically, and the intent discernment stuff that they do. There's a lot to that. And so that ReAct, that sounds like a really intriguing... it's like you can get your fingers into any other system that you have. And this kind of reminds me of a... Is this in the family of a RAG architecture where you're...
Bill:
So, a RAG architecture would actually be just an agent to a ReAct system. So, let's just describe RAG. To the users, RAG is a way for you to, instead of using the knowledge of the LLM, use your own content, and the LLM is answering questions based on that content. So, you typically have a vector database, and when you ask the question, it gets the content, and based on the content that it gets, the LLM will analyze that content and build a summary answer from it, actually very, very robust. And so that's a core piece to it.
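To make the retrieval flow Bill describes a little more concrete, here is a minimal RAG sketch in Python. It is only an illustration, not ai12z's implementation; the embed, vector_search, and llm helpers are hypothetical stand-ins for an embedding model, a vector database lookup, and a chat-completion call.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Hypothetical helpers stand in for real services:
#   embed()         -> an embedding model
#   vector_search() -> a vector database lookup
#   llm()           -> a chat-completion call to a large language model

def embed(text: str) -> list[float]:
    """Placeholder: return an embedding vector for the text."""
    raise NotImplementedError("wire up your embedding model here")

def vector_search(query_vector: list[float], top_k: int = 3) -> list[str]:
    """Placeholder: return the top-k content chunks nearest the query vector."""
    raise NotImplementedError("wire up your vector database here")

def llm(prompt: str) -> str:
    """Placeholder: return a completion from your language model."""
    raise NotImplementedError("wire up your LLM provider here")

def rag_answer(question: str) -> str:
    """Answer a question from your own content rather than the LLM's general knowledge."""
    chunks = vector_search(embed(question), top_k=3)   # retrieve relevant content
    context = "\n\n".join(chunks)
    prompt = (
        "Answer the question using only the content below.\n\n"
        f"Content:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)                                  # the LLM builds a grounded summary answer
```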
Bill:
What ReAct does is that there's a large language model that does the reasoning. It thinks about what came in as a question and says, "Can I just answer that question or do I call one of my agents to help me answer the question?" And so, one of those agents can be ReAct... I mean, can be the RAG.
Bill:
So, why that becomes very exciting is that let's say that you want to compare two products. Your RAG has the information about each product in the system. The reasoning engine knows if you said... We'll use an example, sports example. If I said I wanted to compare the stats of Bobby Orr and Derek Sanderson, that's very tough for RAG because that one compare question, are you going to find content in your system that actually does do the comparison? And you're likely not. And so what will happen is that the reasoning engine says, "I'm going to go call the RAG for Bobby Orr, and then I'm going to call the RAG for what's the stats of Derek Sanderson."
Bill:
It gets the answers to those two queries and then it combines the answers to do the comparison, and you get an amazing comparison around that concept. So then you take that step to the next level with a reasoning engine. And the reasoning engine, you tell it about all the tools that you have available to it: email, SMS, CRM, and the list goes on. Google Maps, Google Places. And you then say something to it like, let's say you're a hotel and you said, "What are the directions to the hotel from the airport?"
Bill:
And so the reasoning engine, from its system prompt, knows the address of where the hotel is and it knows where the nearest airport is, and it'll actually call an agent called Google Maps and it passes to that the address of the airport and the address of the hotel, and it generates the Google map with the full map and the link so that you can actually... so you see all the directions just like you would in Google Maps, but you can click on it and now it's in your mobile phone.
Bill:
So, you can see how a hotel can start looking at a reasoning engine as enabling all these third party services. Like if you said, "What are my activities?" Then the system is intelligent enough to say, "Oh, I have these eight activities, would you like to learn more?" And it gives you call to actions to learn more. And you then click on learn more and you see something about golf that you were interested in. It tells you about golf and you said, "Would you like to book a tee time?" You click book a tee time, a form has to come up to collect who are you and it collects your first and last name, your room number. And then it says, "Do you want to pick a date and a time?" So the time slots, when you pick a date, the slots are going to change. So now you pick all the information and then it might say, "Do you want to rent a club car?"
Bill:
And then it collects that data and it'll analyze it, send you an email, register it with the system of record that this booking has occurred.
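A rough sketch of the reasoning-and-action loop Bill outlines may help readers picture it: a reasoning model decides whether to answer directly or call one of its agents (the RAG agent, a directions agent, and so on), then folds the findings into a combined answer. This is a simplified illustration, not ai12z's code; the llm, rag_answer, and get_directions helpers are hypothetical placeholders.

```python
# Simplified ReAct-style loop: the reasoning model picks a tool (or answers directly),
# the tool runs, and the model combines the findings into a final answer.
import json

def llm(prompt: str) -> str:
    """Placeholder: chat-completion call to a language model."""
    raise NotImplementedError("wire up your LLM provider here")

def rag_answer(query: str) -> str:
    """Placeholder: the RAG lookup from the sketch above."""
    raise NotImplementedError("wire up your RAG agent here")

def get_directions(origin: str, destination: str) -> str:
    """Placeholder agent: call a mapping service and return a directions link."""
    raise NotImplementedError("wire up a maps API here")

TOOLS = {
    "rag": lambda query: rag_answer(query),                        # answer from your own content
    "directions": lambda query: get_directions(*query.split("|")),
}

def react_answer(question: str, max_steps: int = 4) -> str:
    notes: list[str] = []
    for _ in range(max_steps):
        decision = llm(
            "You can call tools: rag(query), directions(origin|destination).\n"
            f"Question: {question}\nFindings so far: {notes}\n"
            'Reply as JSON: {"action": "rag" | "directions" | "final", "input": "..."}'
        )
        step = json.loads(decision)
        if step["action"] == "final":
            return step["input"]                                   # the model has enough to answer
        notes.append(TOOLS[step["action"]](step["input"]))         # run the chosen agent
    return llm(f"Answer the question from these findings: {notes}\nQuestion: {question}")
```

A comparison question like the Bobby Orr and Derek Sanderson example would typically trigger two rag calls, one per player, before the model combines the stats into a single answer.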

Sep 3, 2024 • 34min
Jeff Coyle: Creating New Content-Marketing Opportunities with AI – Episode 37
Jeff Coyle
Generative AI tools and LLMs bring the need for a new kind of content awareness in organizations of all sizes.
While some have focused on content creation, Jeff Coyle has grown and accelerated his content-marketing capabilities by leveraging the content discovery and operations improvements that AI can deliver.
We talked about:
his decade-long history in working with NLP, AI, and content
his overview of the rapid progression of AI technology over the past two years
the importance to businesses and enterprises of doing a data inventory to understand their unique strengths
the exponential increases in both the capabilities of the AI services he uses and their affordability
the importance of creating high-quality content in this new AI landscape
how to capture your org's knowledge and use it to fuel your content plans
how journalists are crucial for capturing that knowledge
his take on the current state of content-industry employment
the importance of aligning content and its performance to organizational KPIs
the crucial differences between how you wish people would consume your content versus how they are consuming it and how they might be consuming it
the ongoing difficulties of marketing attribution and how new predictive models that AI affords can help address them
how a "process inventory" is even more important than a conventional content inventory
Jeff's bio
Jeff Coyle is the Co-founder and Chief Strategy Officer for MarketMuse. Jeff is a data-driven search engine marketing executive with 20+ years of experience in the search industry. He is focused on helping content marketers, search engine marketers, agencies, and e-commerce managers build topical authority, improve content quality and turn semantic research into actionable insights. His company is the recipient of multiple Red Herring North America awards, multiple US Search Awards Finalist, Global Search Awards Finalist, Interactive Marketing Awards shortlist, and several user-driven awards on G2, including High Performer, Momentum Leader and Best Meets Requirements.
Prior to starting MarketMuse in 2015, Jeff was a marketing consultant in Atlanta and led the Traffic, Search and Engagement team for seven years at TechTarget, a leader in B2B technology publishing and lead generation. He earned a Bachelor's in Computer Science from Georgia Institute of Technology. Jeff frequently speaks at content marketing conferences including ContentTECH, Marketing AI Conference, Content Marketing World, LavaCon, Content Marketing Conference and more. He has been featured on Search Engine Journal, Marketing AI Institute, State of Digital Publishing, SimilarWeb, Chartbeat, Content Science, Forbes and more.
Connect with Jeff online
LinkedIn
MarketMuse
Twitter
jeff at marketmuse dot com
Video
Here’s the video version of our conversation:
https://youtu.be/Ij18O07YnYc
Podcast intro transcript
This is the Content and AI podcast, episode number 37. The label "generative AI" has led many to focus on using this new tech for content creation, while the real benefits may lie in different capabilities that LLMs and other AI tools afford. In his work, Jeff Coyle has enthusiastically adopted AI, using it to identify new content repurposing opportunities, to capture and leverage unique organizational knowledge, and to dramatically reduce the costs of content operations, discovering along the way new opportunities for content professionals.
Interview transcript
Larry:
Hi, everyone. Welcome to episode number 37 of the Content and AI podcast. I'm really delighted today to welcome to the show Jeff Coyle. Jeff is the co-founder and Chief Strategy Officer at MarketMuse. We talked on my other podcast, Content Strategy Insights, a couple of years ago, and I'm really excited to have him back because one or two things have changed since then. Welcome, Jeff. Tell the folks a little bit more about what you're up to these days.
Jeff:
Oh, thanks, Larry. And I am glad to be back. I am the co-founder and Chief Strategy Officer for MarketMuse, as you mentioned. I'm working on building artificial intelligence and content strategy offerings so that teams can make better decisions about what content they create or what content they update and then execute a lot faster. And so I'm sure we'll get into the details, but my background, I've been in the search space, building products, building search engines, building lead management systems, or selling them for 25 years. And I've been in SEO for about that long as well. There's probably nothing in the SEO space that you could ask me about that I haven't tackled or got knocked over by and got back up and then tackled. But yeah, I'm looking forward to this discussion.
Larry:
There's so much going on in that world. I really want to stay focused on the AI stuff that we might have to slip into SEO a little bit because that's an old practice of mine way back in the day.
Jeff:
Sure.
Larry:
But the first thing I wanted to do, you mentioned the details and do want to get into the details, but what I would love to get, because you're somebody who's been in this world for 20 years and you were talking about LLMs and Prompt Engineering six months before ChatGPT hit the scene. You're clearly embedded in this world. I would love to get your top-level overview of the commercial landscape around just data and data sourcing and the services around LLMs and GPTs and that whole world. Can you give us just a quick high-level overview?
Jeff:
Yeah. Like you said, and I've been doing natural language processing and the artificial intelligence components for, gosh, about a decade now. Thinking about ways that I can do it. I mean, I was trying to figure out how to use language technology to automatically classify documents into categories and into taxonomies, literally in a project 10 years ago. And then before that, thinking about search engine indexing and search engine strategies and building vertical search engines, building intranet search engines, and then the implications of how to use that to be really great at building content and being really great at SEO. Right now we're in a very unique moment, and the world is moving so fast that I think everyone really, really needs to focus on the new features and components that come with some of these language model releases.
Jeff:
We just saw, and I'm dating this in the late summer of 2024, from recent releases with Llama some of these things that had been closed and not accessible. Now you can see the way that things are working, right? The way that they're open, the weights, things that you can tweak. You're able to learn from what's being released a lot more than you were in the past. And that's amazing just by itself. The advancing models that come out, even if you don't modify them yourself, they're progressing so fast that if you have a process in place that's using natural language processing technology or large language models, every time a new model is released, you're talking about savings of factors of 10 minimum. I mean, I have processes where every time something new comes out, I'm able to knock down 90% of the costs, right? When you're talking about the data side of it, there are massive, massive diamonds built into anyone that has any proprietary data source right now.
Jeff:
Inside your business, if you're a mid-market to small enterprise to enterprise, you should be doing a data inventory. What do we have that's special? What do we have that could be used for someone else's benefit based on how fast this market is moving, whatever the use case? If you don't understand the use case, come find somebody like me. Come find somebody like Andrew Amen from 923 Studios, find somebody who is all about knowing how to make use cases with data and turning those things into potential gold mines for your business. If you have a database of customer data, if you have a database of real estate data, if you have a massive search engine index, you can use those things to do magic and you can do it on the cheap now. And it keeps getting cheaper and cheaper. And that's where I don't think people are catching up right now.
Jeff:
They're not catching up to how truly fast and how truly cheap it is to do things that would've cost millions of dollars. And I'm not being hyperbolic there. Millions of dollars only three or four years ago. And I'm a kid in a candy store with these things, right? I mean, I did a proof of concept that would've cost me about a half a million dollars just two or three years ago. And I shocked myself because the total cost of the entire project was a dollar. I mean it was literally a dollar. And I was like, I'm paying more for the coffee that I'm drinking right now than that cost. And I'm like, well, could we scale this? I'm like, hey, let's spend $70. And we did and I'm like, the magnitude of the things that we're doing for the cheap, it's truly staggering. And so I think everybody's really got to think what makes them special, what data do they have or what data do they know about? Maybe it's a partner, maybe it's a peer, maybe it's a data provider, and you can turn it into a partnership and say, hey, you have this thing. We could really do something special with it. That's the new economy with artificial intelligence and with content that nobody's talking about.
Larry:
Yeah. And as you say that, I'm thinking it's probably a rich multi-sided environment too. I'm just picturing, like you just said, if you have the data and people with the data have more opportunities, but people with ideas about what to do with that data, there's also the world of data products, but also just data as a supply for other people's stuff. It just seems like there's so much going on there. And you mentioned the use case.

Jul 31, 2024 • 32min
Cennydd Bowles: Design and Tech Ethics for Our AI Future – Episode 36
Cennydd Bowles
Like most designers who work in technology, Cennydd Bowles has reflected at times on the impact of his work and its ethical implications.
After a couple of decades of information architecture and interaction design practice, Cennydd stepped back from his design work to explore philosophy and ethics in depth.
His explorations have led him to extensive academic study as well as speaking gigs and writing on the subject, including a book, Future Ethics.
We talked about:
his transition from interaction design to tech ethics
his origins in the information architecture world and his career, including a stint at Twitter
how we as designers have missed predictable mistakes and patterns that ethicists have long known about
how he got hooked on philosophy and ethics
his 2018 book on the connections between the worlds of philosophy and design, Future Ethics
the ethical issues that can arise in even a seemingly harmless practice like A/B testing
his prediction that AI will in the not-too-distant future permit almost fully automated product development and the risks that that brings
how the difficulties of measuring trust might exacerbate the trust issues that arise with AI
the "magical" nature of AI his observation that "the problem with magic is it's intentionally deceptive"
a new orchestrator role that he sees coming with AI
his pessimism about the prospects for humans over the long term in the AI economy
how Cory Doctorow's notion of "enshittification" manifests in the design and AI world
what he sees coming: "rapidly iterating mediocrity rather than considered excellence"
the power, albeit diminished recently, of employees to influence ethical decision-making within organizations
three books he recommends (links below)
his advice to designers to listen to and connect with philosophers and learn from their prior work on ethics
Cennydd's bio
Cennydd Bowles is a technology ethicist and interaction designer, author of Future Ethics, and a recent Fulbright Visiting Scholar at Elon University. Cennydd’s views on the ethics of emerging technology and design have been quoted by Forbes, WIRED, and The Wall Street Journal, and he has spoken on responsible innovation at Facebook, Stanford University, and Google.
Connect with Cennydd online
LinkedIn
Cennydd.com
Tech ethics books
Future Ethics, Cennydd Bowles
Design for Real Life, Eric Meyer and Sara Wachter-Boettcher
Ethical Product Development, Pavani Reddy
Ethics for People who Work in Tech, Marc Steen
Video
Here’s the video version of our conversation:
https://youtu.be/MbfK7AnPa-0
Podcast intro transcript
This is the Content and AI podcast, episode number 36. In the flurry of activity launched by AI-technology investment, ethical considerations have been left largely unexplored. Cennydd Bowles is an accomplished interaction designer who has spent the last several years studying and writing and speaking about tech ethics and responsible innovation. What he sees unfolding now concerns him, leading him to predict that the near-term future is more likely to bring "rapidly iterating mediocrity rather than considered excellence."
Interview transcript
Larry:
Hi, everyone. Welcome to episode number 36 of the Content and AI Podcast. I am really delighted today to welcome to the show, Cennydd Bowles. Cennydd is a technology ethicist and interaction designer based in the UK. Welcome, Cennydd. Tell the folks a little bit more about what you're up to these days.
Cennydd:
Hey, Larry. Well, so let's see. I've just got back from America, so for the last six months, I've been in Elon University, North Carolina as a Fulbright visiting scholar. This is really a large part of my transition, essentially, from the days of UX and product design within industry, and transitioning from that into academia, and particularly philosophy, philosophy of technology, and ethics of technology.
Cennydd:
These days, I'm now essentially figuring out what's next. I'm finishing up a master's dissertation right now on the topic of the ethics of A/B testing, which I've got a lot of experience seeing inside companies, and think maybe I can offer something about looking at the ethics of it. After that, well, probably a lot more writing, probably a book or two. Then I think I'm probably heading down the academic path, so probably a PhD in some sort of philosophy, of technology, or computer science somewhere in that kind of space.
Larry:
Oh, great. I'll have to check back in. I'd love to see where... Getting into the details of this. You just mentioned, well, I guess I would love to talk just a little bit more about your transition, because you've been an interaction designer for a long time. I can't remember exactly how long, but we've talked about this and a little bit about your transition, but can you talk a little bit more about what motivated you to go from interaction design into ethics?
Cennydd:
Yeah, you bet. Yeah, so I started off as an IA back when that term was far more sort of current, I suppose. I read the Polar Bear Book, which some of your listeners may well know. This is Louis Rosenfeld and Peter Morville's book. I started, I guess, in about 2002, so it's been 20 plus years that I've been designing digital products. I don't like the idea that you can design the experiences, but interaction design, UX design, whatever you want to call it, for a range of companies, a lot of consulting, a bit of freelance.
Cennydd:
I also worked for Twitter for three years, where I was heading up the design team in London. It was after Twitter, actually, that I started to consider, well, maybe there's something that we're missing here as a community, and maybe there's something I can offer. It wasn't that I was sort of filled with horror and revulsion for what I'd seen inside Silicon Valley. It wasn't that I looked back on my career and said, "Wow, I've made a lot of mistakes."
Cennydd:
Of course, I have, and a few things I wish I could have ethically questioned at the time, but then that's common for all of us. I had an interest in the topic. Just even as a teenager, I was just interested in ethics as a concept, but I have no training in it. My undergrad was in physics, I had a master's in IT as well. I didn't really have any kind of philosophical or ethical background. When I left Twitter, I had some shares, and I sold them, not huge amounts, I was a Silicon Valley thousandaire rather than millionaire, but I didn't have to rush into the next thing.
Cennydd:
I could afford to say, "Okay, what do I want to do? What's going to be the next right step for me?" I thought, well, I don't want to rush into a job immediately. I want to poke at this ethics thing. I think there's something here, and I don't understand it, and maybe there's something I can do to try and raise that, the profile of ethics within the design community and the technology community. I started reading.
Cennydd:
I got myself a reader's card for the British Library, and I sat there, and I tried to read philosophy. That's quite hard to do without any background in it. There's a reason why it's seen as a complex topic. It took me a while to find the right types of things, but eventually, I stumbled across some work that blew me away. I thought it was just fascinating, complex, and perceptive, some of the work that I was reading by philosophers and ethicists, and also writers, and artists, and critics.
Cennydd:
They'd been looking at the social impact of technology for decades. What occurred to me is that we just hadn't been listening. We'd been in this space, not really heeding their advice, not really listening to some of the warnings that they might've shared, and just convinced we were the smartest people in the room, and that we would figure it out for ourselves along the way. We're not the smartest people in the room, I'm afraid. You read some of this work, and you recognize a lot of the mistakes and the patterns that you see within the modern tech industry.
Cennydd:
It just put its hook in me, and eventually it got to the point where I said, "Actually, I think this is the direction I want to go. I don't think I want a regular kind of mainstream type design role anymore. I think I want to see what I can do to act as a translator, essentially, between the disciplines of design, and technology, and product, and the world of philosophy." That culminated in a book which I released in 2018, which is called Future Ethics.
Cennydd:
Then ever since then, I've been trying my best to make a living consulting on responsible design and technology, doing some academic work, talking, writing, speaking, all that kind of stuff, to try and influence the industry, frankly, to raise its standards, to consider ethics as more central to what it does. I think I've been partially successful in that. There's definitely been a change in how those discussions are happening since 2015, '16 when I started in that space.
Cennydd:
I'm not saying we're anywhere near winning that particular battle, but I think we're starting to see some slow change. I think that's going to be my continued role.
Larry:
Nice. We were talking before we went on the air that your current study, you're working on your master's dissertation, you said, and you're working specifically on A/B testing. I wonder, that seems like a really good, is that a good lens into tech ethics in general?
Cennydd:
I think it can be. I think one thing that makes it a good, almost sort of microcosm of how the tech industry thinks about ethics, or fails to think about ethics, is that A/B testing is very rarely questioned as something that's commonplace. Well, of course, we A/B test everything, and I've been in companies since 2007 when we A/B tested... Not really, there hasn't been a lot of focus on, "Well, should we A/B test,

Jul 23, 2024 • 31min
Sharon Ni: Merging Conversation Design and Content Design – Episode 35
Sharon Ni
One of the most engaging aspects of generative AI products is their conversational interfaces. This has led many content designers working on AI products to develop skills in conversation design.
Sharon Ni works on both conversational AI products and script-driven chatbots in her content design role at Cisco. She has developed her conversational design and technical AI skills by attending conferences, hackathons, and other events, by networking extensively, and by experimenting constantly with AI and conversation tech.
We talked about:
her work on chatbots and AI tools at Cisco
an overview of the content design guidance chatbot she built
her addition of "conversation designer" to her resume
the evolution of the people ecosystem she works in, which now includes more engineers and data practitioners
the professional development that she's done to prepare her for working with AI and collaborating with her more technical collaborators
how participating in hackathons and other events has helped her advance her AI skills
some of the tools she uses in her work, including spreadsheets, Miro, and Voiceflow
her personal interest in building chatbots and how it's helped her in professional work
the content design repository where she stores the conversational content she works with
how she helps her colleagues understand how to best use AI
her new responsibilities around assessing the technical feasibility of product ideas
her advice to "just do it," to start building your own AI projects and connecting with others who share your interest
Sharon's bio
I love writing products. I hate writing about myself. So here’s five quick things about me and my work in AI:
I’m a content designer at Cisco. Currently working on the Cisco AI assistant and Cisco.com chatbot.
I like trying and building different chatbots myself - I recently built a content style guide chatbot that can help people review their copy and find guidelines.
I’m a fierce advocate for content research and like to use data to inform my content design decisions.
I have a background in Psycholinguistics and received a master’s degree from Middlebury College in 2023.
Huge fan of this podcast.
Connect with Sharon online
LinkedIn
Video
Here’s the video version of our conversation:
https://youtu.be/4HgM2hp5hpM
Podcast intro transcript
This is the Content and AI podcast, episode number 35. One of the main attractions of generative AI products is their conversational interfaces. This basic characteristic has drawn many content designers into the adjacent field of conversation design. In her work on chatbots and conversational AI products at Cisco, Sharon Ni has applied conversation design techniques and also learned a lot about the engineering side of AI, sometimes even advising her colleagues on the technical feasibility of their product ideas.
Interview transcript
Larry:
Hi everyone. Welcome to episode number 35 of the Content + AI podcast. I am really delighted today to welcome to the show, Sharon Ni. Sharon is a content designer at Cisco, is doing really interesting stuff with AI and other technologies there. Welcome, Sharon. Tell the folks a little bit more about what you're up to these days.
Sharon:
Yeah, hi Larry. Very nice to meet you and excited to be here. And as you mentioned, I'm currently working on Cisco AI system for security, which is part of the Cisco AI ecosystem. And I'm also working on a chatbot that's on the cisco.com website right now.
Sharon:
And other than that, I am also working with the Voiceflow team to build an AI powered content design, style guide chatbot that can help our design partners to find the right guidelines and also review copy based on the guidelines, basically. It's not going to write the copy for them, but it will provide recommendations based on the good examples and bad examples that I fed into the chatbot and also the templates. So yeah, that's what I do.
Larry:
Well, it sounds like you're definitely earning your paycheck, at least three major things going on there. I would love to start with the content design guidance chatbot that you mentioned, because that's like... I think that'll be of interest just probably a lot of people are working on similar kinds of things. Can you talk a little bit just in general about... You mentioned that it's not so much doing, writing for people, but it's more like style and voice and tone and stuff. Anyhow, can you talk a little bit about how that chatbot works?
Sharon:
Yeah. So basically, I injected all of our guidelines into this chatbot. I kind of rewrite it because you can't just put the same... the guidelines into the chatbot. It's not going to recognize it very easily.
Sharon:
And so I work with the Voiceflow team. They help me to write the code part. And then right now I'm just adding more examples from our product, the copy in our product, and also some good examples, and we also need some bad examples so that the AI will be able to recognize it and learn from it. And also the templates that you have to provide with the... what kind of response you want this chatbot to produce in a certain format. The reason why I wanted to create this was because we always get a lot of repetitive questions from...
Sharon:
During our office hours or in our help channels, people are asking about whether or not it's okay to capitalize certain words or sentences. And also they're asking about some words that's already... they're in our guidelines. So that's why we wanted to create this chatbot so that people don't have to look through our guidelines. They can just type in using natural language and to find the right thing that they're looking for. Yeah.
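For readers curious what feeding guidelines, examples, and response templates into a bot like this might look like under the hood, here is a rough Python sketch. It is an illustration only, not Sharon's Voiceflow build; the sample guidelines, the response template, and the llm helper are all assumptions.

```python
# Rough sketch of a style-guide chatbot prompt: guidelines rewritten as short rules,
# paired good and bad examples, and a response template the model must follow.

def llm(prompt: str) -> str:
    """Placeholder: send the prompt to a language model or chatbot platform."""
    raise NotImplementedError("wire up your LLM or chatbot platform here")

GUIDELINES = [
    {
        "rule": "Use sentence case for buttons and headings.",
        "good": "Save changes",
        "bad": "Save Changes",
    },
    {
        "rule": "Avoid jargon; prefer plain, direct verbs.",
        "good": "Connect your device",
        "bad": "Initiate device onboarding",
    },
]

RESPONSE_TEMPLATE = (
    "Guideline: <the rule that applies>\n"
    "Suggestion: <revised copy, if the user shared copy to review>\n"
    "Why: <one-sentence explanation>"
)

def answer_style_question(user_message: str) -> str:
    """Point the user to the relevant guideline and suggest a revision; never write copy from scratch."""
    rules = "\n".join(
        f"- {g['rule']} (good: '{g['good']}' / bad: '{g['bad']}')" for g in GUIDELINES
    )
    prompt = (
        "You are a content style guide assistant. Do not write copy from scratch; "
        "only point to guidelines and suggest revisions.\n\n"
        f"Guidelines:\n{rules}\n\n"
        f"Always answer in this format:\n{RESPONSE_TEMPLATE}\n\n"
        f"User question or copy to review:\n{user_message}"
    )
    return llm(prompt)
```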
Larry:
Right. And you're building that with Voiceflow. And it's interesting, you still have the job title: content designer. But you're doing an awful lot of conversation design.
Sharon:
I know. I know.
Larry:
Working with Voiceflow and all that. Was that a new skill to you? Because you've been doing this, what, a year or so that you've been working on these chatbots?
Sharon:
Yeah, like a year.
Larry:
So you've kind of upskilled to become a conversation designer as well as a content designer?
Sharon:
Yeah, I think so. And I think I started calling myself conversation designer very recently, because I feel like all my projects right now are AI or chatbot related. But also, at the same time I feel like the conversation design work that I'm doing, just wanted to be clear that might be different from what other companies are doing or other content designers are doing.
Sharon:
But I think basically right now I'm doing a lot of the writing for AI and also the writing for chatbots. But also, at the same time I'm working with a lot of design team, marketing, and also sales team to just think of those strategies for AI. So it's more like a new experience to me, but I find it really interesting and I had a lot of fun with it.
Larry:
Yeah. And you're reminding me of... There is... It seems like generalists, or not so much generalists, but people with versatile skill sets are really going to thrive it seems like in this age of AI, because what you just described... And not just skills, but also the ability to collaborate with new and different people. Like the conventional content design roles, there's the product and engineering and design colleagues where you just mentioned that you're working... Well, this has to do with the nature of the products. You're doing the sales and marketing folks.
Larry:
But you've also mentioned, I know in your AI work you're working with machine learning engineers and data scientists and stuff. Can you talk a little bit about how the people ecosystem around you has changed over the past year?
Sharon:
Yeah, yeah, definitely. I would say in the past I've never really worked really closely with the engineers in the past. Just for our team, we mainly work and support our designers. We're more like a service. And also, because we have a super, super small team. We only have three content designers in our team. So a lot of times we're not the one who actually created the copy at the very beginning. We're more like a reviewing, we're helping them to review and also to edit their copy. And also we have our office hours and help channel to help them answer UX writing related questions.
Sharon:
And right now, I think I'm more embedded in those AI projects from the very start of the project. And I'm doing more than just writing. I know people say content designers spend only 10 or 20% of their time doing the writing work, but right now I feel like it's less than that.
Sharon:
We're actually doing more thinking than writing, which is really interesting to me. And I'm in this AI design team and we have our designers and we have our engineers, machine learning experts, and also AI experts and data scientists.
Sharon:
We work really, really closely together because what we're doing right now is we're all trying to figure out together. We don't really know what we're doing. But it's great to be able to understand and also to learn from other people and to learn what they do and also what they know. And I learned so much from especially the engineers about AI and especially the technicality side of AI and also the limitation of AI, what we can do with AI, what we can't do with AI. So I think this is super important.
Larry:
Yeah, that's one thing that has come up in a lot of my conversations, is the level of technical skill that is required to do this is a little more than a lot of conventional content design roles. Was that a challenge for you or did it come naturally or how did you get up to speed to work with these more technical collaborators?
Sharon:
Well, it wasn't easy, I would say,

Jul 17, 2024 • 34min
Andrew Stein: Content Design and AI Leadership – Episode 34
Andrew Stein
Like many content designers in the fall of 2022, Andrew Stein was concerned about the possible negative impact of generative AI on content and design practice. And his concern was heightened by the large number of content designers on his team.
Since then, Andrew has discovered many ways to apply AI in his content design work, both in conventional digital-product design and in content work on AI products.
He has also discovered a happy additional benefit of taking the lead on AI. His expertise has led to exciting new collaborations and leadership opportunities.
We talked about:
his work as a content design and AI leader
his take on the best ways to use AI in content-design practice
how to maintain focus on the fundamentals of content as you work with AI to create new content or manage and validate existing content, and a tool he is developing to automate this
new content-employment opportunities that he sees emerging
the clean slate on which content people can create their new AI roles and responsibilities
some of his techniques for demonstrating how your content skills can help your AI collaborators:
find opportunities to serve
adopt a learner's mindset
"just do" - experiment with tools on your own
some of the people he follows and resources he has consulted as he has developed his AI expertise:
Noz Urbina
Leah Krauss
the conversation design community, in particular Maaike Groenewege
his encouragement for all content designers to find a balanced approach to incorporating AI into their career
Andrew's bio
Andrew is a Director and Principal Content Designer at a financial services company. He’s led content in smart home, social media, AI robotics, and FinTech. Andrew’s experience includes both consulting, and companies like Lowe’s, Wells Fargo, Truist Bank, and Meta. Andrew is currently focused on the way AI tools serve the content design process, and bringing a content-first approach to the development of new AI products and services.
Connect with Andrew online
LinkedIn
ADPList
Video
Here’s the video version of our conversation:
https://youtu.be/hxoMSzyDCFk
Podcast intro transcript
This is the Content and AI podcast, episode number 34. When generative AI burst onto the scene there were plenty of reasons for content designers to be anxious. Andrew Stein channeled his concern into a deep exploration of AI tech and how it might be applied in content work. As a design leader, he has discovered a number of ways that content designers can use AI tools, and build AI products. As an advocate for content practice, he has found that his AI expertise opens many new doors for influencing his business collaborators.
Interview transcript
Larry:
Hi everyone, welcome to episode number 34 of the Content and AI Podcast. I am really happy today to welcome to the show, Andrew Stein. Andrew is an independent content design leader. He works currently for a big financial services firm. He also has his own consultancy on the side, does various content things including AI stuff for folks. So welcome, Andrew. Tell the folks a little bit more about what you're up to these days.
Andrew:
Yeah, very cool. Well Larry, super-happy to be here and as I mentioned earlier, I've seen all the episodes and get so much out of them every time, so really happy to be here. Yeah, right now, like you mentioned, I'm doing quite a bit of work both for the company I work for and on the side, working in both AI projects and traditional content design projects and really where those two merge together, both helping to build teams and build structure around how we approach AI from a content perspective, which I think is really key with all of this. And also how to bring AI into the work that we do as well as content designers working on traditional products and services as well.
Larry:
Yeah, I think that latter is probably the more familiar scenario for most of my listeners, I guess. I do know a number of people who are working on AI products, but I think the more common use case for many people is using AI in their day-to-day, just good old-fashioned content design work. Especially as a leader, how are you implementing that and encouraging your folks and just tell me a little bit about that.
Andrew:
Yeah, well I think at first, all of us were wondering does this do the writing for us? Does this replace us? There was quite a bit of fear and trepidation or looking at it very cynically like, "Oh, this thing can't do anything for me. It's not a writer, I'm the writer in the room." I think there's been a spectrum of views on it, but all looking at it as the writer. Is it going to be the writer? Can it replace the writer? No, it can't. And what I've really landed on, or at least at this point in time is that, no, it's not the writer, but it's a really great assistant to the writer. And so that's really the perspective that I'm coming at it from with the teams that I work on, with my own personal work is really seeing it as a really powerful tool.
Andrew:
Noz Urbina, I think he said, and maybe I've inflated the number, maybe it was 100 and now I've made it 1,000, but I believe he said, "Think of AI not as your superhero that's going to do everything and the magic bullet. But think of it as like 1,000 interns that can do way more than you can, but they can also do way more than you can really poorly with poor instructions or really great as long as you give them great instructions." And so that's really the area where I think AI fits in as a tool for content designers. It's definitely not replacing you, it's not going out ahead of you and doing all the work and then you're wondering where you fit into the picture.
Andrew:
It's very much human in the loop before, during, and after, and it's kind of like a companion or a sidekick that can help you do things, can serve as another person in the room or a lot of other people in the room to give you feedback on ideas. But very much from that perspective and not nearly as much as, "Oh, it can go out and do it for me. I don't really have to think about it." I think you have to think even more now to use AI well, but if you do that, it can be a really powerful tool and that's kind of the spot where I think we're coming-
Larry:
...I don't know, I've managed as many as 15 people at a time, but that's only 15. And I've also done a lot of volunteer wrangling at work conferences and things like that. And that notion of managing people who are enthusiastic and pretty knowledgeable, but really just not as far along in their professional development as you are, managing all that requires a lot of guardrails. How do you manage that? Everybody would love to have 1,000 precocious, brilliant people ready to help them, but they just don't know as much as you do about the job. How do you constrain that enthusiastic creative energy that LLMs bring to the game?
Andrew:
They're definitely way too enthusiastic most of the time, and I think if you try and just generate some content without a lot of structure to what you're trying to generate, you get that way over enthusiastic, too many words, too many repeated words, that young person that's learning a lot of cool words and wants to use them all. I think that's what we see a lot of times. It really is about, and I want this to be an encouragement to all content people out there, is that really the fundamentals of content creation are still there and if you bring those into the scenario, you'll have a much better experience with them. But it really is about diving into those core principles.
Andrew:
So when thinking about creating content or checking your content, being able to connect whatever AI tool you're using to really good sources and good existing content, so like a style guide or a content design system, having that built in and really fine-tuned so that anything you're doing is within those guardrails is really important. And obviously you can expand out from there. I think that's the base. So if you're using something like ChatGPT to help you ideate or to check your work, it's really about, okay, again, like you said, guardrails, what are the guardrails that keep that content generation or that content check or that brainstorming companion within the scope of whatever you're working on? So if you're in an organization, making sure that your LRC guidelines are built into that. That's another way to have checks and balances on the content you're creating.
Andrew:
Even tying into research and personas and having all these different pieces of data that can create this world that LLM or that ChatGPT tool can live within and work through is really important. But you can come at it from a few different angles. So you can think about it like that content creation and you're generating new ideas and new concepts or new content ideas and it's coming to you already within that framework, or you can take existing content and check it against those things as well. Does this match those guidelines? And that's where I think if you're using a tool, you want to build that tool in such a way that it's giving you the reasons that it's making decisions or the pieces of data that's factoring into what it's giving you.
Andrew:
I think that's been a really key thing for us, for the projects I've worked on is having that validation that, "Oh yeah, this made that decision considering this piece of information." Because not only does that give you a reassurance that yes, I'm within the guardrails. It also tells you where it's getting it wrong and where you can go update those guardrails to get better outputs. And then the more people that use it, you know that they're all working in the same frame ... It is just like your style guide. You want everybody to be referencing it. Well, it's the same thing now, but now you've got a tool that's also referencing it.
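As a rough illustration of that pattern - grounding a content check in a style guide and asking the model to cite the rule behind every flag - here is a minimal Python sketch. It assumes the OpenAI Python SDK; the model name, guide excerpt, and output format are illustrative placeholders, not Andrew's actual tooling.

# Minimal sketch of a style-guide-grounded content check that returns its reasons.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set; the model
# name, guide excerpt, and JSON shape are placeholders, not anyone's production setup.
from openai import OpenAI

client = OpenAI()

STYLE_GUIDE = """\
1. Use sentence case for headings and buttons.
2. Address the reader as "you"; avoid "the user".
3. Keep sentences under 25 words.
"""

def check_against_guide(draft: str) -> str:
    """Ask the model to flag guideline violations and cite the rule behind each one."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a content-review assistant. Check the draft against the "
                    "style guide below. For every issue, return a JSON object with "
                    "'excerpt', 'rule_cited', and 'suggested_fix'. Flag nothing that "
                    "the guide does not cover.\n\n" + STYLE_GUIDE
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(check_against_guide("The User should Click The Button to proceed."))

Because every flag cites a rule, reviewers can both sanity-check the suggestion and spot where the guide itself needs updating, which is the feedback loop Andrew describes above.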

Jul 11, 2024 • 31min
Anna Potapova: Managing AI Content at Scale for an Ecommerce Giant – Episode 33
Anna Potapova
Generative AI creates new opportunities to create and manage content at scale. And scale is definitely required when crafting content experiences for one of the world's largest ecommerce companies.
Anna Potapova is incorporating gen-AI across the span of her work at AliExpress: content creation and management, localization, personalization, and other areas where her strategic-content mind guides her.
We talked about:
her recent promotion to a new leadership role at AliExpress
which types of content are most amenable to being generated by AI
the standards they use to guide the creation and ensure the quality of AI content
the crucial role of content designers and localization experts in the ongoing iterative improvement of AI content at a large scale
how AI enables the democratization of content creation
the large percentage of user-generated content on the AliExpress platform
how AI helps her team with personalization
how gen-AI content helps them scale their marketing personalization efforts
the importance of inviting yourself to machine learning and data science meetings to show the value you bring
the value of case studies when communicating with internal stakeholders to show the value you can bring
the importance of staying grounded in business objectives when developing relationships with your collaborators
how a strategic approach to your work can help your org use AI most productively
how the shift from hand-crafted content to AI content at scale manifests in content operations
her plans to explore how AI can help evaluate content quality and conduct content audits
the concept of hyper-localization, which addresses very specific regional and cultural differences
the importance of proactively engaging with product and tech colleagues to ensure that standards-backed content powers AI products going forward
Anna's bio
Anna Potapova is Staff Content Strategist at AliExpress (part of Alibaba Global Digital Commerce group). She changed team positioning from pure localization to Content Design, built a style guide and a system to maintain it, established standards for AI generated content in multiple languages and improved business metrics while reducing production costs. Anna has been featured on several podcasts (Content Strategy Insights, Writers of Silicon Valley, Localization Leaders), joined UX Evenings @ Google and helped to build a content community in China.
Connect with Anna online
LinkedIn
Video
Here’s the video version of our conversation:
Podcast intro transcript
This is the Content and AI podcast, episode number 33. When you create, manage, personalize, and localize content at scale for a global ecommerce giant like Alibaba, you need all of the automation help that you can get. In her role as a content strategy and design leader at AliExpress, Anna Potapova is harnessing the power of generative AI tools and techniques to address customers' individual preferences, to help third-party vendors create better content, and to streamline their internal content design operation.
Interview transcript
Larry:
Hi, everyone. Welcome to episode number 33 of the Content and AI podcast. I'm really delighted today to welcome to the show, Anna Potapova. Anna is a staff content strategist at Alibaba, the big e-commerce merchant in China. She works specifically for AliExpress. Welcome to the show, Anna. Tell the folks a little bit more about what you're doing these days.
Anna:
Thanks, Larry. Happy to be here again. Should I mention that since my last appearance on your podcast, I was on Content Strategy Insights with Arnaud? Since my last appearance I was promoted, which I attribute exclusively to your podcast. Thank you so much for having me again.
Larry:
That's too awesome. Thank you.
Anna:
Recently, I've been talking a lot about AI and my team has been doing a lot of work in this area. Last week I actually spoke in front of an audience of over 200 people, in Chinese, about how my team is harnessing the power of AI as we need to generate a lot of content every day.
Larry:
Nice. And you're at one of the biggest e-commerce companies, maybe the biggest on the planet - I mean, there's a lot of content. Every one of those products needs something said about it. And every correspondence you have, there's so much going on there. And a lot of it, I'm imagining, is either routine or data-driven or in some way amenable to the use of AI. Can you talk just a little bit at a high level about how you're using AI? And also, I just want to note for folks that in all of these conversations, we're going to dance around anything remotely proprietary. We're just going to talk in general about how big enterprises can work with big, vast content repositories and how AI can help. Can you talk a little bit about how you use AI to generate content?
Anna:
Well, first of all, it all comes down to what types of content can really be, dare I say, outsourced, can be created with AI, because not every piece of content is created equal. Something very important for your product, maybe your core flow, your golden path, all the UX copy over there - you really want it to be based on empathetic research, based on your users, based on a very holistic view of this flow and the challenges that people might potentially face. For things like that, I think it's very clear that you will still need that human touch.
Anna:
And recently, I think what I'm really excited about is that I see my company finding good niches, finding good places where AI can really benefit both business and users. We're diving into different content types and we're exploring new opportunities to see how we can create more personalized content, more engaging content where it's appropriate, where we know that it's not going to fail us and it's not going to harm the brand in any way.
Anna:
That ties back to quality standards. Of course, you cannot just have a large language model writing everything for you and having it all shipped without any quality control or any kind of involvement from professionals. Yeah. We've been building a system where we learn and try AI-generated content in different areas. And at the same time, we're building standards to make sure that it meets our customers' expectations, that it doesn't contain any inaccurate or false information, and that it surely aligns with our brand overall.
Larry:
Nice. And you've said that there's so much involved in it. And so it seems like developing standards is the prerequisite. You're like, "Okay, here's the threshold we have to meet in order to share this content." Can you talk a little bit about how... You mentioned some of the criteria - that it's got to abide by the voice and tone of the organization, be accurate, all those things. Are there parts of that where AI can support enforcing your standards, I guess?
Anna:
Yeah, absolutely. But in that case, again, it requires more input from content designers and maybe even multilingual content designers, as we work with many different countries and different cultures. It's important that you include your content professionals in the whole process of prompt design and scripting, and make sure that you really use people's expertise so that we can constantly build up and improve from iteration to iteration.
Larry:
Yeah. And I can only imagine the scale of localization that you must do because you're a global company. You serve pretty much every country on the planet. Or do you constrain that in any way? Are you down to 50 or 100 languages?
Anna:
We serve over 200 countries and regions currently in 16 languages.
Larry:
Holy cow. And so that, again, that's one of those things - machine translation is probably one of the oldest forms of, if not AI, at least automation in content. But can AI help with improving those machine translations and just making the localization person's job easier?
Anna:
Fantastic question. Fantastic question. I think what we're talking about here is localization at scale. When you have, for example, multiple merchants on a shopping platform, or maybe you have multiple hosts on your apartment-sharing platform, you need to make sure that the content that those customers publish on the platform is actually attractive and actually interesting to the people who are going to use the services or shop with those merchants.
Anna:
It's very important to empower people to create better content with AI, as long as you have this clear standard and you can make sure that your AI generates quality stuff. Patrick Stanford had this very good presentation recently on content design 3.0, talking about the impact of AI on content design overall. And one of the principles that he brought up that really resonated with me is the democratization of content creation.
That more people will have access to the tools to create content. And if we can guide them, if we can provide them with tools that generate better content - content that is better for their customers, better for their business - then it's really a win-win situation. That's also one of the areas we've been working on in order to improve the information that comes from AliExpress sellers.
Larry:
That makes me wonder, what percentage of the business that you all do is third-party merchants who are doing that kind of thing? Like people who maybe don't have professional copywriters or just who content design is not their forte versus how much of it's AliExpress. It must be a huge amount of third-party sellers on your platform.
Anna:
Yes, yes, absolutely. I think on a daily basis, most of the content that people browse on our platform, or any kind of platform I believe, is coming from those merchants. Yeah. Most of the things that they see are not coming from my team. What my team creates is just a very small chunk, very small chunk,

Jul 3, 2024 • 34min
Duane Forrester: Evolving SEO Strategy for the Generative-AI World – Episode 32
Duane Forrester
SEO has always been difficult, but generative AI takes things to an entirely new level.
Duane Forrester has been immersed in the search world for more than 20 years, including stints as the Product Manager for the Bing Webmaster Program and Vice President of Industry Insights at Yext, where he developed company AI strategy. He also helped launch the schema.org structured-data standard.
Duane offers plenty of AI-specific advice about how to navigate the new search landscape. But he also says that the foundations of good SEO are still grounded in timeless digital best practices: understanding your customers' needs and intentions and consistently giving them good content and helpful user experiences.
We talked about:
his long history as a search-industry expert and leader
his high-level take on the current state of AI
the true benefits of AI for content and how they relate to SEO
the title of his content-and-genAI cookbook: "Common Sense"
the importance of understanding the kinds of content that are resonating with your customers
an interesting AI-driven SEO-localization case study that was presented at PubCon last year that demonstrates the power of understanding user intent
an overview of the knowledge graph tech that underpins the search infrastructure at tech companies and big enterprises
his prediction that the future of search will be knowledge graph to knowledge graph conversations between companies and search engines
the rapidly evolving new world of SEO and the imperative for businesses to leverage AI to keep up with the increasing need to scale SEO operations
the enduring importance of providing a good user experience at the end of a search flow
the importance of delivering content in video format into a search landscape increasingly driven by social media
new search behaviors created by Google's Circle Search and AR tech like Meta's Ray-Ban glasses
his observation that search is infinitely more complex than most SEOs can imagine
the secret to search success: attracting attention from consumers, by deeply understanding their behaviors and intentions
his prediction that Apple will launch an AI-powered Siri in September that will thrust ChatGPT into the mainstream
Duane's bio
Duane Forrester is a distinguished figure in the search industry, with a career that spans digital marketing, authorship, and leadership roles at prominent companies such as Microsoft Bing, Bruce Clay Inc. and Yext. His expertise in digital marketing is complemented by a strong understanding of AI/ML, consumer behavior and customer experience, making him a well-rounded and sought-after professional in the field.
During his tenure at Microsoft, Duane was instrumental in the development and launch of Bing Webmaster Tools and Schema.org, focusing on the needs of webmasters and digital marketers. His deep knowledge of search engines and user behavior contributed to Bing's growth and success.
Beyond his work at Microsoft and Bing, Duane has showcased his knowledge as a prolific author in the digital marketing sphere. He has written for most industry publications and his two books, "How to Make Money with Your Blog" and "Turn Clicks into Customers," have provided invaluable insights and guidance to numerous businesses navigating the competitive online landscape.
Today, he continues to share his extensive knowledge of digital marketing, AI, and customer experience, shaping the future of the search industry and empowering businesses to thrive in the digital era.
Connect with Duane online
LinkedIn
Facebook
Threads
Twitter
Duane's books
How to Make Money with Your Blog: The Ultimate Reference Guide for Building, Optimizing, and Monetizing Your Blog
Turn Clicks into Customers - How to deliver conversions across all online marketing activities
Video
Here’s the video version of our conversation:
https://youtu.be/OjCH0b3isrs
Podcast intro transcript
This is the Content and AI podcast, episode number 32. For almost as long as people have been building websites, SEO practitioners have tried to get their content to the top of the search results. Search has always been a rapidly evolving field, but generative AI takes change to a whole new level. Duane Forrester has been immersed in the search world for more than 20 years. He offers this timeless advice for coping with the new AI and search landscape: understand your customer's intentions, and serve them with good content and helpful experiences.
Interview transcript
Larry:
Hi everyone. Welcome to episode number 32 of the Content and AI podcast. I am really delighted today to welcome to the show Duane Forrester. If you work in anything adjacent to search, you know Duane. Years ago, he was the Bing Webmaster Tools, I guess, advocate or program manager. He helped launch Schema.org, that ontology, that schema, that you're all working towards when you try to promote your content online. He worked with Bruce Clay for a while, the search legend, and for the last seven or eight years he's been a VP of industry insights at Yext. So welcome Duane. Tell the folks a little bit more about what you're up to these days.
Duane:
Well, Larry, I think right now I'm hanging out with you, going to create content. I'm not an AI, this is really me. I am obviously continuing my career, moving in a new direction, excited about the opportunities that AI is bringing, looking at different areas, seeing areas that probably need investment by the leading companies, but also we don't know what they're actually working towards. So maybe they have a plan, maybe they don't. I don't know. I'm going to knock on some doors and see if I can open some eyes.
Larry:
Nice. So you're excited about AI and you are as well-informed as anybody. In search, I think if you're going to publish content online or share content online, you want it to be discovered and therefore a lot of people in the content world follow search. But tell me why you're so excited about AI. What do you see, especially for content practitioners, what are some of the opportunities you're seeing now?
Duane:
Okay, this is going to be a two-parter, Larry, because I don't think we can talk about this without touching on the topics that are negatively related to this. The idea of content theft and things like that, I think it's important to address those. I can't say that that's wrong or right. I will say that I understand where that perspective comes from. I believe that we have to be diligent, watch for it, manage it, that kind of idea. I do believe, however, that this technology... Look, you've seen this, everyone's seen this over the last say, two to three years. Every product and service has had the letters A and I appended to it. And it was a way to attract investment, to attract attention, to get PR and all of that. Whether the product or service actually did anything with AI was trivial. It didn't matter.
Duane:
It was just like putting the recycling symbol on something. "Oh look, we're earth conscious. Look at us". And it's like, well, okay. But practically speaking, I'm a big fan. I love the efficiencies that these systems... You and I were talking in our prep and I said, "Hey, I have an idea for a business". And so what did I do? I turned to ChatGPT and I said, "Hey, can you create an outline for what a business like this would look like in the state of California and what you have to do to start that business?" And then I made all kinds of noises like, wow, that's a lot of work and whatever else, because the amount of information that it gives you is extraordinary. It is in many ways, and I'm thinking ChatGPT as I talk through this stuff, but you could be thinking Claude, you could be thinking Mistral, you could be thinking Copilot, you could be thinking AI Overview from Google.
Duane:
All of these different things are capable of this in their own way. Perplexity and so on, Claude, all of these systems are capable of being that support person, that support mechanism that you wish you had. The person you can ask the dumb question of and they won't say, "Wow, what an idiot", they'll just go get you all the information and they tailor it to the level of the question. So the dumber your question, the more detailed the information, the more intelligent, more developed. And I mean, think about it, we're talking the concept of prompt engineering here. You give it more, it gets crisper, you give it less, it's a little fuzzier and it covers more ground. So I think there's a lot to that. When it comes to content, this is a force multiplier. This is muscle that you don't have. If you think this is a silver bullet, however, you are going to be sadly mistaken.
Duane:
And the answer to why am I not doing better in search is because your approach to using AI is flawed. And the bottom line is, it's great for ideation, it's great for rough drafts, but you still need a human with subject matter expertise to go through that content to make certain that content is accurate, factual, on point, has the right tone of voice. All of that matters. And that's huge.
Duane:
I think AI, the current systems that we have, the generative AI systems that we're all familiar with, I think they do a really good job of taking a lot of the heavy lifting out of a number of things. And if you can access tools that will look at large volumes of data, so they will take a look at your log files and they will pull out things that are related that you would never see, that can be very insightful and very useful.
Duane:
And if people are building tools that do those things for you, those can be very useful tools and a wise investment of a monthly subscription cost. I think we're at the very beginning of this, the very early stages of it, but I want everybody to think back to where we were, I don't know, I'll put a number of 25, 27 years ago.

Jun 25, 2024 • 36min
Leah Krauss: Responsible AI and Content Design at Microsoft – Episode 31
Leah Krauss
New AI products like Microsoft's Copilot can be powerful productivity enhancers, but if designers aren't careful they can inadvertently introduce into the product the bias and other hazards that can come with large language models.
As a content designer working on Microsoft's Copilot for Sales product, Leah Krauss helps her colleagues understand and follow the responsible-AI principles that the company has developed.
Leah's advocacy helps her design and product teams create a product that balances the need for transparency about the use of AI with the prerogative to keep customers in flow as they use the product.
We talked about:
her work as a content designer on Copilot for Sales at Microsoft and her advocacy there for responsible AI
how she collaborates with her data science team, which had established a relationship with the content team even before Copilot on other products
the evolution of their AI product-development process
how their design system supports the implementation of responsible AI
the six principles that guide responsible AI at Microsoft:
fairness
reliability and safety
privacy and security
inclusiveness
transparency
accountability
how she advocates for responsible AI on the Copilot for Sales product team
the balance between keeping customers in their flow and being transparent about AI features
the concept of the "human in the loop" and how they apply it in the Copilot for Sales product
the importance in AI product design of always being aware of edge cases and possible misuses of the product
her encouragement to anyone working on AI products to stay curious, ask a lot of questions, and to bear in mind the importance and relevance of our language expertise
Leah's bio
Leah Krauss is a senior UX content designer at Microsoft. She works on Copilot for Sales, Microsoft's AI software for salespeople, where she also collaborates closely with the data science team. She champions responsible AI to anyone and everyone who'll listen, including inside Microsoft and at various UX conferences. Outside of work, you can usually find her reading, or spending time outside with her family - hiking, exploring cities, and hanging out on the beach.
Connect with Leah online
LinkedIn
Video
Here’s the video version of our conversation:
https://youtu.be/VItdSUgzkZE
Podcast intro transcript
This is the Content and AI podcast, episode number 31. The introduction of AI tools like Microsoft's Copilot creates new opportunities for content designers. But as with any innovation, the new technology can be a two-edged sword. For every customer workflow that is streamlined there may also be an opportunity for bias or hazard to get into the product. As a content designer and champion for responsible AI, Leah Krauss helps her colleagues at Microsoft understand and apply responsible AI principles in their product design work.
Interview transcript
Larry:
Hi everyone. Welcome to episode number 31 of the Content and AI podcast. I'm really delighted today to welcome to the show Leah Krauss. Leah is a senior UX content designer at Microsoft where she works on Copilot, which many of you may have heard of. So welcome Leah. Tell the folks a little bit more about what you're up to these days.
Leah:
Hi Larry. It's so nice to be here. So yeah, as you mentioned, I'm working on Copilot for Sales, which is a flavor of Microsoft Copilot, and that's been really exciting to be in on kind of the ground floor of AI at Microsoft. And responsible AI, which is what we're going to talk about today, is one of my most favorite topics to talk about. I've done some conference talks about it and my coworkers are really tired of hearing me go on about it. I actually serve, no, that's not true. It's only half true. I serve as actually a responsible AI champion, one of the responsible AI champions on my team. So it's sort of my thing and I think it's so exciting the moment we're at here and how big a part content designers can play in it.
Larry:
Yeah, I think when you're working with language models, you'd think the content people would have a leg up on some of it. But that's really - the first thing I want to follow up on is that Copilot, I think it's sort of like multiple products then. It's a suite - well, I guess Microsoft knows about suites with the Office suite - so there's integrations of Copilot with each of the Office elements, Word and Excel and all that. And then there's also specific tools like Copilot for Sales. How big is that little budding empire at Microsoft?
Leah:
Well, so, yeah, there's one Copilot and then the different flavors depending on the user's needs. So as you said, if people are using Office, they'll see Copilot in Word and they'll see it in PowerPoint and Outlook of course. And if they have more specific needs, like a salesperson, then they can use our more specific flavor, which has the kind of email summary that a seller might need. Did my customer mention the budget? Or things like that. And it can also pull from other sources so that the seller can have everything they need right in that one place. Copilot is a big thing at Microsoft, as you know, and I think it's only going to get bigger in the next couple of years.
Larry:
Yeah. As you're talking about it too, I'm reminded of how this thing is coming together. It's like there's this one model - I assume it's based on one of the GPTs from OpenAI, given the Microsoft relationship there. But it sounds like then that for most of the applications, each of these flavors of Copilot, a lot of your work is around fine-tuning the model for that specific task?
Leah:
Absolutely. I work really closely with our data science team and the first thing you sort of have to, there's the GPT prompt, but then we also write our own prompts on top of that. So we tell the GPT that this is a selling audience, for instance. And I'm really involved with working with the data science team to define what a good output looks like because while they're the experts in the model and how to create an output, I'm the expert in what makes it good and what makes it human and what makes it useful to a seller and scannable and valuable and things like that. So the great thing about this project has been that we get started early on working together. As we content designers know, sometimes content people are brought in too late. And that is definitely a danger here too, like for people who are listening, you can always do something with the data science team, but if you're there from the beginning and you're talking about the prompt together, then you can move forward and really have an impact.
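Leah's "prompts on top of the prompt" is a general pattern: a base instruction layer, plus a domain layer written with content designers that says who the audience is and what a good output looks like. The following Python sketch shows that layering generically; it is not Microsoft's Copilot implementation, and every string in it is invented for illustration.

# Generic sketch of layered prompting: a base task prompt plus a domain layer
# and jointly defined "what good looks like" criteria. Purely illustrative;
# none of this reflects Copilot for Sales internals.
BASE_LAYER = (
    "Summarize the email thread below. Be accurate and concise, and never "
    "invent details that are not in the thread."
)

SALES_LAYER = (
    "The reader is a salesperson preparing a follow-up. Surface budget mentions, "
    "decision makers, objections, and agreed next steps. Write a scannable "
    "summary: a one-sentence overview, then short bullets."
)

# Criteria that content design and data science agree on up front, so humans
# reviewing sample outputs have something concrete to score against.
QUALITY_CRITERIA = [
    "Every claim is traceable to the source thread",
    "Next steps and their owners are explicit",
    "Scannable: one-line overview plus short bullets",
    "No invented names, numbers, or dates",
]

def build_system_prompt() -> str:
    """Compose the layered system prompt that would accompany the email thread."""
    return f"{BASE_LAYER}\n\n{SALES_LAYER}"

if __name__ == "__main__":
    print(build_system_prompt())
    print("\nReview checklist:")
    for criterion in QUALITY_CRITERIA:
        print(f"- {criterion}")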
Larry:
That's really interesting because there's all these new collaborations that are emerging along with these AI tools. So you're working, is it mostly data scientists? I've talked to other folks who've worked with the machine learning engineers and other new collaborators. What does the team you're working with look like these days?
Leah:
So at Microsoft it's called data science. At other places it may be called machine learning, but it's the same group of people.
Larry:
Yeah.
Leah:
There's the people who care about the algorithms, basically we can call them. And then we have the people who care about the words, which by the way is not only content designers, it's also product managers and interaction designers, but content is really leading the way.
Larry:
Yeah, that's fantastic to hear. And what you just said too is it sounds like there's more opportunities or is there more opportunity or have you just made it happen to get in earlier or do data scientists see the need to involve word people earlier on?
Leah:
Well, we've been working on AI features and machine learning features even before there was Copilot, and even before I joined the team actually. So my manager, who has been on the team longer than I have, was working with the data scientists when we had a feature called Conversation Intelligence. And what that did was, when a seller would record a meeting with their customer, Conversation Intelligence would analyze it afterward and give sort of a recap with the main action items and a bulleted list of the highlights of the meeting. My manager, whose name is Erga Herzog, was the one who, in the sales organization, really built that relationship with the data scientists. And also I work with a lot of other content designers too. So basically I was lucky to come in about two years ago with this really strong base that was already very far along.
Leah:
We had a good relationship with the data science team, and that conversation was already starting. So what we're doing now is we're sort of trying to formalize that process, because sometimes it relies on the individual content designer or the individual data scientist to decide when we start talking about a certain feature. And we'd rather have it be more formalized into a process: okay, at this point we start talking, and then at this point we start looking at sample outputs, and maybe we first decide together, along with the PMs and the rest of the product squad and the leadership of course, what we want out of this particular AI feature. So that's something that we're working on right now, which is really exciting.
Larry:
That is exciting because this is also new and at some point you have to, like it must be, I can only imagine the pace of work there, but so being far enough into it to kind of step back and go like, oh, hey, this is how sort of the routine way that we, or not routine, but the common way that we do things. Is that sort of, do you think that's common across the other flavors of Copilot or, like because you mentioned you were talking to your peers as well as your immediate colleagues. Is there sort of patterns emerging around how those, like you just said,

Jun 12, 2024 • 0sec
Jack Molisani: The Impact of AI on Technical Communication – Episode 30
Jack Molisani
As the founder of the long-standing LavaCon conference and the principal at a technical content staffing agency, Jack Molisani gets a deeply informed view of the world of technical communication.
While he sees the opportunities that generative AI presents, he raises several concerns for technical content strategy practitioners, among them the inaccuracy of generative AI content and the inability of AI tools to comprehend subtle human communication clues.
We talked about:
his work as the Executive Director of the LavaCon Content Strategy Conference and at ProSpring Staffing, a technical communication job agency
how a change in the LinkedIn messaging interface inspired him to spend more time at in-person events
his observation that many product features that are promoted as "AI" are actually capabilities that have been around for years
his concerns about the ability to identify and vet the sources that AI tools cite
his assessment of the job prospects for technical communicators in 2024
his exasperation with the decline in quality of applicant tracking systems (ATS)
some of the tasks in technical communication that AI can help with
the inability of AI tools to account for subtle human communication dynamics like facial expressions
how using AI writing tools can misrepresent your own writing ability
how a speed networking event that troubled introverts at a prior LavaCon led to the introduction of calming therapy animals at the event, including a therapy llama
Jack's bio
Jack Molisani is the President of ProSpring Staffing, an employment agency specializing in content professionals (both contract and perm).
He's the author of Be The Captain of Your Career: A New Approach to Career Planning and Advancement, which hit #5 on Amazon's Career and Resume Best Seller list. The first printing is sold out. Watch for a soon-to-be-released second edition.
Jack also produces The LavaCon Conference on Content Strategy, which contains an AI track. The 2024 conference is 27–30 October in Portland, Oregon. Register using referral code LSPODCAST for $200 off in-person tuition.
Connect with Jack online
LinkedIn
LavaCon content strategy conference
Prospring Staffing
Video
Here’s the video version of our conversation:
https://youtu.be/RsgY89El1Aw
Podcast intro transcript
This is the Content and AI podcast, episode number 30. The rise of generative AI affects every type of content practice, including the venerable institution of technical communication. Jack Molisani runs both a tech comms staffing agency and the annual LavaCon content strategy conference, which he's organized for more than 20 years. Jack brings a deeply informed perspective to the conversation around the introduction of AI into content practice, especially its impact on employment prospects for technical communicators.
Interview transcript
Larry:
Hi, everyone. Welcome to episode number 30 of the Content and AI Podcast. I'm really excited today to welcome to the show Jack Molisani. Jack is a legend in the technical communication and technical content strategy world. He's the executive director of the LavaCon Content Strategy Conference. He also runs a staffing agency called ProSpring Staffing. Welcome, Jack. Tell the folks a little bit more about what you're up to these days.
Jack:
Wow, okay. As you said, I'm running around two spheres. One is producing the LavaCon Conference in content strategy. The other one is running a staffing agency for technical writers and other content professionals. Although we also have a division that does engineers, and there's some crossover there.
What's interesting, and it's almost a side note but since you asked what I've been up to, is I've discovered that it's almost impossible for me to land new staffing clients over the internet anymore.
Larry:
Interesting. What's going on there?
Jack:
It used to be that someone would post a job on LinkedIn, and I'd wait two weeks. If it's still there I said, "Hey, could you use some help finding someone?" And they'll tell me yes or no.
Jack:
Well, a couple things happened. One is LinkedIn bifurcated your message inbox. It now has two labels, focused and other. It didn't announce this. Suddenly, all my responses were going to other and I thought I had an empty inbox. Once I discovered this other tab, I found people who had been responding to me for two years saying, "Yes, we need help."
Larry:
Oh, God.
Jack:
By then, they don't need help anymore. Two, LinkedIn opened an API so people could use tools to email thousands of people at a time. Suddenly, mine and every other inbox is just filled with spam, and you're trying to weed through all of that to find the real communication piece. And then they added a third option on their reply screen: pre-populated answers that say, "Thanks, we're not interested," "Thanks, call me," "Thanks, but not interested," or delete without responding.
Larry:
Oh.
Jack:
Now managers just go out and delete, delete, delete, delete, delete, delete. Not even saying, "No, I'm good," or, "Yes, please."
Jack:
I have discovered that I'm going old school and meeting people in-person. I've been going to trade shows. I just got back from a software engineering trade show yesterday. It's going to come back to that when we talk about AI in a second. And a manufacturing trade show two weeks ago. Two weeks from now, I'm going to a semiconductor trade show, just to go around to talk to people in-person. Going, "Hi, I'm Jack Molisani. Here's my card. If you don't need me now, maybe you'll need me in the future." I'm guessing people who have their own technical writing services or are independent contractors are like that.
Jack:
The other thing I've seen is now, on LinkedIn, where people post a job, it now tells you how many people have applied for that job. So within a half-hour, it says 100 people applied for this job already. You go, "Really?" A friend of mine said, "No. What really that means is 100 people clicked on the job to read it. They didn't necessarily apply for it."
Larry:
Interesting.
Jack:
I just don't trust anything I read anymore.
Larry:
Yeah.
Jack:
We've come to that point. They said it was coming, it's here.
Larry:
The reason I wanted to have you on this podcast specifically is because this is the new one, about AI. When we first talked, we were talking about your journey into AI. But I'm going to just jump way ahead. I think my prediction is that one of the outcomes of this is going to be a return to human connection. Here you are, exhibiting it already, going out to conferences. Thanks for validating my prediction.
Larry:
I'm assuming you can't ignore AI in your line of work. Both just the technical communication part of it, the programming for LavaCon. I'm assuming it's invaded your life like everyone else. Is that a safe assumption?
Jack:
We do have a track. At last year's LavaCon, when everyone was talking about AI, it was the main theme of the conference. I observed a trend, if I may.
What was it? Four years ago, everyone going, "Chatbots! Chatbots. The future of tech comm is chatbots." Next year, crickets. Year after that, "Oh my God, VR, the Metaverse. Everyone's going to be in the Metaverse." Next year, crickets. Now we're going, "AI! AI! AI!" I'm going, "Hm."
Jack:
I don't think we're going to get quite to cricket level on AI, but I already know that, in my conference, that it's not the main focus this year. We have an AI track, yeah. Sure. Because you got to know what's coming, what's available, what you can and can't do with it. But we're going back to the basics, treating content as a business asset that you can use to reduce costs or generate revenue. Back to basics.
Larry:
Yeah. That came up. I just dropped an episode of the other podcast, Content Strategy Insights yesterday, with a woman at Albert Heijn, the big grocery chain here in the Netherlands. One of her big accomplishments there was getting the enterprise to view content as an asset. I said, "Wow. How did you do that?" I love that that's a focus of yours as well.
Larry:
But, tell me. You're obviously not a rah-rah person. Tell me how you see AI fitting into tech comms and tech content strategy. What do you think? There are some things that are proving to be useful to people, but I gather that you perceive a lot of hype as well.
Jack:
Yes. More of the latter, less of the former. Or it's just not quite here yet.
A couple stories on this. I can see a perfect application of AI in tech comm. Take a company like Boeing, who has 10 million pages of documentation in their content management system. Scan that whole dataset, find out how many of those pages are sufficiently similar that we can combine them, and reuse it, and maintain only one source. Brilliant use of AI.
Jack:
Or take your legacy documentation. If it's structured using headings, break each heading into a separate topic. Automatically add and populate the meta tags. I'm assuming your audience knows what meta tags are. Then repost that as chunked, individual content pieces. Brilliant use, I can see that.
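As a rough sketch of that second idea - splitting a heading-structured legacy document into topics and pre-populating meta tags - here is a minimal Python example. The naive keyword heuristic stands in for whatever an AI or taxonomy service would actually supply, and all names are made up for illustration.

# Rough sketch: break a Markdown-style document at level-2 headings and attach
# a title plus simple keyword meta tags to each topic. The keyword heuristic is
# a placeholder for a smarter AI- or taxonomy-driven tagger.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "for", "is", "are", "with"}

def split_into_topics(document: str) -> list[dict]:
    """Return one dict per heading, each with a title, keyword tags, and body."""
    topics = []
    chunks = re.split(r"^## +(.+)$", document, flags=re.MULTILINE)
    # re.split with a capturing group yields [preamble, title1, body1, title2, body2, ...]
    for title, body in zip(chunks[1::2], chunks[2::2]):
        words = [w.lower() for w in re.findall(r"[A-Za-z]{4,}", body)]
        counts = Counter(w for w in words if w not in STOPWORDS)
        keywords = [word for word, _ in counts.most_common(5)]
        topics.append({"title": title.strip(), "keywords": keywords, "body": body.strip()})
    return topics

if __name__ == "__main__":
    sample = (
        "## Removing the battery\n"
        "Disconnect power before removing the battery pack.\n\n"
        "## Installing the cover\n"
        "Align the cover and tighten both screws.\n"
    )
    for topic in split_into_topics(sample):
        print(topic["title"], "->", topic["keywords"])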
Jack:
What I'm seeing now, however, is every single tool vendor in every single industry or trade show I've gone to is like, "Our tool is AI enabled." One of them was a content management system. I was talking with one of their people. I said, "Hey, that's great. Show me something in your tool that's AI." She goes, "If you create a new topic, we will pre-populate the XML for you." I said, "Hmm. First of all, that's called a wizard and we've had them for decades. What about your tool is artificial intelligence?" She couldn't tell me. She said, "Oh, let me get back to the developer." True story. Absolutely true story.
Larry:
Interesting.
Jack:
I think what's happening is a lot of these tools, they want to be seen as up-to-date and, "We're just as AI as they are.


