
Sarah O’Keefe: AI in Technical Communication and Content Strategy – Episode 4
Content + AI
Episode notes
Sarah O'Keefe
The arrival of AI affects every area and aspect of content practice.
In the technical documentation field, Sarah O'Keefe sees three immediate impacts on the work she does for her clients:
how AI agents can support technical documentation workflows,
the ability to create content with generative AI, and
the ways that AI is changing the delivery of technical content.
And wherever she looks in the content and AI landscape, she sees the need for governance guardrails and strategic thinking.
We talked about:
her work at Scriptorium, which focuses on scalable, efficient technical documentation
her take on the current impact of AI on technical content
the unique concerns about generative AI that arise in the technical communication world
how chat-based user interfaces will change the delivery of technical content
how users will always hack systems to use them as they wish
the looming role of trust and reputation as important factors in online interactions
how techniques like RAG (Retrieval Augmented Generation) can help LLM-based applications deliver better results
the importance of thinking about the content life cycle as you assimilate and integrate AI into your practices and workflows
a very simple AI-risk-analysis heuristic
open questions - many of them complex and non-obvious - around copyright issues in the AI world
Sarah's bio
CEO Sarah O’Keefe founded Scriptorium Publishing to work at the intersection of content, technology, and publishing.
Today, she leads an organization known for expertise in solving business-critical content problems with a special focus on product and technical content.
Sarah identifies and assesses new trends and their effects on the industry. Her analysis is widely followed on Scriptorium’s blog and in other publications. As an experienced public speaker, she is in demand at conferences worldwide.
In 2016, MindTouch named her as an “unparalleled” content strategy influencer.
Sarah holds a BA from Duke University and is bilingual in English and German.
Connect with Sarah online
LinkedIn
info at scriptorium dot com
Video
Here’s the video version of our conversation:
https://www.youtube.com/watch?v=FOfdOSD8C1A
Podcast intro transcript
This is the Content and AI podcast, episode number 4. The arrival of generative AI, large language models, and other AI technologies obviously affects us all. In the world of technical documentation, Sarah O'Keefe sees three immediate impacts on the work she does for her clients: how AI agents can support technical documentation workflows, the ability to create content with generative AI, and the ways that AI is changing the delivery of technical content - and across them all, the need for guardrails and strategic thinking.
Interview transcript
Larry:
Hi, everyone. Welcome to episode number four of the Content + AI podcast. I'm really happy today to welcome to the show Sarah O'Keefe. Sarah is the CEO and founder at Scriptorium, which is a company that does technical communication and documentation stuff. Sarah, tell the folks a little bit more about your work there at Scriptorium.
Sarah:
We're interested in the question of how do you apply systems and technology to what we call enabling content, which is technical, product, learning, knowledge base, all of the content that enables you, having purchased a product or service, to actually successfully use that product or service. And we do a ton of work around content management systems, translation management systems, and basically helping companies scale their content operations into something that works. Right? Because typically, somebody shows up on our doorstep and says, "Well, we're doing it this way and we've been doing it this way forever, but this way isn't working anymore. We acquired a couple of companies. We got a lot bigger. We're doing more and more localization and we're just drowning in content, drowning in inefficient content processes. Please help us." That's our typical problem set.
Larry:
You're the cavalry riding in to save the day. That's great. But I think, like anybody in the content world these days, there's this new kid in town, the AI, especially the large language models that are kind of... They seem to be finding their way into every corner of the content world, and the reason I wanted to talk to you is you're probably the first person I thought of when I thought of technical content. What's going on with AI in the technical world? So, a pretty broad question, and we talked a little bit before we went on the air, but just to set the context: how is AI affecting your work?
Sarah:
First, I'll say that, I mean, people are trying all sorts of things and experimenting and working through what does this look like and what are we doing with it and how are we going to make it work for us? Fundamentally, I think this is going to end up being a pretty straightforward tool, and I compare it usually to a spellchecker. Right? I mean, it would not occur to anybody in this day to write content without using a spellchecker. It's just part of the groundwater. It's part of the fundamental tool set, that bag of tools that you're sitting around with when you actually go to create content. So, in many ways AI is going to be that spellchecker.
Sarah:
It's going to be, "Hey, can you write an abstract for me? I wrote the article, but I was told to do a summary and I don't feel like doing it, so can you write it for me? Show me all the places where I've used jargon in this article that I should not have," this sort of supporting tool set that identifies patterns, good or bad, and then helps me work through cleaning up those patterns. Or, "Hey, does this follow the same pattern as that other article I wrote? Are they consistent? Show me that." You had a great example about, "Can you rewrite this in the style of Dr. Seuss," which sounds super fun and possibly not totally productive, but I'm here for it. So, those types of things. There's this idea that the AI world in general can give you the ability to take your tools or to take a bunch of tools and apply them to what you're trying to do and make it better, faster, cheaper.
Sarah:
Now, separately from that tool set, we've got generative AI, and I think that the technical content world is looking at GenAI a little bit differently than most of the rest of the universe, especially if you're doing content that is compliance related, in other words, there's a regulatory body somewhere looking at it, and/or you have products where there are health and safety implications. So, I mean, the obvious example of this is medical devices. It is important to me and to you and to all of our friends and family that the instructions for how to use a medical device are in fact correct, because some very, very bad things could happen if we generate a bunch of instructions and they are wrong. In the worst case, it could injure or maim or kill somebody, and that seems bad.
Sarah:
Now, we're not talking here about AI with bad intent. We're just saying that if you plug a bunch of stuff into ChatGPT and say, "Generate instructions for using this product," there's a pretty good chance that it's going to generate some nonsense. And nonsense is okay for certain kinds of scenarios, but it is not okay when I'm trying to figure out how to configure a pacemaker. I mean, that's just bad.
Larry:
Yeah. Although, at the same time, I do want to say one thing right there. We were talking before we went on the air that one of the things, it's not like you'd rewrite it into solid Dr. Seuss, but you can sort of tailor communication. One of the beauties of the... You mostly work with DITA, this very structured content, and by having the core knowledge and information that you're working with there, you can do stuff with it. One of the examples you gave was this notion of a customer, an end user, looking at this documentation and going, "Well, that's not quite how I understand or how I learn. Can you give it to me a different way?", sort of the way we often prompt ChatGPT or something like that to rephrase things. Is that an existing use case now, or something that's coming soon in the tech comms world, or...
Sarah:
That's kind of the third use case, because we've got the AI tool support use case. We've got the "help me generate content" use case, which is, again, terrifying, but provided that you have enough guardrails around it and you have people checking what it's generating, it could be useful. But the third use case, particularly for chatbots and generative AI, is this universe of not authoring but rather delivery. So, as a consumer of the content, I might look at it and say, "Oh, I don't understand this content. It's too complicated," and I would ask ChatGPT to rewrite it for me, into, say, a seventh-grade level or into simpler English, or I could tell it to use simplified technical English only and explain this thing to me. So, there's a whole bunch of stuff I could do there.
Sarah:
Now, if I'm the producer of the information, the owner of the product who's producing all this tech comm content, then it's pretty likely that what I would actually prefer is for that content or that process to take place on my website, within my guardrails and sort of my controls. So, I deliver all this content, I put it in a data store of some sort, and then I wrap a generative AI, a chatbot, over the top of that, and I point it at only my approved content. But of course, what's pretty likely to happen is that your end user is going to access a webpage somewhere that you and I carefully produced and vetted and approved and et cetera, and they're just going to copy and paste it into ChatGPT and tell it to rewrite it. Right? You really can't stop that.
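What Sarah describes here is essentially the Retrieval Augmented Generation (RAG) pattern mentioned in the episode notes: keep the approved documentation in a data store, retrieve only from that store, and let the model rephrase rather than invent. A minimal sketch of that idea in Python, assuming a hypothetical call_llm function standing in for whatever model API you use, with a deliberately simple keyword-overlap retriever purely for illustration:

# Minimal sketch of a RAG-style chatbot pointed only at approved content.
# call_llm is a hypothetical stand-in for your model API; the keyword-overlap
# retriever is only for illustration, not a production search index.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: wire this up to the LLM provider of your choice."""
    raise NotImplementedError("Replace with a real model call.")

def retrieve(question: str, approved_docs: list[str], k: int = 2) -> list[str]:
    """Return the k approved documents that share the most words with the question."""
    q_words = set(question.lower().split())
    return sorted(
        approved_docs,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def answer(question: str, approved_docs: list[str]) -> str:
    """Ground the model in retrieved, approved content only."""
    context = "\n\n".join(retrieve(question, approved_docs))
    prompt = (
        "Answer using ONLY the approved documentation below. "
        "If the answer is not there, say you don't know.\n\n"
        f"Documentation:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

The guardrail is the point: the model only ever sees vetted content, so a rewrite into simpler English or a different reading level stays grounded in what the documentation team approved.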
Sarah:
So, I do think that not so much the large language models per se…