
Content + AI Lisa Welchman: Content, AI, and Digital Governance – Episode 29
Jun 6, 2024
00:00
Lisa Welchman
Over the past 25 years, Lisa Welchman has established and codified the field of digital governance.
With an enterprise consulting career that spans the emergence of the web, the arrival of social media, and the rise of mobile computing, she is uniquely positioned to help digital practitioners, managers, and executives understand and manage the governance issues that arise with the arrival of generative AI.
Lisa is the author of the leading book in her field, Managing Chaos: Digital Governance by Design.
We talked about:
her career in enterprise digital governance
her concern about the lack of transparency in the existing governance practices at AI companies
an analogy she sees between WYSIWYG and AI tools
the contrast between mature governance models, like those the UX field has developed, and newer digital practices like the adoption of GPTs
governance lessons that new tech implementers can always learn from prior tech eras
her call to action for technical experts to alert executives to possible harms in the adoption of new technology
the elements of her digital governance framework:
understanding team composition and the organizational landscape in which digital practitioners operate
having a strategic intent
articulating governance policies
establishing practice standards
the range of digital makers she gets to interact with in her work
the importance of accounting for the total business and organizational environment when jockeying for a seat at the table
the responsibility of experienced digital makers and managers to call out potentially troublesome patterns in the adoption of new tech
the importance for digital practitioners of staying aware of how much agency they have right now
Lisa's bio
Lisa Welchman is a digital governance trailblazer with over two decades of experience. She's passionate about helping organizations manage their digital presence effectively and sustainably. Known for her practical approach, Lisa has worked with a variety of clients, from global corporations to non-profits. She’s also a popular speaker and the author of "Managing Chaos: Digital Governance by Design." A mentor and educator at heart, Lisa is dedicated to helping leaders make the digital world a safer and kinder place for everyone.
Connect with Lisa online
LinkedIn
Video
Here’s the video version of our conversation:
https://youtu.be/-UIj0YWxLaI
Podcast intro transcript
This is the Content and AI podcast, episode number 29. Whenever new technology like generative AI emerges, organizations have to deal with both the opportunities and the challenges that arrive with it. It often falls to practitioners like content strategists and designers to alert the C-suite of potential governance concerns that arise with the adoption of new tech. Lisa Welchman sees in this situation an opportunity for digital makers to take the lead on educating their organizations about these important issues.
Interview transcript
Larry:
Hi everyone. Welcome to episode number 29 of the Content + AI Podcast. I am really happy today to welcome to the show Lisa Welchman. Lisa is a true legend in the field of digital governance. She pretty much established the discipline, I think it's safe to say, over the past 25 years. She wrote what I would argue is the leading book on it, Managing Chaos: Digital Governance by Design. But welcome, Lisa. The reason I wanted to talk to you this week is that we're right in the middle of Rosenfeld Media's conference on design and AI, and it seems like AI is an area that's really ripe for a conversation about governance. Does that make sense?
Lisa:
Yeah, it does. I will contextualize myself a little bit by saying that digital governance is a really broad term, and my focus is really around enterprise digital governance: how digital governance manifests inside of an organization that's making and putting things online. And there are a lot of other kinds of governance out there in the internet and web space that are equally interesting, but not where I specialize.
Larry:
That idea of enterprise. And what's interesting about that is that the big companies that are doing this stuff, that are most prominent in the field, it's all Google and Anthropic and Microsoft and OpenAI and huge organizations like that. Do you have any feel for what governance is happening inside those orgs?
Lisa:
I don't actually have a feel for that. I think the types of organizations that you describe have, in some capacity, mature governance inside of the organization because of the nature of the types of products and services that they offer online. That's just from the evidence. Now, whether or not we like the decisions that are being made within that governing framework they have, that's an entirely different concern. What I am concerned about is those larger organizations married with the newness of this version of AI. It's like the AI iceberg is finally poking its head out of the water and we're paying attention to it now, and there's a lot of stuff underwater that these organizations have been doing for years that we're not really aware of. I'm a little nervous about the lack of transparency around the preamble governance that may have happened.
Lisa:
But I'm not concerned that they aren't governing. For many enterprise organizations, B2Bs who are coming into this technology afresh just as it's emerging to them, I'm more concerned, because they're more likely to just take ChatGPT. And I know it's not a great analogy, but ChatGPT feels to me like a WYSIWYG AI tool. You don't really need to know what you're doing. It's like those of us back in the day who learned HTML, we actually had to learn HTML to make things work. And then you got these what-you-see-is-what-you-get tools, these WYSIWYG tools, coming out of the framework, and anybody could code a page. It made really sloppy, nasty code on the backend, but it didn't matter because the browser served it up.
Lisa:
And I see some of these new tools, particularly around generative AI as like WYSIWYG tools for AI. And it makes me nervous because not a lot of people are asking "What's in that black box and what's happening and who made the decisions about it?" Which is really what governance is about, "Who was considered, what are the policies, what's the value system around making this technology?" And I don't see a lot of people asking that in the enterprise.
Larry:
I think a couple of things about what you just said. One, the notion that these things are black boxes: even the engineers who build the LLMs often say they can't explain what's going on underneath them. But you contrast that with, I spend a lot of time with conversation designers and other UX designers, and in that world it's so clear that transparency and explainability are crucial to consumer acceptance, adoption, and safety. It seems like reconciling that should be on the governance agenda someplace. Is that reconciliation of intent with customer expectations something that governance can help with?
Lisa:
It is, but I would also argue that you're comparing apples to oranges. One of the things that I like to talk about a lot, and that a lot of people talk about, are maturity models. And the maturity model for a new technology, or a new anything, is that it comes out of the chute hot and heavy. People don't really know what they're doing with it. They try new things. There's a lot of craziness and organic growth. We make a big mess, a lot of harm and lack of safety come into play, and somebody screams and says, "We need to govern this," or, "We need to write policy around this." Or, if you're more on the operational side, "We need to write standards around this. We need to become more transparent." All of these things happen. And then there's some struggle, and then things mature, and then you have a more sustainable model.
Lisa:
You're comparing a UX model that's fairly mature with one that's just coming out of the gate. And it's not entirely fair, because UX has not always been that way. Experience development, the development of an online experience, has been quite chaotic, and a lot of the harm that we see has been a result of UX not thinking through problems early on, or implementing things without understanding the foundational functionality of what they're asking for, not understanding that certain types of online interactions will create certain data pools that can be exploited by the organization. That all happened in the UX world. It didn't come out clean.
Larry:
I want to follow up. There are two things about that. One, you alluded a minute ago to the AI tip of the iceberg. AI has been around forever, since the seventies and eighties, and it's just now, with the arrival of the GPTs and in particular ChatGPT 3.5 almost a year and a half ago now, that people perceive the start of this to be. And that's where it does lag far behind UX practice, but in fact it's been around for a while. Is this a common pattern, I guess, in how we see new technology?
Lisa:
Yeah, it's just how it flows. This is just how things work. There's a presentation that I give about the history of automobile safety. Things come out of the gate very hard, usually in the US, but in other parts of the world too. People are trying to make money, trying to figure out how to exploit this new technology that's become mature enough that it can actually be used to make money and to build product. We all know there's a huge preamble to every technology where people fail, and fail hard, sometimes for 50 to a hundred years or more. They're failing, failing, failing. Finally, somebody comes up with something that's actually viable, and it comes into the marketplace, and then people think, "It's new." And of course it's not new,
