
Content + AI Elizabeth Beasley: A Financial-Industry “Risk Nerd” Navigates AI Adoption – Episode 21
Mar 11, 2024
33:18
Elizabeth Beasley
As AI storms into content design and operations, Elizabeth Beasley is taking a patient, deliberate approach to adopting it in her practice.
Elizabeth works on security and identity products at Intuit, so the experiences she designs have to be reliable and trustworthy, hence her identification as a "risk nerd."
She has also navigated big business changes before, like the shift from cable broadcasting to video streaming, and saw in those transitions the benefits of being a cautious and curious adopter of new technology.
We talked about:
her role as a content designer working on security, identity, and fraud at Intuit
how her background in media and technology has made her a slower adopter of new technologies like AI
how being a "risk nerd" informs her concern around reliability and trustworthiness in AI
how her cautious approach to AI adoption may actually put her in a better position to develop trustworthy AI experiences
the new collaborators she is working with as AI arrives on the scene
her work on an industry standards body around new security technology
the utility of having troops back at the fort to keep the old operations running as your org explores new tech like gen AI
how her interest in history informs her approach to change
the inherent risks in being first to adopt new technologies
her "peaceful Wednesday" practice for preventing and coping with stress and burnout
how times of rapid change like this can prompt useful career reflections
the recent evolution of her thinking on the "seat at the table" issue
Elizabeth's bio
Elizabeth Beasley is a Senior Content Designer on Intuit’s Identity team. She approaches life with a healthy balance of optimism and skepticism. Because everything is going to be okay, maybe.
She used to have hobbies like performing improv comedy and ballroom dancing. Now she enjoys watching other people doing their hobbies on YouTube.
Connect with Elizabeth online
LinkedIn
Video
Here’s the video version of our conversation:
https://youtu.be/Ny2l_mZgLXQ
Podcast intro transcript
This is the Content and AI podcast, episode number 21. It's easy to get caught up in the frenetic pace of generative AI technology adoption - unless you have already created rituals to help slow your life down. Elizabeth Beasley created her "peaceful Wednesday" ritual ten years ago to bring some calm to her increasingly fast-paced work life. That practice is serving her well now as she and her colleagues at Intuit develop their approach to incorporating AI tools while continuing to deliver trustworthy experiences.
Interview transcript
Larry:
Hi, everyone. Welcome to episode number 21 of the Content and AI podcast. I am really happy today to welcome to the show Elizabeth Beasley. Elizabeth is a Senior Content Designer at Intuit, the big financial software company. Welcome, Elizabeth. Tell the folks a little bit more about what you're up to these days.
Elizabeth:
Hey, it's so fun to be here. Yes, I'm at Intuit. Financial services is my life lately, and I've worked in a fun space. I think it's fun: security, identity. I always describe it to my mom or my friends like, I do the part where you create your account, you sign back into your account, you manage your account, and I make that easy for you with content design. They still don't quite understand that, but that's the space I work in, and I really, surprisingly, enjoy it. I worked in banking previously and got into security, and now I'm sort of obsessed with security and identity and fraud. It's a fun, exciting space to work in, and I also love it because everyone uses it, so it's very relatable and it affects many, many people. So it has a lot of impact.
Larry:
You can't do anything until you get past that experience that you're designing.
Elizabeth:
Yeah.
Larry:
Then you're in and then you can start doing stuff. But you sort of established your cred. You're not like some kind of a Luddite about technology. You clearly, you're deep in it every day doing that, and yet the reason we connected and the reason I wanted to have you on the show is that we connected, I think on LinkedIn, I can't remember exactly how it started, but you're sort of like a slower adopter of AI technologies. And I was like, perfect, I want to get her on the show because every one of the 20 episodes before this were all, and I'm as into it as them, just deep into the technophilia and all the new work things around AI and you're more like, yeah, it's great and you're studying it, you're staying on top of it, but you're not just diving in with both feet, fangirl about it. Tell me a little bit about how that perspective arose.
Elizabeth:
Yeah, sometimes I feel like I'm behind, but then I'm like, I'm just a late adopter. It's okay. I'm a late bloomer. And I think it's partly because I've seen technology changes before. I worked in television for the first 20 years of my career and watched changes even basically from tape to digital, and that really changed people's jobs. The biggest one, though, the one that makes me think of the way AI is going, is streaming video. I worked at TBS and we made that transition from "we are a cable network" to panicking because everything was streaming. There was a whole TV Everywhere initiative where the cable networks were trying to get you to watch their stuff on multiple devices, and that was kind of the beginning. And we were trying to figure out, what does that mean? What does that mean for our jobs?
Elizabeth:
We had to produce things everywhere. And it was intense and stressful and scary. Then fast-forward 20 years and I'm looking at it thinking, "That didn't turn out like I thought." It evolved. Streaming is now actually a lot like cable television again. I was telling someone, this is funny, because now you go to Hulu and you can add channels and build your own cable service. So the thing that I've been taking away is, it's a long game, and if you get stressed at the beginning, you can burn yourself out and create panic, and you don't really need that in your life. So I'm trying to relax into it. You want to be aware and learn, but I'm also thinking, you know what? I want to see how other people are using it. How is this going to turn out?
Elizabeth:
What's the best thing for us? And particularly with AI, which is to me, radically different because there's these moral and ethical parts of it that I don't think we have had to, I haven't had to wrestle with in technology before. Before, it was more like, is this helpful? But this is more like, oh no, is this going to be bad? So it's a little bit more weight as well, if you adopt early and kind of get in there. So I like to play the kind of watch and see where this goes and where do I need to jump into the game.
Larry:
Yeah, and I love that you have the credibility of having been through this kind of thing before. As you were talking about it, I was thinking about 20, 25 years ago. I remember just fighting constantly with marketing people who wanted to violate people's privacy. Then Seth Godin had come along and said, you know permission-based marketing? That's the way to do it. And that's convention today, and all the laws and regulations reflect that. But we don't have that now. AI is still like the Wild West. It's still unfolding really quickly. When you look at this, especially with that lens of your TV history, which I love as a perspective on this, are you starting to see any things that you're really paying attention to? Like, this might be the thing that we look back on and go, boy, that was the wrong thing to worry about?
Elizabeth:
That's a really good question. Gen AI is just curious to me, because I was talking to a teammate yesterday and she's like, "I just don't want to release it until it's reliable," and I was like, "Yeah, that's the name of the game, right?" Getting reliable results. So a lot of times I'm just wondering, I know we're excited about it and we sometimes want to just say, let's use it. It's akin to, I got this new chainsaw and I need to paint the house, so I'll use the chainsaw. And it's like, well no, that's not the right tool. So really examining, what's the right tool for this job? It might be a different form of AI than gen AI, and that's something I'm really conscious of, because we get really excited about it. Let's consider the other ways to solve this problem and find the right solution.
Elizabeth:
Now certainly, we have this new toy, so let's see if the chainsaw can work, and we might innovate and find a special way to do it. But I'm really into the use cases lately. I was like, let's look at the use cases and how we solve this problem, and then really examine if this is the right method. Sometimes you have to go down the rabbit hole of trying it and be like, that wasn't the right method.
Larry:
Yeah, and I love that. I'm going to totally steal the chainsaw-for-painting-the-house line, because that kind of gets at it. You feel like there's some of that going on right now, but I think more of the point is backing away from "if all you have is a hammer, everything looks like a nail" with AI technology, and thinking back to the fundamentals. And in your work, there's an interesting confluence between what you were just saying about reliability, your teammate's concern about that, and also working in financial services and security. You've got a double load of the need for reliability and trustworthiness. Is that part of your concern about this? Are you concerned about the trustworthiness of the experiences you're creating?
Elizabeth:
Yeah, absolutely. And you just reminded me, I am a bit of a risk nerd.
