
Laura Costantino: Scaling Content Design to Work with LLMs – Episode 12
Content + AI
00:00
Transition from Content Design to Training Large Language Models
Discussing the shift from bespoke content creation to strategic content at scale, focusing on training large language models. Exploring the challenges, techniques like fine tuning, and methodologies used in working with vast amounts of content as data for model training.
Transcript
Episode notes
Laura Costantino
Laura Costantino is watching the emergence of AI in content professions from two interesting and valuable perspectives: as a content designer working on LLMs at Google and as an active participant in the social-media communities where content professionals gather.
In their work at Google, they have returned to their roots as a content strategist to manage the challenges that come with designing content at a massive scale.
Through their interactions in the community, they have had the chance to hear the concerns of content designers who are navigating the new world of AI - and to inspire them with advice and success stories.
We talked about:
their work at Google as a senior content designer training LLMs
how their content strategy background is helping in their current work
the difference in working with content at a huge scale, as is required in their work with large language models
how their work is operationalized in the ever-changing workflows at Google
the community of knowledge sharing that has arisen organically among a variety of content crafts at Google
their advice on how to cope with the rapid pace of change in the world of AI
how they work with data scientists, machine-learning engineers, and other AI collaborators
their cautiously optimistic view of the future of the content-design profession
their advice to content designers for taking a proactive and curious approach to new AI technologies and practices
Laura's bio
Laura Costantino (they/them) is a senior content designer and strategist working on AI and large language models (LLMs) at Google. For the past ten years, they have worked at the intersection of UX, content, and marketing for some of the world's largest tech companies. Laura developed a passion for storytelling early on and received an MA in Cinema Studies in San Francisco, where they worked as a curator for a range of film festivals and cultural institutions around the Bay Area. Outside of work, Laura is committed to mentoring people transitioning into UX and tech, advocating for content, and sharing advice on LinkedIn. They currently live in NYC, were born in Southern Italy, and speak both English and Italian fluently.
Connect with Laura online
LinkedIn
Video
Here’s the video version of our conversation:
https://youtu.be/EdgyXGC3xlI
Podcast intro transcript
This is the Content and AI podcast, episode number 12. The arrival of large language models and chatbots like OpenAI's ChatGPT, Anthropic's Claude, and Google's Bard is creating both existential concerns and new opportunities for content professionals. In their work as a content designer at Google and through their extensive professional networking, Laura Costantino has the chance to witness the full range of work experiences and personal emotions that come with the rapid adoption of new artificial intelligence practices.
Interview transcript
Larry:
Hi everyone. Welcome to episode number 12 of the Content and AI podcast. I'm really delighted today to welcome to the show Laura Costantino. Laura is a Senior Content Designer at Google, doing really interesting work around AI and content stuff. So welcome Laura. Tell the folks a little bit about your role there at Google?
Laura:
Hi, Larry. Thanks for having me. It's so nice to be here. Yeah, so I've been at Google for about a year and a half, but somewhat recently, maybe three and a half, four months ago, I moved from my previous team to my current team, and I am at the moment working as a senior content designer, training large language models. So that's my new job.
Larry:
Well, training large language models at one of the biggest tech companies in the world, that's pretty interesting, especially for folks in the content world. There's so much to ask about that. I guess the first thing I'd ask is what's the biggest change? What's the biggest difference in training a language model versus the content design work you were doing a year ago?
Laura:
Yeah, that's a great question. I came up to content design through content strategy and to an extent marketing as well. And so I think for me, it really helped to have that content strategy background, meaning really being familiar with content at scale, content governance. And I think that's been the biggest difference for me, that in my current role, I had to go back to my past and brush up on some of those skills that I think I learned more in the past, versus in my most recent roles as a content designer. I think my day-to-day was still a little bit more writing strings and felt a little bit more like bespoke and... I don't want to say in the moment because of course, ideally it wouldn't be in the moment, but unfortunately sometimes it is in the moment when someone asks you to write a string or edit a string, versus right now I do think my role, it's a lot more focused on the strategy at scale, and I do think it's a function of the role more than say, me growing in my career or something.
Larry:
That's so interesting because when you think about it, because most content design roles, like you just said, you're embedded in a specific product working on just strings and error messages, but also the narrative of the whole product and all that stuff, but then you move up a notch to this kind of thing and all of a sudden like, "Boy, I'm glad I have this content strategy background because I need it again."
Larry:
Tell me a little bit about how that manifests in training a large language model? It seems clear, I think I get why you need to be strategic about it, but can you talk a little bit about why you've had to go back in your toolbox for your content strategy stuff?
Laura:
Yeah, of course. So training data for a large language model, of course, we're talking about volumes of data that are really hard to wrap our heads around, and two techniques in particular that we've been using to train large language models are fine-tuning and reinforcement learning. And there are all sorts of methodologies that are used, and most methodologies require looking at content at scale, like ingesting it. And some of the technicalities, I admit, I don't fully understand myself, but I create metaphors in my mind or images of how I think certain things work. And I always imagine these large quantities of data, which in this case is really content, sentences and words and so on and so forth, being ingested into this box that then creates more content out of it.
Laura:
And so for me, I think in that sense, working on content at scale, because some of the content is content that is created by UX, but we also work with a lot of other people. So it's not so much that me as a content designer, I have a full handle on all the strings that are going to go into a flow, that's just not going to happen. And so it's more creating the guidelines. And some of that, of course, is the work of a content designer, but I think here it becomes a little bit more than just the guidelines in terms of style and voice and tone, but also operationally, how do we make sure that creating content at scale can work for the team, at a scale that is big enough to help train the model?
Larry:
Yeah, as you talk about that, I'm wondering, first, there's two things in there that really interest me. One is you're using content as data, and then data as a design material. So are you looking for patterns in the data and the content or... Because at scale, you can't just look at every data point and go like, "Oh, we'll treat this one this way." Tell me a little bit about that?
Laura:
Yeah, that's exactly it. And I think that's, again, going back to what I was saying, my days as a content strategist and doing some sort of, for example, taxonomy work, or thinking about, in the past, how to label certain kinds of data. And this isn't necessarily what I do now, but when I did that in the past, when I worked on categorizing content, a lot of what I had to do was looking at patterns and trying to figure out... And I have been in my head a little bit and loosely wanting to use the Pareto principle. If I remember correctly, 20% of the data reflects the other 80%, so you only need 20%. Maybe this isn't a really good explanation, but that's what I think: sampling through the data and trying to find patterns, just like you said, and seeing how the model is responding. And from that, figuring out how do we constantly improve it with new training data.
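[Editor's note: the sampling idea Laura describes, reviewing a representative slice of a large content set rather than every item, can be sketched roughly as below. This is an illustrative sketch only; the function name and data are hypothetical and not Google's actual tooling.]

```python
import random

def sample_for_review(responses, fraction=0.2, seed=42):
    """Draw a representative sample of model responses for review.

    Rather than reading every response, review a fraction (20% here,
    loosely in the spirit of the Pareto principle) and look for
    recurring patterns worth turning into guidelines or new training data.
    """
    rng = random.Random(seed)  # fixed seed so the review set is reproducible
    k = max(1, int(len(responses) * fraction))
    return rng.sample(responses, k)

# Example: sample 200 of 1,000 responses for pattern review.
responses = [f"response {i}" for i in range(1000)]
sample = sample_for_review(responses)
print(len(sample))  # 200
```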
Larry:
And you talked about that because you're actually in there training the model. And you mentioned two terms there, fine-tuning and reinforcement learning. Here's my little tiny brain's interpretation: fine-tuning seems like going one level deeper than prompt engineering, doing higher-level fine-tuning of the model itself. And then reinforcement learning, as I understand it, is a neural network thing that's like a Skinner box kind of reinforcement, giving little food pellets to the model when it gets something right. Is that how it works?
Laura:
From my understanding, yes, the reinforcement is a little bit more like saying, "This is good, this is bad," in a very simplified way, and that's one side of it. And then the other side, the fine-tuning, right now for me has been more working with a really large scale of content, like a really large amount of model responses.
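[Editor's note: the "this is good, this is bad" feedback Laura mentions can be sketched as human ratings converted into preference labels, which reinforcement-learning pipelines can later train on. All names below are illustrative assumptions, not any real Google pipeline.]

```python
from dataclasses import dataclass

@dataclass
class PreferenceLabel:
    """One human judgment on a single model response."""
    prompt: str
    response: str
    is_good: bool  # the simplified "this is good / this is bad" signal

def collect_feedback(prompt, response, rating, threshold=4):
    """Convert a 1-5 human rating into a binary preference label."""
    return PreferenceLabel(prompt=prompt, response=response,
                           is_good=rating >= threshold)

label = collect_feedback("Summarize this article.", "Here's a summary...", rating=5)
print(label.is_good)  # True
```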
Larry:
Yeah. Hey, and when you first started talking about this, you mentioned that the ultimate goal out of all this work with the modeling and your work in general is to operationalize it, to get it ensconced in your day-to-day work, I guess. How does that differ? Because I've seen a lot of that kind of work done in the content design world, but in the AI world, not so much. How does operationalization look in your world?
Laura:
Yeah, that's a good question because I do think we're still figuring it out.


