The FIR Podcast Network Everything Feed

May 14, 2025 • 31min

CWC 109: Thought leadership for agency growth (featuring Melissa Vela-Williamson)

In this episode, Chip talks with Melissa Vela-Williamson of MVW Communications about her unique journey in public relations and the importance of content creation. Melissa shares her background, highlighting her non-traditional path into PR and her passion for using public relations for social good. They discuss her focus on helping nonprofits and education clients, her role as a content creator, and her work as a columnist for the Public Relations Society of America. Melissa also delves into the impact of the COVID-19 pandemic on her business and the strategic approaches she took to maintain client relationships and grow her firm. They explore the significance of writing books and producing various types of content, emphasizing the value of building relationships and demonstrating thought leadership in the communications industry. [read the transcript] The post CWC 109: Thought leadership for agency growth (featuring Melissa Vela-Williamson) appeared first on FIR Podcast Network.
May 12, 2025 • 18min

FIR #464: Research Finds Disclosing Use of AI Erodes Trust

Debate continues about when to disclose that you have used AI to create an output. Do you disclose any use at all? Do you confine disclosure to uses of AI that could lead people to feel deceived? Wherever you land on this question, it may not matter when it comes to building trust with your audience. According to a new study, audiences lose trust as soon as they see an AI disclosure. This doesn’t mean you should not disclose, however, since finding out that you used AI and didn’t disclose is even worse. That leaves little wiggle room for communicators taking advantage of AI and seeking to be as transparent as possible. In this short midweek FIR episode, Neville and Shel examine the research along with recommendations about how to be transparent while remaining trusted. Links from this episode: The transparency dilemma: How AI disclosure erodes trust The ‘Insights 2024: Attitudes toward AI’ Report Reveals Researchers and Clinicians Believe in AI’s Potential but Demand Transparency in Order to Trust Tools (press release) Insights 2024: Attitudes toward AI Being honest about using AI at work makes people trust you less, research finds Should Businesses Disclose Their AI Usage? Insights 2024: AI’ Report – Researchers and Clinicians Believe AI’s Potential but Need Transparency New research: When disclosing use of AI, be specific Demystifying Generative AI Disclosures The Janus Face of Artificial Intelligence Feedback: Deployment Versus Disclosure Effects on Employee Performance The next monthly, long-form episode of FIR will drop on Monday, May 26. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript
Shel Holtz (00:05) Hi everybody, and welcome to episode number 464 of For Immediate Release. I’m Shel Holtz.
@nevillehobson (00:13) And I’m Neville Hobson. Let’s talk about something that might surprise you in this episode. It turns out that being honest about using AI at work, you know, doing the right thing by being transparent, might actually make people trust you less. That’s the headline finding from a new academic study published in April by Elsevier titled The Transparency Dilemma: How AI Disclosure Erodes Trust. It’s a heavyweight piece of research: 13 experiments, over 5,000 participants, from students and hiring managers to legal analysts and investors. And the results are consistent across all groups, across all scenarios: people trust others less when they’re told that AI played a role in getting the work done. We’ll get into this right after this. So imagine this: you’re a job applicant who says you used AI to polish a CV, or a manager who mentions AI helped write performance reviews, or a professor who says grades were assessed using AI. In each case, just admitting you used AI is enough to make people view you as less trustworthy. Now, this isn’t about AI doing the work alone. In fact, the study found that people trusted a fully autonomous AI more than they trusted a human
who disclosed they had help from an AI. That’s the paradox. So why does this happen? Well, the researchers say it comes down to legitimacy. We still operate with deep-seated norms that say proper work should come from human judgment, effort and expertise. So when someone reveals they used AI, it triggers a reaction, a kind of social red flag. Even if AI helped only a little, even if the work is just as good. Changing how the disclosure is worded doesn’t help much. Whether you say AI assisted me lightly, or I proofread the AI output, or I’m just being transparent, trust still drops. There’s one twist. If someone hides their AI use, and it’s later discovered by a third party, the trust hit is even worse. So you’re damned if you do, but potentially more damned if you don’t. Now here’s where it gets interesting. Just nine months earlier, in July 2024, Elsevier published a different report, Insights 2024: Attitudes toward AI, based on a global survey of nearly 3,000 researchers and clinicians. That survey found most professionals are enthusiastic about AI’s potential, but they demand transparency to trust the tools. So on the one hand, we want transparency from AI systems. On the other hand, we penalize people who are transparent about using AI. It’s not a contradiction. It’s about who we’re trusting. In the 2024 study, trust is directed at the AI tool. In the 2025 study, trust is directed at the human making the disclosure. And that’s a key distinction. It shows just how complex and fragile trust is in the age of AI. So where does this leave us? It leaves us in a space where the social norms around AI use still lag behind the technology itself. And that has implications for how we communicate, lead teams and build credibility. As generative AI becomes ever more part of everyday workflows, we’ll need to navigate this carefully. Being open about AI use is the right thing to do, but we also need to prepare for how people will respond to that honesty. It’s not a tech issue, it’s a trust issue. And as communicators, we’re right at the heart of it. So how do you see it, Shel?
Shel Holtz (03:53) I see it as a conundrum that we’re going to have to figure out in a hurry, because I have seen other research that reinforces this, that we truly are damned if we do and damned if we don’t. Because disclosing, and this is according to research that was conducted by EPIC, the Electronic Privacy Information Center, published late last November. They basically said that if you…
@nevillehobson (03:56) Yep.
Shel Holtz (04:18) disclose that you’re using AI, you are essentially putting the audience on notice that the information could be wrong. It could be because of AI hallucination. It could be inaccurate data that was in the training set. It could be due to the creator or the distributor of the content intentionally trying to mislead the audience. Basically, it tells the audience: AI, so it could be wrong. This could be false information. There was a study that was conducted, actually I don’t know who actually did the study, but it was published in the Strategic Management Journal. This was related specifically to the issue that you mentioned with writing performance reviews or automating performance evaluations or recommending performance improvements for somebody who’s not doing that well on the job. So on the one hand, you know, powerful AI data analytics increase the quality of feedback, which may enhance employee productivity, according to this research. They call that the deployment effect.
But on the other hand, employees may develop a negative perception of AI feedback once it’s disclosed to them, harming productivity. And that’s referred to as the disclosure effect. And there was one other bit of research that I found, and this was from Trusting News, research conducted with a grant, which found that what audiences really need in order for a disclosure to be of any use to them is specificity. They respond better to detailed disclosures about how AI is being used, as opposed to generic disclaimers, which are viewed less favorably and produce less trust. Word choice matters less: audiences wanted to know specifically what AI was used to do, with the words that the disclosers used to present that information mattering less. And finally, EPIC, that’s the Electronic Privacy Information Center, had some recommendations. They said that both direct and indirect disclosures, direct being a disclosure that says, hey, before you read or listen or watch this or view it, you should know that we used AI on it, and an indirect disclosure is where it’s somehow baked into the content itself. But they said, regardless of whether it’s direct or indirect, to ensure persistence and to meaningfully notify viewers that the content is synthetic, disclosures cannot be the only tool used to address the harms that stem from generative AI. And they recommended specificity, just as you saw from the other research that I cited. It says disclosures should be specific about what the components of the content are, which components are actually synthetic. Direct disclosures must be clear and conspicuous, such that a reasonable person would not mistake a piece of content as being authentic. Robustness: disclosures must be technically shielded from attempts to remove or otherwise tamper with them. Persistence: disclosures must stay attached to a piece of content even when reshared. That’s an interesting one. And format neutral: the disclosure must stay attached to the content even if it is transformed, such as from a JPEG to a .PNG or a .TXT to a .doc file.
Shel Holtz (07:40) So all kinds of people out there researching this and thinking about it, but in the meantime, it’s a trust issue that I don’t think a lot of people are giving a lot of thought to.
@nevillehobson (07:50) No, I think you’re probably right. And I think there doesn’t seem to be any very easy solution to this. The article where I first saw this discussed in detail, in The Conversation, talked about this in some detail, but briefly, they talk about what still is not known. And they start with saying that it’s not clear at all whether this penalty of mistrust will fade over time. They say as AI becomes more widespread and potentially more reliable, disclosing its use may eventually seem less suspect. They also mentioned that there is absolutely no consensus on how organizations should handle AI disclosure, from the research that they carried out. One option they talk about is making transparency voluntary, which leaves the decision to disclose to the individual. Another is a mandatory disclosure policy. And they say their research suggests that the threat of being exposed by a third party can motivate compliance if the policy is stringently enforced through tools such as AI detectors. And finally, they mentioned a third approach is cultural, building a workplace where AI use is seen as normal, accepted and legitimate.
And they say that we think this kind of environment could soften the trust penalty and support both transparency and credibility. In my view, certainly, I would continue disclosing my AI use in the way I have been, which is not blowing trumpets about it or making a huge deal out of it, just saying as it’s appropriate. I have an AI use statement on my website. It’s been there now for a year and a bit, and I’ve not yet had anyone ask me, so what are you telling us about your AI use? It’s very open. The one thing I have found that I think helps in this situation, where you might get negative feedback on AI use, is if you’ve written and published something that AI helped you construct, primarily through researching the topic. So it could be summarizing a lengthy article or report. I did that not long ago on a 50-page PDF, and it produced the summary in like four paragraphs, a little too concise. So that comes down to the prompt: what do you ask it to do? But I found that if you share clearly the citations, i.e. the links to sources that often are referenced, or rather are not referenced, let’s say, or you add a reference because you think it’s relevant, that suggests you have taken extra steps to verify that content, and that therefore means you have not just shared something an AI has created. And I think that’s probably helpful. That said, I think the report, the basis of it, is quite clear: there is no solution to this currently at hand. And I think the worst thing anyone can do, and that’s to The Conversation’s first point, leaving it a voluntary disclosure option, is probably not a good idea, because some people aren’t going to do it. Others won’t be clear on how to do it, and so they won’t do it. And then if it’s found out, the penalty is severe, not only for what you’ve done, but for your own reputation, and that’s not good. So you’re kind of between the devil and the deep blue sea here, but bottom line, you should still disclose, but you need to do it the right way. And there ought to be some guidance in organizations in particular on how to disclose, what to disclose, when to disclose. I’ve not seen a lot of discussion about that though.
Shel Holtz (11:10) Well, one of the things that came out of the EPIC research is that disclosures are inconsistently applied. And I think that’s one of the issues with leaving it to individuals or to individual organizations to decide how am I going to disclose the use of AI, and how am I going to disclose the use of AI on each individual application: you’re going to end up with a real hodgepodge of disclosures out there. And that’s not going to…
@nevillehobson (11:15) Mm-hmm. Right.
Shel Holtz (11:36) aid trust, that’s going to have the opposite effect on trust. EPIC is actually calling for regulation around disclosure, which is not surprising from an organization like EPIC. But I want to read you one part of a paragraph from this rather lengthy report that gets into where I think some of the issues exist with disclosure. It says, first and foremost, disclosures do not affect bias or correct inaccurate information.
@nevillehobson (11:49) Hmm.
Shel Holtz (12:03) Merely stating that a piece of content was created using generative AI or manipulated in some way with AI does not counteract the racist, sexist, or otherwise harmful outputs. The disclosure does not necessarily indicate to the viewer that a piece of content may be biased or infringing on copyright, either.
Unless stated in the disclosure, the individual would have to be previously aware that these biases, errors, or IP infringements exist
Shel Holtz (12:30) and then must meaningfully engage with and investigate the information gleaned from a piece of content to assess veracity. However, the average viewer scrolling on social media will not investigate every picture or news article they see. For that reason, other measures need to be taken to properly reduce the spread of misinformation. And that’s where they get into this notion that this needs to be regulated. There needs to be a way to assure people who are seeing content that it is accurate, and to disclose where AI was specifically employed in producing that content.
@nevillehobson (13:08) Yeah, I understand that. Although that doesn’t address the issue that kind of underpins our discussion today, which is that disclosing you’ve used AI is going to get you a negative hit for the fact that you did use the AI. So that doesn’t address that. I’m not sure that anything can address that. If you disclose it, you’ll get the reactions that The Conversation’s research shows, or the Elsevier research, I should say. If you don’t disclose it when you should, and you get found out, it will be even worse. So you could follow any regulatory pathway you want and do all the guidance you want; you’re still gonna get this until, as The Conversation reports, it dies away, and no one has any idea when that might be. So this is a minefield without doubt.
Shel Holtz (13:36) Right. Yeah, but I think what they’re getting at is that if the disclosure being applied was consistent and specific, so that when you looked at a disclosure, it was the same nature of disclosure that you were getting from some other content producer, some other organization, you would begin to develop some sense of reliability or consistency: okay, this is one of these, I know now what I’m going to be looking at here and can consume it through that lens. So I think it would be helpful. You know, not that I’m always a big fan of excess regulation, but this is a minefield. And I think even if it’s voluntary compliance to a consistent set of standards, although we know how that’s played out when it’s been proposed in other places online over the last 20, 25 years. But I think consistency and specificity are what’s required here, and I don’t know how we get to that without regulation.
@nevillehobson (14:50) No, well, I have to say I’m not a fan of regulation of this type until it’s been proven that anything else that’s been attempted doesn’t work at all. And we still don’t see enough of the guidance within organizations on this particular topic. That’s what we need now. Regulation, hey, listen, it’s gonna take years to get regulation in place, so in the meantime this all may have disappeared, doubtful, frankly, but I’d go the route of, we need something, and this is where professional bodies could come in to help, I think, in proposing this kind of thing. Others who do it share what they’re doing. So we need something like that, in my view. There may well be lots of this in place, but I don’t see people talking too much about it. I do see people talking a lot about the worry of getting accused of whatever it is that people accuse you of, of using AI. That’s not pleasant at all. And you need to have thick skin and also be pretty confident.
I mean, I’d like to say in my case, I am pretty confident that if I say I’ve done this with AI, I can weather any accusations, even if they are well meant; some are not, and they’re based not on informed opinion, really. It’s uninformed, I suppose you could argue. Anyway, it is a minefield and there’s no easy solution on the horizon. But in the meantime, disclose; do not hide it.
Shel Holtz (16:10) Yeah, absolutely. Disclose, be specific. And I wonder if somebody out there would be interested in starting an organization sort of like Lawrence Lessig did with Creative Commons, so all you had to do was go fill out a little form and then get an icon, and people will go, oh, that’s disclosure type C.
@nevillehobson (16:27) There’s an idea. There is an idea.
Shel Holtz (16:28) That’s it. That’s it. We need a Creative Commons-like solution to the disclosure issue. And that’ll be a -30- for this episode of For Immediate Release. The post FIR #464: Research Finds Disclosing Use of AI Erodes Trust appeared first on FIR Podcast Network.
May 12, 2025 • 19min

ALP 270: Limiting scope creep from the start

In this episode, Chip and Gini delve into the topic of scope creep in agencies. They discuss the bell curve of profitability and the importance of setting clear expectations from the first client conversation. They highlight strategies like dividing projects into 90-day scopes to regularly reassess goals and deliverables. The duo emphasizes the significance of internal communication, developing a culture of transparency, and ensuring team members understand project scope and costs. They also stress the need to build flexibility and cushion into initial pricing to manage minor scope changes and avoid financial strain. Finally, they agree on mastering financial understanding and regular one-on-one meetings for smoother agency operation. [read the transcript] The post ALP 270: Limiting scope creep from the start appeared first on FIR Podcast Network.
May 7, 2025 • 16min

FIR #463: Delivering Value with Generative AI’s “Endless Right Answers”

Google’s first Chief Decision Scientist, Cassie Kozyrkov, wrote recently that “The biggest challenge of the generative AI age is leaders defining value for their organization.” Among leadership considerations, she says, is a mindset shift, one in which there are “endless right answers”.  (“When I ask an AI assistant to generate an image for me, I get a fairly solid result. When I repeat the same prompt, I get a different perfectly adequate image. Both are right answers… but which one is right-er?”) Kozyrkov’s overarching conclusion is that confirming the business value of your genAI decisions will keep you on track. In this episode, Neville and Shel review Kozyrkov’s position, then look at several communication teams that have evolved their departmental use of AI based on the principles she promotes. Links from this episode: Endless Right Answers: Explaining the Generative AI Value Gap How Lockheed Martin Comms is working smarter with GenAI How AI Can Be a Game Changer for Marketing AI in 2025: 4 PR industry leaders discuss company policies, training, use cases and more The next monthly, long-form episode of FIR will drop on Monday, May 26. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript Hello everyone, and welcome to For Immediate Release episode number 463. I’m Neville Hobson. And I’m Shel Holtz. Reports on how communication departments are moving from AI experiments to serious strategy-driven deployment of gen AI are proliferating, although I’m still mostly hearing communicators talk about tactical uses of these tools. The fact is you need to start with strategy or don’t start at all. That’s the conclusion of Cassie Kozyrkov, Google’s former chief decision scientist, who warns leaders that gen AI only pays off when you define why you’re using it and how you’ll measure value. She calls gen AI automation for problems that have endless right answers. Now that warrants a little explanation. Traditional AI, she says, is for automating tasks where there’s one right answer, using patterns and data. It’s gen AI that automates tasks where there are endless right [00:01:00] answers, and each answer is right in its own way. This means old ROI yardsticks won’t work. Leaders have to craft new metrics that link every gen AI project to business value, not just a cool demo. This framing is useful because it separates flashy outputs from real, genuine impact. With that in mind, we’re gonna look at a few comms teams that are building gen AI programs around a clear, measurable strategy, right after this. Well, let’s start with Lockheed Martin’s communications organization, which set a top-down mandate: every team member is required to learn enough gen AI to be a strategic partner to the business. They hit a hundred percent training compliance early this year. They published an internal
AI Communications Playbook filled with do-and-don’t guidance, prompt templates, a shared prompt library, and monthly newsletters that surface new [00:02:00] wins. There are a few reasons that this is a worthy case study. First, the team generated savings you can count: for example, a recent video storyboard project ran 30% under budget and cut 180 staff hours. The team has fostered a culture of experimentation; there’s a monthly AI art contest that they host, inviting communicators to practice prompting in a low-risk environment, helping them learn prompt craft before they touch billable projects. And the human-in-the-loop discipline is built into the team’s processes: gen AI delivers the first draft or first visual; humans still own the final story. The takeaway: Lockheed shows that enterprise rollouts scale when you train first, codify governance next, then celebrate quick wins. Qualcomm corporate comms manager Kristen Cochran Styles said gen AI is now in our DNA. Qualcomm’s comms team is leaning on edge-based gen AI, running models on phones, [00:03:00] PCs, and even smart glasses to lighten workflows while respecting privacy and energy constraints. They have a device-centric narrative: they don’t just talk about on-device AI; the comms group uses the same edge pipeline that it promotes publicly. They have faster iterations occurring in their processes: drafting reactive statements, tailoring outreach to niche reporters, and summarizing dense technical research all happen at the edge, shaving hours off typical cycles. And there’s alignment with their reputation: because they’re eating their own dog food from their own silicon-powered AI stack, Qualcomm’s comms team reinforces the brand promise every time it ships content. Let’s take a look next at VCA, a chain of veterinary clinics; one of them was the one that I take my dog to. Joseph Campbell is a comms leader at VCA, and he’s echoed the strategy-first mantra. He noted that 75% of comms pros now use gen [00:04:00] AI, but more than half of their employers still lack firm policies, a gap he finds alarming. Campbell’s rule of thumb: AI can brainstorm and polish, but final messaging must retain human creativity, strategy and relationship building. VCA’s approach involves sandboxing, with teams practicing in non-public pilots before committing anything to external channels. Crafting guardrails is treated as urgent change management work, not paperwork, so they’re developing their policies in a very deliberate way. And they have an ethics checklist: outputs go through fact-checking and hallucination-screening steps just like any other high-stakes content. Now these individual stories of teams employing gen AI strategically sit against an industry backdrop that’s moving fast, with a tripling of adoption: three out of four PR pros now use gen AI, nearly three times the level from March of last year. And [00:05:00] efficiency gains are clear: 93% say AI speeds their work, 78% say it improves their quality. But speed by itself isn’t value. Cassie Kozyrkov’s endless right answers framework reminds us comms leaders still have to specify which right answers matter to the business. So let’s wrap this up with six quick takeaways for your team from these case studies. First, tie every gen AI experiment to a business result, whether it’s faster first drafts, budget savings, or higher engagement; write the metric before you invest. Invest in universal literacy: Lockheed’s hundred percent training
target created a shared language, a shared context, and without that, AI initiatives are gonna stall. Codify and update guardrails: VCA’s governance sprint shows policies can’t be an afterthought; they’re the trust layer that lets teams scale gen AI responsibly. [00:06:00] Prototype publicly when it reinforces brand stories: Qualcomm’s on-device PR work doubles as product proof. And keep humans critical: in every example, communicators use AI for liftoff, then rely on human judgment for nuance, ethics and style. Communicators have been through desktop publishing and social; gen AI is bigger than these. It won’t just make us faster. It will change how we define good work. That’s why the strategic questions upfront, what does value look like and how will we prove it, matter more than which model or plugin you pick. Good insights in all of that, Shel. I guess the first thought in my mind, it makes me wonder how those who argue against using AI would respond. And what’s prompted that thought is an article I was reading just this morning about an organization where the leadership don’t prohibit it, but no one uses AI, [00:07:00] on the belief that it doesn’t deliver value and it minimizes the human excellence that they bring to their client’s work. I wonder what they would say to things like this, because there are examples everywhere you look, and you’ve just recounted a load of the advantages of using artificial intelligence in business. I was reading one of the other articles that you shared, which you didn’t talk about, on the example of Mondelez, which is really quite interesting. It itemizes how AI plays a large role in their marketing, for instance, to create digital advertising content, product display pages, up to high-level creative assets including social media content and video ads. They talk about the 40 AI-augmented campaigns that they have implemented, which they say have led to measurable improvements in brand awareness, market share, and revenue. And that complements all the examples you were citing. They also say, rather than replacing humans, AI assists them in refining their ideas and generating content. The key role of humans is to ensure brand distinctiveness and [00:08:00]
And, you know, I love reading all this stuff, so it’s good to see it. I have to say. I, you know, in communication we talk about strategic planning as a core competency in the profession and IABC conferences and in textbooks, the strategic planning process is outlined repeatedly. I mean, there are, are are different models and different approaches, but it’s always based on what is it that you’re trying to accomplish. At the end of the day, you’re not trying to accomplish writing a good headline. Right. You’re trying to accomplish, uh, having somebody read the article because it had a good headline and walk away ready to buy your product or ready to vote for your candidate, or [00:10:00] whatever it it may be. And it seems like. Even though we have embraced this as a profession in general, we have by and large forgotten it when it comes to Gen ai just because we get so excited by the immediately evident capabilities, the ability to gimme five headlines in different styles. So I can. Pick one or, or adapt one to, uh, to, to, to what I wanted to say, create this image. I mean, there’s nothing wrong with that. These are all great uses of the tool, but ultimately we have to look at where it delivers value that aligns with the goals that we’re trying to achieve on behalf of the organization. And you talk about those organizations that say there is no value. I, I would suggest either they’re not looking, they have a, a bias against it at the leadership level. Or they have people at lower levels who haven’t figured out how to demonstrate that value, and therefore leaders are convinced that there isn’t any. But if you look at the examples we’ve shared here today, it, [00:11:00] it’s clear that you can align what you’re doing with Gen ai. To your organization’s business goals and your strategic plan and your business plan and the like, there’s, there’s, there’s no question that you, you can, uh, the question is why aren’t more people doing it? I completely agree with the decision scientists from Google’s belief that if you’re not being strategic about it, why are you doing it at all? Yeah. I mean, I think to me the, the key thing to keep remembering, and this could well be the kind of circling point you come around to, to repeat together again, as Mondelez says, while AI has been a game changer for them, it takes human ingenuity to get the most out of a technology that is available to everyone. And that, uh, is a point you mentioned from one of the examples that you gave that, um, how AI. Augments as opposed to replace or instead of that people talk about. Sure. But this needs emphasizing, I think, in a much, much bigger way. So Mondelez says, uh, again, a real simple point, but it’s, it’s good to say it. They [00:12:00] think AI is gonna help you do everything from creation of the brief all the way to actual actually trafficking the effort and putting it out into market. It’ll help you. So, um, that bears repeating, it’s not gonna do any of, all of that or any of that. It’s gonna help you do all of that. Hence, you know, AI augmenting intelligence. And I saw another different use of that phrase the other day, which has escaped my memories. Obviously wasn’t very memorable, but it was another example of it’s the human, that’s the key thing. Uh, not the technology, the technology tool that enables these things. So people’s eyes roll my view, leadership. No. 
And I think if leadership is going to pay attention to this in a way that is meaningful to the organization, there has to be an effort to bring managers into the loop, so that managers can help their employees feel good about this and understand it. And we’ve talked about the role of the manager here before. Yep. But this is a critical one: the emotional [00:13:00] side of managing. When you have a team of people who are confused and distressed and maybe worried about their futures with AI, to be able to assuage those concerns and pull people together into a team that works with these things so that they do deliver that value, that’s going to increase the value of that team and of those individuals. So there’s a lot of work to be done here, and it’s heartening to see organizations like VCA and Qualcomm and Mondelez doing it well and doing it right. And the more of these case studies we can see, the easier it’s gonna be for other organizations to basically adapt those concepts. Yeah, I agree. And in the case of Mondelez, the article was published in a publication called Knowledge at Wharton, from the Wharton School, University of Pennsylvania, at the end of April. I was actually quite amused to see the final text at the end saying that this article was partially generated by AI and edited with additional writing by Knowledge at Wharton [00:14:00] staff. Curious about what the additional writing is. But there, I would argue, that’s a simple but good example, fully disclosed, of the role AI played in them being able to tell that particular story. I don’t think that diminishes anything. If anything, it’s additional to it, hence the additional. I was gonna ask, did you find the article less readable because it was partly written by AI? Well, now I know that. How could I tell? That’s the thing. They disclosed it, and it’s good for them. I don’t think they needed to do that; again, it depends on how they felt. They don’t say what percentage of the additional was AI-generated, but I would imagine. Again, a good example. To me, it seems that you’ve got something that you wrote and you’re running it by your AI assistant to check for the flow, tone, all those things you kind of do with Grammarly a bit. I think at the very least, if you’re using Word, you can use the grammar checker and all those tools in there. Not very good, nothing nearly as [00:15:00] good as an AI tool to do these things. So that’s already with us and has been for quite a while. It’s getting better, but the human element is absolutely critical. So it would be interesting to know what that additional writing was, but it’s a good example. It is. And that’ll be a -30- for this episode of For Immediate Release. The post FIR #463: Delivering Value with Generative AI’s “Endless Right Answers” appeared first on FIR Podcast Network.
May 5, 2025 • 22min

ALP 269: Pricing psychology for agency clients

In this episode, Chip and Gini discuss the psychology of pricing within agencies. They cover topics such as the importance of being confident in your pricing, avoiding negotiating against oneself, and the benefits of premium pricing. Gini highlights her experiences with male and female negotiators, emphasizing how women often undervalue themselves. The duo debates the effectiveness of the ‘three pricing options’ strategy and its pitfalls. They also offer practical advice for owners to ensure their pricing sends the right message to clients and reflects the true value of their services. [read the transcript] The post ALP 269: Pricing psychology for agency clients appeared first on FIR Podcast Network.
Apr 28, 2025 • 19min

ALP 268: Identifying and managing agency owner burnout

In this episode, Chip and Gini discuss the prevalent issue of burnout among agency owners. They explore the different types of burnout, including cyclical and long-term burnout, and offer strategies to identify, cope with, and prevent it. Key recommendations include taking regular breaks, understanding personal energy drains and boosts, and adjusting work habits accordingly. They emphasize the importance of self-care, realistic time management, and the necessity to avoid making major decisions while burned out. Chip and Gini also share personal experiences and practical tips to help agency owners manage their workload more effectively. [read the transcript] The post ALP 268: Identifying and managing agency owner burnout appeared first on FIR Podcast Network.
Apr 28, 2025 • 1h 31min

FIR #462: Cheaters Never Prosper (Unless They’re Paid $5 Million for Their Tool)

A Columbia University student was expelled for developing an AI-driven tool to help applicants to software coding jobs cheat on the tests employers require them to take. You can call such a tool deplorable or agree with the student that it’s a legit resource. It’s hard to argue with the $5 million in seed funding the student and his partner have raised. Also in this long-form monthly episode for April 2025: How communicators can use each of the seven categories of AI agents that are on their way. LinkedIn and Bluesky have updated their verification programs in ways that will matter to communicators. Onboarding new talent is an everyday business activity that is in serious need of improvement. A new report finds significant gaps between generations in the PR industry when it comes to the major factors impacting communication. Anthropic — the company behind the Claude LLMs — warns that fully AI employees are only a year away. In his Tech Report, Dan York explains how Bluesky experienced an outage even though they’re supposed to operate under a distributed model. Links from this episode A Deep Dive Into the Different Types of AI Agents and When to Use Them Ethan Mollick’s LinkedIn post on ChatGPT o3’s agentic capabilities LinkedIn post on rumored OpenAI-Shopify integration I got kicked out of Columbia for building Interview Coder, AI to cheat on coding interviews Cluely Columbia student suspended over interview cheating tool raises $5.3M to ‘cheat on everything’ From the singularity community on Reddit: “Invisible AI to Cheat On Everything” (this is a real product) I used the ‘cheat on everything’ AI tool and it didn’t help me cheat on anything LinkedIn will let your verified identity show up on other platforms Bluesky’s Blue Check Is Finally Here Burning questions (and some answers) about Bluesky’s new verification system Bluesky Adds Blue Check System With a Twist A New Form of Verification on Bluesky – Bluesky Bluesky’s newly unveiled verification system is a unique and interesting approach How To Onboard Digital Marketing Talent According To Agency Leaders Center for Public Relations’ Global Communication Report uncovers key industry shifts and generational divides Exclusive: Anthropic warns fully AI employees are a year away AI: Anthropic’s CEO Says All Code Will Be AI-Generated in a Year Hacker News on Anthropic Announcement AI as Normal Technology Links from Dan York’s Tech Report Wait, how did a decentralized service like Bluesky go down? Manton Reece – Bluesky downtime New Features for the Threads Web Experience Facebook cracks down on spammy content by cutting reach and monetization WordPress 6.8 “Cecil” The next monthly, long-form episode of FIR will drop on Monday, May 26. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript
Neville Hobson: Greetings everyone, and welcome to For Immediate Release episode 462, our monthly long-form edition for April 2025. I’m Neville Hobson.
Shel Holtz: And I’m Shel Holtz in Concord, California, in the US. We’re thrilled to be back to tackle six topics that we think communicators and others in business will find interesting and useful. Before we jump into those topics, though, as usual in our monthly episode, we’d like to recap the shorter episodes that we’ve recorded since the last monthly. And, Neville, over to you. I think we’re…
Neville Hobson (2): Yeah, I think we are, Shel. Episode 456, that was our March monthly, recorded on the 24th of, or rather, published on the 24th of March. A lot of topics in that one; they addressed a variety of issues. For instance, publishing platform Ghost enabling the social web, employees quitting [00:01:00] over poor communication in companies, the UK newspaper launching AI-curated news. And there were three or four other topics in there too, plus Dan York’s tech report as usual. So that’s a mighty episode. And…
Shel Holtz: We did have comments. On the topic of whether artificial intelligence will put the expertise practiced by communicators at risk, Julie MayT wrote: It’s not about what we do anymore, but how we think, connect and interpret. Human value isn’t disappearing, it’s shifting, isn’t it? The real opportunity is in doubling down on creativity, context and emotional intelligence by communicating with kindness and empathy. Looking forward to tuning in. And Paul Harper responded to that comment saying, my concern is that AI, for many applications, completely misses emotional intelligence: cold words which are taken from the web, which does not discriminate between good and bad sources, truth or fake. And Julie responded to that saying, good point, Paul. When it comes to important [00:02:00] stuff where it really matters whether AI is giving us something real or fake, I usually ask for the source and double check it myself. ChatGPT also has a deep research function that can help dig a bit further.
Neville Hobson (2): Okay, so our next one, 457, was published on the 28th of March. And this, I found, was a really interesting discussion, a very timely one, talking about communicating the impacts of Mr. Trump’s tariffs. And we talked about that at some length. Our concluding statement in that episode was communicators should counsel leaders on how to address the impacts of those tariffs. And I believe we have a comment on that, Shel.
Shel Holtz: From Rick Murray, saying: So true. Business models for creative industries are being turned upside down; revenue and margin streams that once fueled agencies of all types don’t need to exist now and won’t exist in three years.
Neville Hobson (2): Well said, Rick. Well said. 458, which we recorded, or published, on the 3rd of April: this was, I thought, a [00:03:00] really interesting one, and we’re gonna reference it again in this episode. This was about preparing managers to manage human-AI hybrid teams. A lot of talk about that, and how, ready or not, this is on the horizon. It’s coming, where we will have this in workplaces, and we talked about that at some length in that episode, looking at what it means for managers and how far businesses are from enabling their managers to succeed in the new work reality. We also added a kind of mirror or parallel element to this: it’s also about helping employees understand what this means to them in the workplace if they’ve got AI colleagues. So I don’t think we had any comments to that one,
Shel, but it’s got a lot of views, so people thought about it, just didn’t have any comments at this point. But great topic.
Shel Holtz: Uh, I think we left them speechless, if we did.
Neville Hobson (2): Yeah, exactly. So maybe we’ll get some after this episode. 459, which we published on the 9th of April, [00:04:00] looked at how AI is transforming content from passive to interactive. We discussed the evolving landscape of podcast consumption, particularly in light of Satya Nadella, the CEO of Microsoft, and his innovative approach to engaging with audio content through AI. So not listening to the podcast: he has his favorite chatbot, not ChatGPT of course, it’s Copilot, talk to the transcript, and he engages that way. Interesting. I’ve seen comments elsewhere about this that say, why on earth do you wanna do this when you can listen? Well, everyone’s got different desires and wishes in this kind of thing. But it seems to me a feasible thing to do, for the reasons he describes why he’s doing it. And I believe it attracted a number of comments, did it not, Shel?
Shel Holtz: We did, starting with Jeff Deonna, who wrote: to be honest, I find this approach deeply disrespectful to podcast hosts and their guests. It literally silences their human voices in favor of a fake conversation with a soulless [00:05:00] algorithm. Now, I responded to that. I thought that Cliff Notes would be a reasonable analogy. People, rather than reading Silas Marner, read the Cliff Notes, where some soulless summarizer outlines the story and tells you who the key characters are so that you can pass a test, and it silences the voice of the author. And yet we didn’t hear that kind of objection to Cliff Notes. We’ve heard other objections, of course: you should read the whole damn book, right? But I think people have been summarizing for years. Executives give reports to their admins and say, write me a one-page summary of this. And now we’re just using AI to do the same thing. I don’t know if you had any additional thoughts on Jeff’s comment.
Neville Hobson (2): Sure. I left a comment; I just replied to his comment as well, saying, I didn’t say these words, but effectively it was a polite way of saying, I disagree; sorry, you’re not right with this, for the reasons you’ve outlined. I don’t have the comment open on my [00:06:00] screen now, so I can’t remember the exact words I used, but I thought I couldn’t let him get away with that without a response.
Shel Holtz: Well, we had another comment from Kevin Anselmo, who used to do the Higher Education podcast on the FIR Podcast Network.
If he [00:07:00] walks away with actionable items based on hearing or reading a summary of our transcript, one more way to get to it. I agree. And Mark Hillary wrote, why would you need a transcript for chat GPT though? Just feed it the audio and it could work out what is being said. Anyway, I. Neville Hobson (2): Yeah, I replied to him as well. We had quite an interchange. I can’t remember if it was on LinkedIn or on on Blue Sky, I can’t remember which, which service now. Um, but um, he was gonna go and experiment himself with something else. Uh, ’cause what he described, and someone else was left to comment about this as well. Actually, I think that was on Blue Sky too, that, um, talked about, uh, you know, why would you wanna do this a bit bit like GE actually, not like Jeff. It wasn’t just alleging disrespect, it was saying, why would you wanna do this? Um, when I, you know, it was actually Mark who said he’d uploaded an MP three. And, uh, it had done the job. It actually hadn’t, uh, chat. GPT got the MP three, created the transcript from it, and then it did what it [00:08:00] needed to do. So the transcript is essential to. Shel Holtz: Whether you created Issa. Nevertheless, Neville Hobson (2): these, these, yeah, these, these great comments are, are fab to have these I must have been extends the conversation. Okay. So then four 60, which we published on April the 14th. This one talked about layoffs in the United States primarily, and the return of toxic workplaces and the big boss unquote era. Uh, the tide is turning. We started off and assessed that I mentioned. We’re seeing not, not the same and not layoffs per se, but people quitting here in the UK for different reasons. But this turmoil in this and toxicity in the workplace is part of the reasoning. So we explore the reasons behind the layoffs in the US are the impact of CEO Tough talk and how communicators can help maintain a strong non-toxic workplace. So that was good. We have comments too, don’t we? Shel Holtz: We do.[00:09:00] Starting with Natasha Gonzalez who says something that stood out for me was a point that Neville made about employees in the UK who are resigning from jobs due to toxic workplace culture, rather than being laid off as in the us. I imagine this isn’t unique to the uk. And then Julie MayT, who was the first comment she’s going to bookend our comments, wrote that organizations in the US are starting to see we cracks in psychological safety and trust disappearing. Then all those folks who keep everything ticking along will start to quietly disengage. It’s up to us, calms people to be brave enough and skilled to say on a wee minute, that message isn’t landing the way you think it is. While the big wigs are busy shouting, spinning, and flexing, it’s us who need to rock up with the calm, clear human communications, no drama, ram, just stuff that makes sense and actually help folks to figure out what the hell is [00:10:00] going on and what to do next. Neville Hobson (2): Good comment Mr. Bit. And that takes us to the last one before this episode, episode 4 61. We published on the, on the 24th of April that looked at trends in YouTube video two reports in particular that really had interesting insights on virtual influences and AI generated videos. And the bit that caught my attention mostly was, uh, news that every video uploaded to YouTube. 
so you take your video, you upload it, can be dubbed into every spoken language on the planet, with the speaker’s lips reanimated to sync with the words they are speaking. I mean, this is either terrifically exciting or an utter nightmare that is approaching fast. So we talked about that, and we haven’t had any comments to that one yet, but this is a topic I’m seeing quite a bit being discussed online in various places. So this is just a start of this, I think. [00:11:00] So that takes us to the end of the recap, Shel.
Shel Holtz: So I didn’t see it. Okay, lemme talk about that.
Neville Hobson (2): And last but certainly not least, I want to mention a new interview that we posted on the 23rd of April. This was with Zora Artis in Australia, who we interviewed on an article she wrote on the Poppulo blog on bridging AI and human connection in internal communication. It was a really, really good discussion we had with her; it’s definitely worth your time listening to this one. You will learn quite a lot from what Zora has to say on this topic. What did you think of it, Shel? It was good, wasn’t it?
Shel Holtz: It was fascinating, and I read that post on the Poppulo blog and also was engaged in a conversation with Zora at the Team Flow Institute, where we’re both research fellows. She raised it, and it led to a conversation with all the fellows [00:12:00] on this notion of what would a board of directors do if AI was in the room with them right now? What would they use it for? How would they take advantage of it? Some fascinating discussion. So worth a listen. Also up now is episode number 115 of Circle of Fellows, the monthly livestream panel discussion that people who watch live are able to participate in, in real time. This was about communicating amidst the rise of misinformation and disinformation. Brad Whitworth moderated this installment of Circle of Fellows with panelists Alice Brink, Julie Holloway, and George McGrath. Sue Heuman was supposed to participate but woke up feeling ill, and did send in some written contributions that were read into the discussion. So a good one. I’ve listened to it; you should too. It’s a very timely topic. And just to let you know about the next Circle of Fellows, episode 116 [00:13:00] is scheduled for noon Eastern time on Thursday, May 22nd. The topic is moving to teaching. This is something a lot of communicators do: become adjunct professors or full professors, or even tenured professors. And we’ll be having a conversation with four IABC fellows who have done just that: Cindy Smi, John Clemens, Mark Schumann, and Jennifer W. And in fact, I’m speaking at Jennifer W’s class via Zoom pretty soon, so that’ll be a fun one too. You can mark that one on your calendars: May 22nd, noon Eastern time. And that’ll take us to the start of the coverage of our topics for this month, but only after we turn things over to an advertiser for a moment. [00:14:00] As we have been discussing for some time, AI agents are coming, and to a degree they’re already here. Ethan Mollick, the Wharton professor and, I guess you’d call him, an AI influencer, posted this observation to LinkedIn a few days ago. He wrote: I don’t think people realize how much even a mildly agentic AI system like ChatGPT o3 can do on its own. For example, this prompt works in o3, zero-shot: Come up with 20 clever ideas for marketing slogans for a new mail-order cheese shop.
Develop criteria and select the best one. Then build a financial and marketing plan for the shop, revising as needed, and analyzing competition. Then generate an appropriate logo using the image generator, and build a website for the shop as a mockup, making sure to carry five to 10 cheeses to fit the marketing plan. With that single prompt, in less than two [00:15:00] minutes, the AI not only provided a list of slogans, but ranked and selected an option, did web research, developed a logo, built marketing and financial plans, and launched a demo website for me to react to. The fact that my instructions were vague and that common sense was required to make decisions about how to address them was not a barrier. And that’s an OpenAI reasoning model, not an actual agent built to be an agent, to take on autonomous tasks in sequence, multiple tasks in pursuit of a goal. With agents imminent, HubSpot shared a list of seven types of agents in a post on its blog, and I thought it would be instructive, given what Professor Mollick wrote, to go over these seven categories or classes of agents and where they intersect with what we do as communicators. Now, I’ll give you the caveat that somebody else may develop a different list; somebody else may slice and dice the [00:16:00] types of agents differently. But this is the first time I’ve seen this categorization, so I thought it was worth going through. They start with simple reflex agents, which operate based on direct condition-action rules, without any memory of anything that you may have interacted with it about before. So in PR, we could use this for automated media monitoring alerts: set up agents that trigger instant alerts based on keywords that appear in news articles or on social media. That lets you respond quickly. You could have some basic chatbot responses, right, simple chatbots on internal or external platforms that will answer frequently asked questions with pre-programmed answers about things like, I don’t know, office hours, basic company information, dates of upcoming events. And then you could filter inbound communication: automatically flag or filter incoming emails or messages based on keywords that indicate urgency or specific topics, and route [00:17:00] them to the appropriate team member to respond to. The second type of agent is a model-based reflex agent. These maintain an internal model of the environment to make decisions, considering past states as well as what you’re asking it to do right now. So you could use a contextual chatbot: develop these chatbots for websites or internal portals that can maintain conversational context. It can remember previous interactions and then provide more relevant information or support when the employee or the customer comes back for a follow-up or for additional information. Do sentiment monitoring with that historical context: agents that track media or social media sentiment over time can identify trends and give you historical context to current conversations. So, you know, something’s being discussed around the organization; it can say, well, two weeks ago this conversation happened, and that weighs on what’s going on in these [00:18:00] conversations today. And then there’s automated information retrieval: agents that can access and synthesize information from internal databases or external sources based on what you ask, providing more comprehensive answers than you get from the simple reflex agents.
Goal-based agents make decisions to achieve a specific goal, planning a sequence of actions to reach that objective. This is what most of us think about when we're thinking of agents: automated press release distribution, social media campaign management, internal communication workflow automation. This is all possible here. I think I referenced on an earlier episode that I used a test agent, one that I think Anthropic had set up, and I had it go out to my company's website, identify our areas of subject matter expertise and the markets we're in, then go out and find 10 [00:19:00] good podcasts with large audiences where we could pitch our subject matter experts as guests and where it would be an appropriate pitch. And I sat back and watched while it did all of these things. So this is what we've got coming. Fourth are utility-based agents that choose actions that maximize their utility, or a defined performance measure, considering various possible outcomes. We can use these to optimize communication channel usage, right? Analyze how audiences engage across different communication channels and recommend the most effective platforms for specific messages, or desired reach, or desired impact. I can use this for crisis communication simulation and planning, and personalized communication delivery. Fifth is learning agents that improve their performance over time by learning from their experiences. You can use this to refine your message targeting, to improve the natural language understanding of chatbots that are engaging with customers or employees or whoever, and to predict [00:20:00] communication effectiveness: they can analyze a number of factors, like message content, timing, and audience demographics, to predict the potential reach and impact of your communications, letting you make adjustments. Sixth are hierarchical agents that break down complex goals into smaller, more manageable sub-goals. Here you'll have higher-level agents overseeing the work of lower-level agents, so you'll have a human manager managing an AI agent who manages AI agents. Think large-scale communication projects, multi-channel campaigns, and streamlining the approval process as use cases. And finally, there are multi-agent systems. These are multiple agents interacting with each other to achieve a common goal or individual goals: integrated communication planning and execution; managing online reputation, with agents monitoring different online platforms, analyzing sentiment, and coordinating responses or engagement based on a unified strategy; and then [00:21:00] cross-departmental communication coordination. So we need to understand the distinct capabilities of these different types of agents, and if we do, we'll be able to leverage them to automate, to gain deeper insights, to do better personalization, and to better achieve our objectives. And I think this is also a good point to mention something else. I have not had a chance to read it, because you said you saw it and commented on it today, and it's still early here where I am. But Zora Artis, our interview guest, posted something that kind of fits in here too, right? Neville Hobson (2): Yeah, she shared a post on LinkedIn which I found quite intriguing, written by Jade Beard Stevens, who's the Director of Digital and Social Innovation at YMU in London. Brief post, but it says it all; I gotta read it out. It's quite short. She says: I wasn't shocked, but still had to share.
Rumor has it that OpenAI is quietly working on a native Shopify checkout inside ChatGPT. Apparently leaked code shows a Shopify checkout [00:22:00] URL, Buy Now, product offers, ratings. No redirects, no search; just chat, compare, and buy in one flow. If this happens, Google, TikTok, even product pages as we know them are all about to change. This isn't just another e-commerce update. This is the merger of search and checkout. This is AI becoming the new storefront. Brands will need to optimize for AI-first visibility, not just SEO. This could be bigger than TikTok Shop, and it's already happening. Now, is this agentic AI? I don't know, Shel. It kind of fits somewhere in this overall picture of tools emerging, methods emerging. Look at the seven things you read out; there's some real interesting stuff in there to deep-dive into. But what Jade mentions is definitely something to pay attention to, even if you're not in retail or e-commerce or any of that. There's a developing conversation on Reddit about this, which goes into more detail on what's happening. I did a quick search on [00:23:00] this topic generally, to see if anything else was talking about it. I did find something which isn't this: a Shopify AI chatbot via ChatGPT, as the title of the app goes, put out by, not Shopify, beg pardon, Shockly, a company called Shockly that builds tools for vendors on Shopify to sell their stuff. This isn't it, but it has been around since September of 2024, and it is actually quite interesting. It's an app you install; I see it's got just under 30 ratings, all five out of five stars, from vendors. It is all to do with enabling your whole storefront using a tool from ChatGPT. What Jade's post talks about is this sort of [00:24:00] thing happening natively within ChatGPT itself. So that's a slightly different proposition, but something like this is coming. You've already got third-party apps doing this; now you're gonna have a native app doing this. And, well, I don't wanna get hung up on the word agentic here, but if this enables you to complete the whole buying process, from interest to purchase to signing up and paying for it, all within ChatGPT, that will appeal to quite a few people, I think, if it's offered as something better, faster, less stressful, less hassle, easier than doing it otherwise in Shopify. It'll attract attention. So add this one to the list of things to pay attention to as well. Shel Holtz: Yeah, and whether that's part of an agent or not, I think, depends. It could absolutely be; I could see how that would work in an agentic environment. I'm thinking of giving the agent the [00:25:00] assignment of buying me a new mirrorless camera, as long as I provide it with the criteria: my price limit, the features it needs to have, how soon it can be delivered, which brands I don't want it to consider. Go out and do comparisons of the different models from different manufacturers that meet my criteria, then do a price comparison to find the best price. Once you have found the best price, buy it and have it delivered, so that I don't have to do anything else. That's an agent. So again, you know, if there's a purchase at the end, what can communicators do with that?
I don't know how much the PR folks can do with that, but the marketing side of the house can probably do a ton with it. Neville Hobson (2): Yeah. So, one more to pay attention to. I was looking through the HubSpot article you referenced, and a couple of things in there struck me about their views. One, under the autonomous AI agents paragraph, is that it's always a good idea to keep a human involved in any AI operation. Absolutely [00:26:00] agree with that. There's a lot of very useful information in HubSpot's piece, some good explainers of what some of this stuff means. And then there's the answer to the question about preparing for an agentic AI future: experimenting. I think the concluding sentence probably summarizes the whole thing: The future is agentic. Will you be ready? Now, that's what we asked in 458 when we talked about this topic, and I wonder if we'll be asking it again after this one. We'll see. Shel Holtz: Undoubtedly we'll be asking this for some time, because even after the agents have fully arrived and are available, I think there's going to be a lot of people in our profession and across industry who are not ready. Neville Hobson (2): Opportunity for... Shel Holtz: And we'll talk about that more when we cover another story later. Neville Hobson (2): We will. Yeah. So let's take a look at something quite interesting that popped up in the last few days. [00:27:00] Imagine an AI tool that promises to help you cheat on everything from job interviews to academic exams. That's exactly what Cluely offers. Created by two former Columbia University students, Chungin "Roy" Lee and Neel Shanmugam, Cluely acts as an invisible AI assistant that overlays realtime support onto any application a user is running. It gained attention and controversy after Roy Lee was suspended from Columbia for using an early version during a job interview. Despite this, Cluely has just raised $5.3 million in funding from investors, promoting its vision of true AI maximalism, where AI can assist in any life situation without detection. The tool is designed to be undetectable, providing realtime suggestions during interviews, exams, writing assignments, and more, much like an augmented reality layer but for conversation and tasks. Supporters argue it could level the playing field for those who struggle with traditional [00:28:00] assessments, but critics warn it crosses a serious ethical line, potentially devaluing qualifications and undermining trust in recruitment and academic credentials. A realtime interview assistant raises questions not just about competence but about honesty and disclosure, and disclosure rarely happens. Interestingly, The Verge tested it. Their real-world testing found that Cluely is still very rough around the edges: technical issues, latency, and clunky interactions make it more proof of concept than polished product, at least for now. And did I mention they just got over $5 million in investor funding? The founders defend the provocative framing. They describe cheating as a metaphor for how powerful AI assistance will soon feel, much like the early controversies over calculators or spellcheck, as they say. Not quite the same thing, I don't think, Shel. But are we looking at the next Grammarly, or are we opening the door to a darker future where nobody can be sure what's real anymore?
So the question for you then, Shel, is: what does this tell us about the [00:29:00] blurring lines between assistance and deception in an AI-driven world? Shel Holtz: Well, I think there are a couple of ways to look at this. I did hear Lee interviewed on Hard Fork. It was a great interview, and he made a couple of points. First of all, he said that, having been through these types of interviews (this is the kind of interviewing you do for a coding job), the tests they give you have absolutely no relevance to the kind of work that you're doing. You're gonna do this once for the interview, and then you're never gonna do it again. So he doesn't think that helping people figure out how to do that particular exercise is all that much of a cheat. But he also said that everybody programs with the help of AI these days, and he says it just doesn't make sense to have any kind of interview format that assumes you don't have the use of AI to help you code. I absolutely see that point, but on the other hand, I think this is [00:30:00] just one instance of the kind of thing that AI is going to enable. And there will be times when it can be very problematic, much more problematic than in this case. If somebody can cheat on, say, their legal exam or their medical exam, then you've got a problem: somebody who's not prepared to go out there and operate on you has passed the boards because they had help from a program that was written to help them cheat and pass. So it's the type of thing that society needs to be thinking about, and isn't yet. Neville Hobson (2): So if I get this right from what you said, Roy Lee thinks it's okay to cheat in coding 'cause it's a stupid question to ask and you're only ever gonna do it once. So therefore it's okay to cheat, meaning you actually pretend you do know how to do this even though you don't. I mean, that is bullshit, frankly, truly. Don't you think? Shel Holtz: Well, his point is that, yeah, you don't know [00:31:00] how to do it, but you don't have to, because you're never going to on the job. Neville Hobson (2): So don't even take the exam, and don't apply for that job. That's what I would say. Shel Holtz: I guess then you don't get any jobs, right? Neville Hobson (2): Well, cheating is cheating. Shel Holtz: Well, yeah, it's cheating. But his point is that the cheating in this instance isn't going to affect your ability to do the job, whereas in other instances it would. I'm not defending it, understand; I'm just telling you what he said. Neville Hobson (2): Yeah, sure. Yeah. But it's still cheating, I would say. I mean, to me this is the same as saying someone's a little bit pregnant. You know, that kind of stupid defensive argument. This is an indefensible situation, in my view. Shel Holtz: Of course, it used to be considered... Neville Hobson (2): Yeah, but no, you can't do it by degrees, Shel, I don't believe. Honestly, I don't. You are cheating or you are not. And in this case, again, from how you describe what Roy Lee said, effectively it's saying, well, this is a dumb question to ask and [00:32:00] I'm never gonna do this again, so I'll get this thing to do it for me, basically. And they won't know this. That's the other thing: they do not know this. They think, here's a smart guy, this fella; let's give him the job. What a ridiculous outcome.
And the other ones you mentioned are matters of degree, you know, taking legal exams or passing to be a surgeon. Yeah, they're more serious, but they're all the same: they're cheating. But then I kind of flip a bit by saying that this is society as we are. I'm afraid this is humans doing this. This will be out there, and it makes it even more difficult to know what's true and what's not, and who you can trust and who you can't. So, you know, welcome to the new world. Shel Holtz: I think the adaptation that has to happen has to happen on the part of the people conducting the interviews, not the people taking them. And the reason for that is, I mean, if you think about it, it used to be considered cheating to bring a calculator into... well, they mentioned that. Neville Hobson (2): That's the argument he gives. Ridiculous. Shel Holtz: Yeah. Well, I mean, everybody's allowed to use a [00:33:00] calculator now because the people... Neville Hobson (2): That was 60 years ago. Yeah. So maybe in 50 years this would be normal. Yeah. Shel Holtz: ...the people who conduct the tests came to realize that the people who do the work are able to use calculators, so they should have been part of the test all along. So I think that's a legitimate argument; not a legitimate argument for cheating, but for updating the testing so that people don't feel like they need to. Neville Hobson (2): So in the meantime, that's not the landscape, so they need to develop it. So maybe the simplest way to do this is send your AI agent in to take the exam for you. Has that happened? Shel Holtz: Well, there are people doing that for job interviews. Yeah, of course. They're probably pretty close to that. Yep. We've seen some interesting developments recently with two platforms taking different approaches to verification, and I think some of this may be a little backlash to X, where now you can just buy the blue check mark and it doesn't actually verify anything other than that you ponied up the money for it. But LinkedIn and Bluesky [00:34:00] have taken steps with their verification programs. Let's start with LinkedIn, which is allowing verified identities to extend beyond its own platform. This change means your verified LinkedIn identity can now be visible on other platforms, a move designed to enhance trust and transparency across the internet. The system leverages open standards and cryptographic methods to ensure authenticity and security. What makes this particularly interesting is how it integrates with Adobe's technology. Adobe's Content Credentials system is one of the tools supporting this cross-platform verification. So when you verify your identity on LinkedIn, that verification status can essentially travel with you to other websites and services that support these standards, including Adobe's Behance, a site that helps creators and people who need to hire creators connect. Now, this is a fundamental shift in how verification works. Rather [00:35:00] than a siloed verification system on each platform, LinkedIn is embracing an interoperable approach that lets your verified status function as a digital passport of sorts. Now, while it's too bad this isn't tied directly to the fediverse protocols, the significance for communications professionals can't be overstated. As content creation becomes increasingly distributed across platforms, having a verified identity that travels with you simplifies your ability to establish authenticity in multiple spaces.
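A quick technical aside for the curious: "open standards and cryptographic methods" boils down to signed claims. Here is a toy Python sketch of that idea using the widely available cryptography package. To be clear, this is not LinkedIn's or Adobe's actual implementation (Content Credentials manifests are far richer), and every name in it is hypothetical; it only illustrates why a signed identity claim can be checked on any site that trusts the issuer's public key.

```python
# Toy illustration of a portable, signed identity claim. NOT LinkedIn's or
# Adobe's real system; the claim fields and issuer are made up for the demo.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()  # held privately by the verifying platform
issuer_pub = issuer_key.public_key()       # published so anyone can check claims

claim = json.dumps({
    "handle": "example-spokesperson",      # hypothetical identity being vouched for
    "verified_by": "Example Issuer",
    "issued": "2025-04-28",
}).encode()
signature = issuer_key.sign(claim)         # the issuer attests to the claim

def check(claim: bytes, signature: bytes) -> bool:
    """Any third-party site can run this with only the issuer's public key."""
    try:
        issuer_pub.verify(signature, claim)
        return True
    except InvalidSignature:
        return False

print(check(claim, signature))             # True: the claim is authentic
print(check(claim + b"tampered", signature))  # False: any edit breaks it
```

The design point is that the verification travels with the claim: the site doing the checking needs only the issuer's public key, not a live connection to the issuing platform, which is what makes a "digital passport" portable.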
For organizations managing multiple spokespersons or content creators, this can streamline verification processes considerably. Meanwhile, Bluesky has taken a different but equally innovative approach to verification by introducing a new blue check system just last week. They're implementing what they call a user-friendly, easily recognizable blue check mark that will appear next to verified accounts. [00:36:00] The platform will proactively verify authentic and notable accounts while also allowing trusted verifiers, select independent organizations, to verify accounts directly. Now, what's really interesting about Bluesky's approach is how it distributes verification authority. Under this system, organizations like the New York Times can now issue blue checks to their journalists directly within the app, and Bluesky's moderation team will review each verification to ensure that it is what they say it is. This creates a more decentralized verification ecosystem, rather than putting all verification power in the hands of the platform itself. Bluesky's verification system has transparency built in. Users can tap on someone's verified status to see which trusted verifier granted the verification. This adds a layer of context that helps users understand not just that the account is verified, but who [00:37:00] vouched for it. Now, before this update, Bluesky had been relying on a domain-based verification system, letting users set their website as their username. For example, NPR uses @npr.org, and US senators verify their accounts with their senate.gov domains. This method is gonna continue alongside the new blue check mark system, which gives users multiple ways to establish authenticity. Now, the evolution of these verification systems comes at a critical time, with scammers and impersonators on the rise. A recent analysis found that 44% of the top 100 most-followed accounts on Bluesky had at least one doppelganger account attempting to impersonate them. For those of us working in organizational communication, these developments signal a series of important trends. First, verification is important, and it's becoming distributed and contextual. Rather than a single authority declaring who's authentic, we're moving toward [00:38:00] ecosystems where multiple trusted entities can vouch for identity. Second, cross-platform verification is emerging as a solution to digital fragmentation. LinkedIn's approach in particular shows how verified identity could function seamlessly across digital spaces rather than being siloed within individual platforms. Third, transparency about who is doing the verifying is becoming important. Bluesky's approach of showing which organization verified an account recognizes that the source of verification matters almost as much as the verification itself. For organizations, these trends suggest that we really ought to be thinking more holistically about verification strategies. Rather than just getting verified on each individual platform, we are really gonna need to start thinking about establishing verified digital identities that can travel with our content and our spokespersons across the net. Neville Hobson (2): Very interesting developments. I [00:39:00] hadn't familiarized myself much with the LinkedIn one, but that's equally very interesting. Bluesky, though, to me is definitely moving ahead in a very interesting area, unlike X. I think you mentioned, Shel, that some people are seeing this as like a slap in the face to Musk.
That's probably very tangential, way down the priority list, but yes, I bet they are. But I found most interesting the way in which they've gone about this, in terms of the levels of verification. You've got your little blue check mark looking slightly different depending on the verification system. And by the way, I think it's a smart move to follow the blue check, although technically it's not a blue check; it's a white check on a blue background. But whatever, people call it a blue check mark because it's familiar thanks to Twitter as was, which trashed it completely, 'cause the only verification now means you've paid Musk so many dollars a month and therefore you're verified. I mean, that's Twitter's, or X's, definition of what verification means. No value to it, in my view, Shel, frankly. But this, though, [00:40:00] I think is far more interesting, particularly the transparency about who has verified you. I've used my own domain, a domain I acquired back in 2023 for this purpose, to verify my handle: nevillehobson.xyz. Why .xyz, you might ask? Because at the time the metaverse was a big deal, NFTs were hot, and everyone who was anyone had a domain ending in .xyz. So hey, that's a bandwagon I'll jump onto, which I did. So I'm now using it, have been for a while, and it's only used for that purpose currently. Something else to mention with Bluesky: you can't request verification. It's more that you are invited, in that suddenly you might get a note from them saying they have verified you, or one of these other organizations might. If you're on a domain with your employer, for instance, they can verify you. And there is something equally interesting on this; I'm not quite sure if it's just a trial that'll stay around or not, but you can actually verify yourself. I've [00:41:00] seen some people doing that. I haven't done it, because I can't see the point, 'cause the point of verification to me is trust: someone else has verified you, not you doing it yourself. So maybe that will disappear, or it'll have some other function, I don't know. But the transparency, according to the screenshots in Bluesky's announcement posts about this, is great. A very clear so-and-so is verified. It says: this account has a blue check because it's been verified by trusted sources. Then it lists who those sources are and the date they performed the verification. That adds lots to the trustworthiness you perceive, rather than just someone simply saying, yep, you're verified, here's a blue check. If you're a verified organization, you'll have a different style of check. And these will all become quite familiar; they're not complicated at all. So you are right in what you said earlier: verification isn't just a casual thing anymore. You need to have a strategy about who in your organization, if you're a [00:42:00] large organization in particular, gets verified, for what purpose, by whom, and we'll see that emerging as this picks up. But this is a great start. They do say, and this is going back to the domain, that you can self-verify with a domain. That's the only form of self-verification that makes sense, because to do it, you've got to make changes at your registrar in the DNS settings and a few other things, and also engage with Bluesky to do it. So, as they say, during this initial phase they're not accepting direct applications, as I mentioned.
But they do say that as this feature stabilizes, so I guess as all the excitement dies down and people see how it's all working, they'll launch a request form for notable and authentic accounts interested in becoming verified or becoming trusted verifiers. So during the course of 2025 we'll see this develop, and maybe it will become the kind of benchmark standard for verification on social networks like this. So it's interesting. Shel Holtz: We need a standard, and I'd like to see that [00:43:00] standard integrated with the fediverse standards, because these all ought to be interoperable. We really ought to be able to share a post in one place where we are verified and have that post show up wherever people have chosen to follow us from, and have that verification show up with us. And people should be able to click on that verification and see who vouched for us. They should be able to see that the spokesperson for my company was verified by me or by the CEO, and it all works together. Neville Hobson (2): I think that will emerge. Thinking about this cross-posting idea: it's been in place in a couple of places, but it's very, very flaky. I'm talking about things like, for instance, and this has been around for a while, at least a year if not longer, a plugin on WordPress that lets you publish your post and will then share it across the fediverse via a connection with Mastodon. And you've then got Threads doing the same thing, [00:44:00] but these require tweaks to your platform. Probably the one that shows you, if I can use this phrase again, the direction of travel is Ghost, the new platform, which I joined at the beginning of this year and which has just enabled, or recently enabled, the ability to share your posts with Bluesky. Now, Ghost has invested a lot of time, effort, and probably a bit of money too, I think, into its social web offering, which is in beta. And that's all to do with the ActivityPub protocol; Bluesky has a different protocol, AT Proto, yet this works from Ghost to Bluesky via a bridge. That's a little technical, and it has got to be just immediate-term usage whilst this plays out further. So someone like Ghost is making big inroads into enabling this kind of thing. And I would say we're gonna see a lot of activity [00:45:00] during 2025 from Mastodon in particular, as well as people like Ghost and others, to connect up these disparate elements of the fediverse so that we become more cohesive. But it's gonna take time. Shel Holtz: Yeah, the fediverse is nascent, but it's also, I think, inevitable. We've been talking for quite some time now about what is the successor to Twitter, now that X has become what it has become. And I'm not sure that there is a successor. I think that there are a number of places that people are attracted to. It could be Ghost, as much for its newsletter functionality as for its blogging functionality. It could be Threads, it could be Bluesky, it could be, you know, whatever. But as long as, wherever I am, I can follow who I want to follow and have that appear in the network that I have chosen, I'm good. So I think this is where things are headed, inevitably, since I think the days of somebody being able [00:46:00] to come along and say, I'm the new 800-pound gorilla of social networking, everybody's coming here, are over.
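A quick technical aside before the conversation moves on, for anyone wondering what the domain-based verification Neville described actually looks like under the hood: in the AT Protocol, a domain handle maps to an account identifier (a DID) either through a DNS TXT record at _atproto.&lt;handle&gt; or through a well-known HTTPS file served from the domain. Here is a short Python sketch of how an outside party can check that link, assuming the third-party dnspython package is installed; the handle used is simply the one mentioned in this episode.

```python
# Sketch of checking AT Protocol domain-handle verification from the outside.
# Per the AT Protocol handle-resolution spec, a handle like example.com maps
# to a DID via a DNS TXT record at _atproto.example.com ("did=did:plc:...")
# or via https://example.com/.well-known/atproto-did. Requires dnspython.
import urllib.request

import dns.resolver  # pip install dnspython

def resolve_handle_to_did(handle: str) -> str | None:
    # 1. DNS method: look for a TXT record at _atproto.<handle>
    try:
        for record in dns.resolver.resolve(f"_atproto.{handle}", "TXT"):
            text = record.to_text().strip('"')
            if text.startswith("did="):
                return text[len("did="):]
    except Exception:
        pass  # no usable TXT record; fall through to the HTTPS method
    # 2. HTTPS method: a well-known file served from the domain itself
    try:
        url = f"https://{handle}/.well-known/atproto-did"
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read().decode().strip()
    except Exception:
        return None

if __name__ == "__main__":
    # Prints a did:plc:... identifier if the record is in place
    print(resolve_handle_to_did("nevillehobson.xyz"))
```

This is why the process involves touching the registrar's DNS settings: the TXT record (or the well-known file) is the public, independently checkable link between the domain and the Bluesky account, which anyone, not just Bluesky, can verify.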
Neville Hobson (2): Yeah, it's been apparent that that's likely to be the case for a bit. I believe very much that the time is gone for monolithic, centralized social networks like Facebook, for instance. No, this is the time for niche networks. People can set things up themselves; it doesn't matter whether you've got 50 people on there or 50,000 people on there. And indeed, the recent outage on Bluesky is an interesting indicator of the fragility of all of this, and Dan's gonna talk about it a bit later in his report. But this is an interesting time. It's almost like things are maturing, it seems to me. And I think you're right when you say that people aren't so much attracted by the idea of a centralized place where, hey, we've all gotta go here. After the experience on X, you've got more people saying, I want to get outta here; where do I go? So we're still at that phase. And you've got something interesting with Trump's, not Trump's, Musk's Grok [00:47:00] developing chatbots, and all this stuff. So that's something interesting in that area too. So it's all a time for communicators to pay closer attention to what is happening here and the implications of it, just as you and I are doing. And if you don't wanna do that, that's fine. Just listen to FIR, 'cause we'll help you understand it. Yep. Okay. That's a really good report, Dan. Thank you. Good topics. You talked about Bluesky; I mentioned just before your report, actually, that the outage was unfortunate, but is it not an indicator of precisely that fragility I mentioned previously? On the different definitions of decentralization that you mentioned: I think that's possibly a communication issue, because people seem to be latching onto, hey, it's decentralized, when actually it's more like, it's going to be decentralized, 'cause that's the aspiration they're working towards, which is the case with Bluesky. That's very good. On Threads' move to .com and web improvements: I must admit I was a bit yawny about [00:48:00] that. You know, .net, .com, do I care as a user? Well, maybe I should, because I then read somewhere else that the move to .com enables Meta to do things that they can't do with a .net domain. And I'm sure you'll know more about that than me, Dan, at the Internet Society. Again, interesting developments with what's happening with all of this. So thanks for the report, Dan; this is really a good one. And let's shift gears slightly. I don't think this story I'm gonna talk about has got AI in it. Shel Holtz: Oh my God. Neville Hobson (2): Gotta have one, gonna be... Shel Holtz: Fined. Neville Hobson (2): So, let's shift focus, as I mentioned, to something that's critical for every business but often overlooked: how we bring new people into our organizations and set them up for success. It's called onboarding, right? The topic of onboarding is particularly timely right now, especially in digital marketing, where the pressure to deliver results is higher than ever. With digital marketing at the heart of [00:49:00] business communication strategies, every new hire represents not just an addition to a team but a critical investment in how a company presents itself, engages customers, and drives growth. Effective onboarding, therefore, isn't just about helping someone settle in; it's about ensuring they contribute meaningfully, quickly, and sustainably to an organization's broader success.
A recent feature in Search Engine Journal caught my attention, as it explored how digital marketing agencies are rethinking the onboarding experience. But whatever your business, and whether you're on the agency or client side, hiring great talent is only half the battle; keeping them is where the real challenge begins. The article highlights the critical role of structured onboarding in enhancing employee retention, productivity, and satisfaction within digital marketing agencies. One strong theme is the importance of starting onboarding before day one. Christie Hoyle, the COO at Kaizen Search, explains: Our process begins two weeks [00:50:00] before their official start date to ensure employees feel informed, prepared, and welcomed. This early engagement helps build confidence and sets expectations well before a new hire walks through the door. Zoe Blogg, director of operations at the SEO agency Reboot, highlights the importance of immersion during the first weeks. She says: Our process is designed to give new hires time to truly absorb how we work before they're expected to contribute. Human support systems play a key role too. Phil Dukowski, client services and CX director at SEO Sherpa, and Emma Welland, co-founder of House of Performers, both emphasize mentoring. As Emma puts it: We assign everyone a mentor as well as a manager to make sure they have multiple people to check in with and speak to. Technology is also critical. Agencies like Vivant use platforms such as Asana to structure onboarding flows. Bethan Ranford, general manager and head of paid media at Vivant, says: We use Asana across the [00:51:00] business and have a comprehensive onboarding flow, which all new starters are enrolled in. Meanwhile, Olivia Royce, operations director at the SEO agency Novos, explains how their structured 30-60-90 day onboarding plan breaks the early months into clear milestones aligned with probation periods. She says: We have a clear onboarding process in our task management system, which outlines who is responsible for what during the onboarding process. Beyond tools and timelines, emotional connection matters most. Emma Welland says: I fundamentally believe a good onboarding is judged by how you make someone feel. For us, making sure expectations are clear from day one is a big part of this. Shel Holtz: Yeah, I mean, onboarding, new hire orientation, call it what you will, it's vital. There is data suggesting that people tend to leave a job somewhere between one and three years into it, and you have to believe that if the onboarding had been effective, those numbers [00:52:00] would drop. And there is so much wrong in what I see so many companies doing with their onboarding. I mean, the typical thing is you have a new hire orientation the day you start, and then you're just thrown into the deep end. And how much can you really retain on your first day? You're overwhelmed your first day. You're lucky if you remember what day payday is, how to record your time, what the work hours are, and what the deal is with the parking lot. So I like the 30-60-90 day approach. In fact, where I work, we are in the process of migrating to a new internal communications platform; we're consolidating several separate tools into one. But one thing it lets you do is target individuals to a different homepage. And one of the things we're going to do in phase two is have a homepage for people who are there from their first day to their 30th day.
Another [00:53:00] homepage for people who are there from their 30th day to their 60th, and a third one for people who are there from their 60th to their 90th, just surfacing those milestones and the kind of information they need at each stage, while still providing them the navigation to the same resources that everybody else needs. But yeah, I've heard so many different great approaches to this. I think it was Coca-Cola that had essentially a report card, and it had a list, and it said: in your first week, you need to go talk to these three people about these three things. And when you did, the people you needed to talk to signed off, and you had to have everything signed off at the end of a 90-day period, meaning you've met all of these people, gotten to know them, they've gotten to know you, you've learned from them, and you've built that connection and started the relationship. And that speaks to the emotional connection that the report you referenced addressed. Companies need to invest the time, energy, and [00:54:00] money in onboarding if they don't wanna lose these people after they've been around for a year or two. That's what it comes down to, because replacing somebody is, I guarantee you, gonna cost a whole lot more than what it's going to cost to do an effective new hire orientation. Neville Hobson (2): And this is talked about a lot, isn't it, Shel? Such as the examples I've mentioned from those individuals at those digital marketing agencies. But as you pointed out, so many companies don't do anything beyond: hey, welcome, here's your desk, here's your password for your email, off you go. There are some great approaches here. So if someone says, why do we need to talk about this? Well, I think we just explained why. This is key. As people keep saying, people are the most essential resource in our company; you just read the general newspapers to get a feel for the kind of dilemma across the board, literally. This is not just to do with digital marketing agencies, as I mentioned at the beginning; this applies to almost any organization where you [00:55:00] wanna retain people. Obviously the package they get, remuneration and benefits, all that is part of it, of course, but so is how you treat them and make them feel valued. I'm reminded of my only experience of this with recent relevance, which was when I went to work for IBM, a decade ago now. I started at the beginning of 2016, but for two months prior to that I had a lot of contact with HR and others in IBM to familiarize myself with how IBM worked at the time. And boy, that was difficult to figure out back then, but they were very much on the ball with this, a decade ago. And many companies, you mentioned Coca-Cola, I'm sure this is not alien to them, but it probably is alien to lots of companies as well. So I hope this helps people if they're looking to assess and set their procedures and processes; there are some good tips here from these folks I mentioned. Shel Holtz: Yeah, yeah. There are so many good ideas you can research on how to do a [00:56:00] good onboarding program. You referenced the idea of a mentor being assigned to every new hire. I like, in companies that are large enough that there's a cohort of new hires, maybe 10 or 20 per month,
to have them go through all of these things as a cohort, so they get to know each other and become a resource to one another. You know, it can be embarrassing to reach out to somebody who's been with the company for 18 years and ask something really basic that you think sounds stupid, but to reach out to somebody who started within three or four days of when you did and ask, have you figured this out yet? That's just fine. And I know that when I worked for the pharma that I used to work for, after you'd been there a year, that cohort got together in a meeting with the CEO and the president, who talked about, you know, things we want you to know about now that you've been here a year, in terms of culture and direction, but we also want to answer your questions and hear [00:57:00] your concerns. And I gotta tell you, that goes a long way, it does, toward building that relationship and building that trust in the leadership of the organization. And I think it's a really good idea. There is an opportunity for communicators to inject themselves into what is usually seen as an HR process, because this is all about knowledge transfer and information sharing. Neville Hobson (2): Good stuff. Shel Holtz: Don't abdicate the responsibility that communicators have to participate in this process. Well, I've been digging into a new Global Communication Report from the University of Southern California's Annenberg Center for Public Relations. You'll like the title of this one, Neville: it's called Mind the Gap. The gap referenced is the one that exists between generations, even though the logo is the one that's used for the Tube in London. It's not like we haven't had a ton of research about generational differences, but this one had some revelations. Lemme start with the big picture. The PR industry is [00:58:00] experiencing what the report calls unprecedented upheaval, driven by four major forces: artificial intelligence (surprise!), hybrid work, the changing media landscape, and political polarization. Those are all topics we address pretty routinely here on FIR. The report examines these forces through a generational lens, looking at how perspectives differ across Gen Z, millennials, Gen X, and boomers in the US, Neville. The researchers surveyed over a thousand public relations professionals this past January, and despite all the disruption we're facing, 74% of respondents expressed a positive outlook on the industry's prospects. Only 11% had a negative view. The optimism spans all generations; that was encouraging. But dig a little deeper and you start to find those gaps in how different age groups are approaching these changes. Let's start with AI, which the report ranks as the most impactful trend. [00:59:00] About 60% of respondents believe AI will positively affect PR, but the confidence level varies dramatically by age. Nearly three quarters of Gen Z professionals say AI will make their jobs easier, compared to just over half of Gen X and boomers. So the older you get, the more skeptical you get about the new technology. The gaps get even wider when you look at specific predictions: 24% of Gen Z practitioners strongly believe AI will generate most of the content currently created by humans, compared to just 8% of Gen X and 4% of boomers. That's a 20-point gap between the youngest and oldest cohorts.
One thing that struck me was a story in the report about a grad student who developed a business plan for an AI-only PR firm that would charge clients just $15.99 a month. Most agency veterans, and people who've been around a long time like me, are inclined to dismiss this as fantasy, but pretty clearly [01:00:00] it's a reminder that the next generation sees AI's potential very differently. Now, when it comes to hybrid and remote work, we're seeing another significant divide: 72% of Gen Z says remote work makes their job easier, compared to just 39% of boomers. That's a 33-point gap. And guess which generations all those CEOs demanding a return to the office belong to? What's really telling is that 47% of Gen Z practitioners would take a pay cut to work from home, while only 25% of Gen X and 22% of boomers would do the same. For young professionals today, flexibility isn't a perk; it's an expectation. Despite these personal preferences, 74% of PR professionals in mid-level or higher positions say they would hire talented candidates regardless of location. This suggests remote work is here to stay, however [01:01:00] older executives might personally feel about it. The changing media landscape presents maybe the most fascinating generation gap. Gen Z is the only generation that feels more positive than negative about how media changes will affect their day-to-day work. They're also far more bullish on podcasts, social media, and influencer marketing than their older colleagues. The report points out that 65% of Gen Z believes social media will be very relevant to PR by 2030, compared to just 47% of boomers. When asked about the most effective marketing strategies, a viral social media campaign tops everyone's list, but Gen Z places far less value on traditional newspaper coverage than older generations. I found particularly striking the report's finding about credibility: when asked which generation is best informed about [01:02:00] political, social, and current events, every age group ranked themselves first. This kind of mutual skepticism presents a real challenge for cross-generational collaboration. The report's findings on corporate purpose and social issues are especially noteworthy. Over the past three years, the percentage of PR professionals who believe companies have a responsibility to address social issues has nosedived, from 89% in 2023 to 52% today. But here again, there's a stark generational divide: three quarters of Gen Z still believes in corporate purpose, while less than half of older practitioners do. As the report puts it, younger communicators are still serious about corporate purpose while the older ones are losing their conviction. This plays out in job preferences too. When deciding whether to work for an organization, Gen Z values inclusion initiatives at nearly double the rate of Gen X, and they hold much [01:03:00] stronger opinions about refusing to work for companies with negative environmental impacts. So what does all this mean for communicators? Well, the report concludes that PR is entering a period of major disruption that will redefine it over the next decade. But as the report suggests, we don't have to close the gaps; we just need to recognize that each generation reacts to change differently based on its own life experiences. For those of us who have been in the business a while, this means we need to be open to new approaches.
The report offers this advice to foster innovation and collaboration in this new world order: older generations will need to embrace change more rapidly, find common ground more easily, and get out of the way more often. Meanwhile, for the up-and-comers in our field, the report recommends developing proficiency with AI tools, mastering content creation, honing soft skills, and preparing for polarization by vetting ideas with people who hold [01:04:00] different opinions. I think Fred Cook, the director of the Center for Public Relations, summed it up perfectly. Here's what he said: The future of the PR industry depends upon how tomorrow's leaders tackle the critical issues we're beginning to face today. Not bound by tradition, Gen Z seems equipped and eager to confront those challenges. If we educate and support them on this mission, our profession will be in good hands. Neville Hobson (2): And that's the burning topic, isn't it: to support people. I mean, glancing through the report as I was while you were talking about the findings, it's quite clear that the younger you are, the more likely you are to embrace new thinking, new ideas; the older you are, the less likely. Shel Holtz: Yeah, the more set in your ways. That plays out. Neville Hobson (2): Yeah. I mean, it probably plays out against traditional political divides too, I would imagine. So, for instance, I was really just glancing at the part about organizations taking a stand on issues that aren't necessarily directly involved [01:05:00] with their business, making statements about what they think about this event happening, or this idea being wrong or right, or whatever. And there's a huge majority of Gen Z saying it's very important that they do this. How does that play out in reality? Particularly thinking about the fact that this survey was conducted prior to its publication in March, and since then we've had this kind of metaphorical nuclear explosion with Trump's tariffs, where it's still unclear what's gonna happen next with the stock markets and everything else, the stuff you see visibly in your daily news consumption: share prices up, share prices down. The market conditions here are not good. What are companies gonna do? What does it mean for us? People are pausing in so many areas, and it's basically uncertainty as to what on earth is going on and the effect it's gonna have. I wonder, would [01:06:00] that have made a difference if they'd been asked these questions now? I don't know is the answer. And there is so much in here that is typical of what we have seen in the past in generational-comparison-type surveys. Yet, as they say in the financial community, past performance is no guarantee of what's gonna come in the future. So it might be worth looking at this through that lens: it's worth analyzing and examining, particularly paying attention to those segments of the generations who are more willing to accept new ideas and to drive forward new thinking. That's what needs more support. Now, I don't know how we do that, though, because the reality is that in most, let's say, agencies certainly, but also in organizations where there is a PR function, the more senior you are, the older you are. And are many of those older generations, not necessarily the boomers but certainly the millennials, willing to change?
I'd like to think they might be, not as a mass thing, but [01:07:00] more often than not, perhaps. So this to me suggests almost a blueprint, if you like, a kind of building blueprint of where things need to go, how they need to change, and what tools you've got, meaning the people, to help you implement that change, if you are in fact gonna be the catalyst for change. Tricky one. Difficult, but we've gotta do something, have we not? Shel Holtz: It's gonna be a long eight months waiting for the 2026 Edelman Trust Barometer, because remember, for the last several years the Trust Barometer has pointed out that people expect business to deal with societal issues because business is the only institution they trust enough to do that. Is that still true in the increasingly polarized environment we've seen just since January of this year? And of course, for the report that was issued in January of this year, all the research was done before Trump took office. So is there still that expectation? Or, as we see, for [01:08:00] example, the big tech companies accommodate Trump in order to avoid regulation and the other problems that result from bucking the Trump agenda, do people still have that degree of trust in business, and do they still have that expectation? And if businesses just back off of conveying their views on societal issues, and the actions they think need to be taken, and the actions they take in support of that, are people gonna stop doing business with them? Or is it simply a matter of, if nobody's out there doing it, or only Ben & Jerry's and Patagonia are doing it, then we have to buy from somebody? I remember, though, David Armano writing frequently about how you need to put a stake in the ground about what you believe in and what you stand for. It matters. It's still gonna matter to Gen Z, where they go to work, unless the market changes to the point where it's completely a buyer's market [01:09:00] and you feel lucky to get a job offer from anybody. Short of that, purpose is still important if you want to get the best people coming up outta school. Neville Hobson (2): Yeah, I agree with that a hundred percent. So it's a time of great uncertainty, and taking a stand isn't necessarily what people will want to do, but they need to articulate and express what they're thinking, maybe in different ways than taking a stand; that always sounds confrontational to me, I'm gonna take a stand about something, but there are other ways to do this that are perhaps less likely to meet resistance. Trouble is, the polarization, certainly in the United States, seems to me to be almost beyond the point of fixing. I was reading a report over the weekend, and I've forgotten the magazine it was in. It was long; I mean, it was a 15-minute read, they advertised it as, analyzing Trump's press secretary, Karoline Leavitt, who is just 27 years [01:10:00] old. But that article's analysis of her as representative of her generational cohort is striking. It truly is. The passion she has for the political journey she's on is quite clear, and she is confident and able to convince people. I'm looking on a much smaller scale over here in this country.
In the UK, we have local elections next week, where polling is suggesting that the attitudes of people who've been polled, and again these are small numbers, not national, for the local councils, the mayors, and so forth in cities, are almost like people saying: I don't care who I vote for, as long as it's not anyone connected with a major political party. So you're looking at the independents and the small parties who are never really ever gonna win power, but they could be the kind of linchpins in who does. And you think, okay, we've heard that before a lot, but this is the first time it seems to be [01:11:00] coalescing around an ideal that many people can buy into. That says a lot about the political structure. You're seeing similar things happening in some other European countries, so it's like a wave everywhere, and in the US it's manifesting itself in what we're seeing there. Canada, look what's happening there: they have federal elections Monday, that's tomorrow as we're recording this. And so what impact will that have if the political structure doesn't shift from the Liberals in Canada to the Conservatives, who are more aligned with Trump? They don't seem to like Trump either. So, I mean, one thing you could say Trump has succeeded in is literally uniting everyone who was in disunity. Trump's tariffs have forced the UK and the EU closer together than otherwise would've been the case. Interesting, I think, Shel. So who knows what's gonna happen the rest of 2025? I don't see peace and quiet descending anytime soon. Shel Holtz: No. Neville Hobson (2): Okay, so let's get back to AI. This topic is related to [01:12:00] what we've talked about before, and I'll mention that as I outline the story here. The idea of AI tools supporting workers is nothing new; we've been talking about this quite a bit. But what about AI agents acting as workers themselves? And yes, we have talked about that too. But this is about Jason Clinton, the chief information security officer at Anthropic, the maker of the Claude chatbot, who believes that's exactly where we're heading. He told Axios that within just a year, we could see virtual AI employees embedded inside organizations, complete with their own corporate accounts, passwords, memories, and defined roles. Clinton warns that this will force companies to rethink cybersecurity and access controls, raising difficult questions about visibility, accountability, and responsibility if AI agents go rogue. That's a hell of a picture he's painting here, I must admit. He says: in that world, there are so many problems that we haven't solved yet from a security perspective that we need to solve. Meanwhile, Anthropic CEO Dario [01:13:00] Amodei added another bold prediction: that we are only three to six months away from AI writing 90% of software code, and within 12 months AI could be writing nearly all of it. His remarks, reported in Inc., have drawn skepticism, with some suggesting that while AI will certainly reshape coding, it won't replace human developers entirely. Still, the direction of travel is clear: AI is moving from a support tool to something much more autonomous. If AI employees take on real responsibilities, who manages them? Who is accountable if they make mistakes? Well, this builds directly on the conversation you and I had earlier this month,
Shel, in episode 458, which I've mentioned at least twice in this episode so far, where we explored the challenges of preparing managers to lead hybrid human-and-AI teams. Managers will soon be asked to lead not only people but also AI agents that autonomously perform multiple tasks. And it's not just about leadership. [01:14:00] Organizations must prepare their employees too, helping everyone understand what it actually means to work alongside AI colleagues in meetings, projects, and communication. And communicators will have a vital role to play: shaping the narrative, guiding expectations, and making the future tangible for everyone. However fast this transformation comes, it's clear that the future of work won't just be about humans adapting to AI; it will be about organizations adapting their cultures, structures, and expectations too. So let me ask the question we posed in episode 458 and that we've mentioned in this episode too: are we really ready for a workplace where AI isn't just assisting, it's acting as a full team member? Shel Holtz: Yeah, clearly we're not ready. I wrote a whole post about this on LinkedIn, an article about what managers are going to have to do to prepare for all of this. It was based on the question you asked during that episode, Neville. You said, we're talking about the fact that we're not ready, but we're not talking [01:15:00] about what we need to do to get ready. So I gave that a lot of thought and wrote an article in response to it. But I also question whether we need to be all that ready right now, because the fact that AI agents capable of performing the duties of a full-time employee are going to be available in a year doesn't mean you're gonna see a rash of companies suddenly hiring them. You are going to see a fairly normal progression. Look, there was an essay published just on April 15th, not that long ago, written by Arvind Narayanan and Sayash Kapoor and published by the Knight First Amendment Institute at Columbia University; that's the university that suspended the student who did the cheating program with AI. But I love the title and the subtitle of this piece: it's AI as Normal Technology, an alternative to the vision of [01:16:00] AI as a potential superintelligence. This is a very, very long essay; I heard one of the co-authors interviewed on a podcast, and he said it's going to be blown out into a full book. But they have one chart that really tells the story, and it's this: you have the invention, then you have innovation that emerges from the invention, and then you have diffusion. And diffusion has two parts: early adoption and adaptation. And this takes time. They say it doesn't matter what the technology is; it doesn't matter how
You’ll probably end up with a team of people doing a simulation with an AI employee to see how it goes, to identify the areas of risk and the things it does well and doesn’t do well. Then you’ll probably see one introduced to a single low-risk team doing real-world work, and slowly it will be employed by the organization across all departments. But that’ll take years. So when they say these are gonna be available in a year, it doesn’t mean they’re gonna be used in a year. I think there are gonna be very, very few companies that say, yeah, we’re just gonna stop hiring, or we’re gonna start firing, and we’re gonna have AI employees come in and do all this stuff. I mean, who wants to do that with the first model of any technology? [01:18:00] Neville Hobson: The early adopters and the people who don’t care about the consequences, I suspect. Yeah. Shel Holtz: I think one of the things this chart points to, and this goes back to Geoffrey Moore in Crossing the Chasm, right, it’s the same kind of timeline: when we think about AI as normal technology, as amazing as it is, and as much as it can do, curing diseases and identifying novel drugs and all of these things, its diffusion into society is going to take a lot of time. Neville Hobson: And I think the picture people miss, or rather don’t interpret correctly, is that a lot of what you see being discussed online or written about in journals is, whether you realize it or not, about the mass adoption of all of this. When they talk about the timeframes, it’s a bit like the Gartner hype cycles, which talk about the plateau and, you know, widespread use in society. That is mass [01:19:00] adoption. We’re not talking about that here, and I don’t think we should. If I were in a large organization, and use yours as an example, Shel, you’ve got an AI task force there, what I’d be looking at, the focus, would be: what is relevant to us? What do we need to do for us right now? Yes, I’m aware of the big picture, all the predictions and so forth, but how do we get ready in our timeframe to do these things based on what we know now? So experimentation is clearly what you’ve gotta do. What does it actually mean in practice to execute on something described like this, an AI virtual employee? We need to think for ourselves: that is really, I think, the important thing for organizations, with groups of people looking into where all this is going in the near or far future. But it’s really what it means to you in your organization that matters, more than the big picture. Shel Holtz: I think the process of getting ready is not something that you have to do overnight. I think the process of getting ready [01:20:00] is going to come from that experimentation; it’s going to come from the processes that organizations implement now. I think you’re gonna have problems with organizations that decide, we’re not going to pursue this, or we’re not going to commit the resources necessary to do it well or do it right. But by and large, I think organizations will tackle this as they tackle any technology. I mean, look how long it took the web to infiltrate business, but everybody’s there now. So it’ll happen. It will. And that’ll wrap up this episode of For Immediate Release. Our next monthly episode will drop on Monday, May 26th.
We’re planning to record that on Saturday the 24th. In the meantime, man, I just loved all the comments we got for this episode. Terrific. We would love your comments on the stories we’ve reported today, as well as on the shorter midweek episodes that will be coming between now and May 26th. You can [01:21:00] send us an email at fircomments@gmail.com. Send us up to three minutes of audio and we’ll play it; you can record that audio by clicking the Send Voicemail tab on the right-hand side of the website. You can leave comments on the FIR Podcast Network website, where the show notes page has a place for comments, and you can also leave comments on Facebook, LinkedIn, Bluesky, or Threads, where we share links to the show notes. The FIR community on Facebook is one other place you can leave them; we appreciate those comments too. And your reviews and ratings are deeply appreciated, since they help people discover this show. So until next month, that’s a 30 for For Immediate Release. The post FIR #462: Cheaters Never Prosper (Unless They’re Paid $5 Million for Their Tool) appeared first on FIR Podcast Network.
Apr 24, 2025 • 21min

FIR #461: YouTube Trends Toward Virtual Influencers and AI-Generated Videos

Videos from virtual influencers are on the rise, according to a report from YouTube. And AI will play a significant role in the service’s offerings, with every video uploaded to the platform potentially dubbed into every spoken language, with the speaker’s lips reanimated to sync with the words they are speaking. Meanwhile, the growing flood of AI-generated content presents YouTube with a challenge: protecting copyright while maintaining a steady stream of new content. In this short midweek FIR episode, Neville and Shel examine the trends and discuss their implications. Links from this episode: YouTube Culture & Trends – Data and Cultural Analysis for You YouTube Looks to Creators (and Their Data) to Win in the AI Era YouTube Publishes New Insights Into the Rise of Virtual Influencers The next monthly, long-form episode of FIR will drop on Monday, April 28. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript: Shel Holtz: [00:00:00] Hi everybody, and welcome to episode number 461 of For Immediate Release. I’m Shel Holtz. Neville Hobson: And I’m Neville Hobson. This month marks 20 years since the first video was uploaded to YouTube, a 19-second clip that launched a global platform now at the center of digital media. As the platform reflects on its past, it’s also looking sharply ahead, and what lies on the horizon is a bold AI-powered future, highlighted in two reports published in the past week. According to YouTube’s leadership, we’re five years away from a world where every video uploaded to the platform could be automatically dubbed into every spoken language. More than that, the dubbed voice will sound like the original speaker, with AI-generated lip movements tailored to match the target language. It’s a vision of seamless global accessibility where creators can invest once and reach audiences everywhere. [00:01:00] This isn’t speculative: YouTube is already piloting dubbing tech with hundreds of thousands of creators and experimenting with voice cloning and lip reanimation. But with that ambition comes a fair amount of controversy. Underpinning these features is Google’s Gemini AI model, trained on an ocean of YouTube videos, many from creators who weren’t aware their content was being used this way. Some have pushed back, arguing that a license granted under YouTube’s terms of service doesn’t equate to informed consent for AI training. At the same time, YouTube’s 2025 trends report highlights the rise of virtual influencers, synthetic personas who are building large audiences and changing what authentic content looks like. For a growing number of viewers, it doesn’t seem to matter whether the face on screen is real, generated, or somewhere in between. What emerges is a picture of a platform trying to empower creators with powerful tools while quietly shifting the [00:02:00] ground beneath their feet, culturally and ethically.
On one hand, a report by Bloomberg paints a picture of YouTube as a tech powerhouse using AI to expand creative reach, drive viewership, and reshape media, but not without controversy over how training data is sourced, especially from creators unaware that their content fuels these advancements. On the other hand, Social Media Today’s take focuses more on the cultural shift: AI-generated influencers, fan-created content, and multi-format storytelling are changing the rules of what audiences find compelling and raising questions about the very definition of authentic content. Both views converge on the same point: AI is here to stay, and whether you are excited or concerned, it’s reshaping the creator economy from top to bottom. So is this YouTube fulfilling its mission to democratize creativity through technology? Or is it becoming a platform where the line between creator and content becomes so blurred [00:03:00] that the original human touch gets lost? We should unpack this. There’s quite a bit here to talk about, isn’t there? Shel Holtz: There is, and it seems to me a relatively natural evolution for YouTube. As long as creators are able to upload what they want, I think you will find plenty of authentic content. There’s going to be no shortage of people who want to talk into a camera and share that, people who develop themes that they think people would be interested in. I love harkening back to a story I read about a physics grad student who started a YouTube series called Physics for Girls. It was aimed at the K-through-12 [00:04:00] cohort of students, trying to get them interested in the STEM sciences, and it became very popular; she was making, I think I read, a million dollars a year in advertising revenue. I don’t think that’ll stop. I think people will be able to continue to do that. What you see is a platform where there are no limits, no constraints on how many gigabytes of video data can be uploaded; they just keep expanding their data center capacity. So there’s room for all of this other stuff, including the AI-generated content. And as long as it’s entertaining or informative, if it serves a purpose, people will watch it. And that’s the thing: if it’s crap, people aren’t gonna watch it, it’s not gonna get recommended, it won’t find its way into the algorithm, and people will stop spending time creating it if it doesn’t produce the kind of results they’re looking for. But we’ve already seen that virtual influencers [00:05:00] work on both sides of the equation. You can tailor them to be exactly what you know your audience is looking for, so it’s great for the consumer. And in terms of the brand or the advertiser, you don’t have these loose-cannon celebrities that you’re using, or somebody who’s just a professional influencer who goes off the rails; you’re in complete control. So, you know, it’s not my favorite concept, but I don’t see any way to slow it down, and I think the people behind them are gonna continue to find ways to make them resonate with the people they’re aiming them at. And in terms of the training of AI models on all of this, you know, right now you have an administration in Washington, DC that is agreeable to the approach that the AI companies, OpenAI [00:06:00] and the like, want the government to take, which is to just put an end to this whole intellectual property thing and say AI can train on anything it wants to.
So I think that’s probably coming. God knows Elon Musk is training Grok on all of the content that is shared on X, and if you have an account there, that’s your implicit permission to let him do that. It’s one of the reasons he went ahead and bought X in the first place: knowing he had access to that treasure trove of data. So I don’t see that slowing down either, and I don’t see the fact that people are unhappy that their content is being used for training being an impediment to having that content used as training. It’s gonna continue to happen. Neville Hobson: That’s part of what worries me a lot about this, I must admit. Taking the Bloomberg report, [00:07:00] there’s this idea of auto-dubbing videos into every spoken language. We’ve talked about this before, not what YouTube’s doing, but the notion of it: the example you often give of the CEO of a company giving an all-employee address. He’s an American, a native English speaker, and yet there’s a version in 23 other languages, like Urdu or Hindi or Spanish even, or Mongolian, perhaps, if they have offices in Ulaanbaatar or something, that shows him fluently speaking all of those languages. I’ve always believed, and I still do, that that’s misleading unless you are very transparent, which in fact adds to your burden of engaging with employees if you’ve gotta explain every time that he’s not fluent and this is not really him speaking Hindi: an AI has done it, or however you might frame it. That’s not gonna stop, though. Your point I agree with as well: [00:08:00] most people probably won’t really care about this. I mean, I count myself as a creator in terms of the very tiny bits of content I put up on my YouTube channel, which isn’t a lot; it’s not a regular cadence, it’s now and again. And if I found versions in a native language of Bolivia, for instance, would I care? Well, only in the sense of: is it reflecting exactly what I said in English? And you have to assume that it’s gonna be doing that. But that’s not the point to me, really; they’ve gone ahead and done it without permission. There will be people who don’t want this to happen to their content, but the Ts and Cs say they can do this, and if you don’t like it, you’re gonna have to stop using YouTube. That’s the reality of life, I think. But there are a couple of things, though. I think, you know, Google wants creators to use its AI, i.e. Gemini, to create, edit, market, and [00:09:00] analyze the content they create, and you may not want to use Gemini. You’ve got the training element, where Google is assuming it’s okay to use your content to do things like that. It aligns with their terms of service, they say, but trust isn’t in that equation as far as critics are concerned. The voice cloning and lip reanimation technology is amazing, I have to say, and according to Bloomberg, YouTube’s already testing multilingual dubbing in eight languages with plans to expand that. Voice cloning and lip reanimation to mimic native-language speech are in pilot phases. So all this is coming, without doubt. It is interesting, but there are some downsides in all of that. According to Bloomberg, dubbing will reduce CPMs when moving from English to other languages.
You’ve got that to take into account too. But expanding reach to new language audiences may ultimately increase total revenue, if it’s monetization you’re looking at. So [00:10:00] YouTube says it thinks quality content, to your point, will still rise above the growing flood of AI-generated deepfake material. I guess that’s part of what we call AI slop these days, right? So there’s that, which of course leads you straight into the other bits about virtual influencers. Just from a casual look, and I was doing this this morning, my time, before we were recording, coming across examples of what people are writing about and publishing, with photos and videos of people, in the highest resolution you want: I swear you cannot tell if it’s a real human being or an AI-generated video. Will that matter? At the end of the day, I think it probably comes down to: do you feel hoodwinked when you find out it’s an AI when you thought it was a person? And there have been a few surveys out recently, and this is a tangential connection to this topic, of people who are [00:11:00] building relationships with AIs. They’re getting intimate with them, and I don’t mean what we might obviously think intimate means, but developing emotional bonds with an AI-generated persona. So there’s great risk there, I think, of misuse of this technology. And I’m not going down the rabbit hole of the idea that it’s all a conspiracy and they’re out to steal our data and confuse us; no, it’s not that. But there’s great risk, I think, of opacity. Forget about transparency; this is completely the opposite of that. It has issues, in my view, that we ought to try to be clearer about rather than just giving the likes of Google and others carte blanche to do what the hell they want without any regulation, which unfortunately seems to be aligned with Mr. Trump and his gang in Washington, who [00:12:00] don’t seem to care about any of this stuff at all. In which case, tech companies, if you listen to some of the strong critics, are rubbing their hands with glee at what they’re gonna be able to do now without any oversight. And therein is the issue. I’m not saying that’s something we should therefore, you know, get out our pitchforks and shovels and march on Washington about, but it’s a concern, right? I mean, this is a major development. The virtual influencers, I think, are an exciting idea, but the risks of misuse are huge, in my view. So I’m just having a yes-but moment here, basically. I don’t normally do this, Shel; I’m normally embracing all this stuff straight away, but there are big alarm bells ringing in my mind about some of the stuff that’s happening. Shel Holtz: Well, I think a lot of it is going to be contingent upon what we become accustomed to. As you become accustomed to things, they just become normalized and you don’t give them a second thought. There was a TV commercial, [00:13:00] and I’m gonna have to see if I can find it (they must have it on YouTube), even though this had to be 20, maybe 25 years ago. I believe it was an IBM commercial. It was a great commercial, by the way; that’s why I remember it so many years later. It was either black and white or sort of sepia-toned.
It was in a dusty old diner out in the middle of nowhere, with a waitress behind the counter and nobody there. One guy wanders in and sits down. He asks for something, I don’t remember what, but they don’t have it; they don’t have this, they don’t have that. Then he sees the TV and says, what do you have on TV? And she says, every movie and television show ever made, in any language you want to hear it in. Talking about the future of technology, right? If you get to a point where anything you wanna see [00:14:00] is available in your language, does it continue to be an ethical question when you see your CEO, who doesn’t speak your language, speaking to you in your language? Or is this just something we all accept, that the technology does this for everything now, and it doesn’t matter whether he speaks your language or not: he can, because of the technology? Now, I’m not saying I’m promoting that as an approach to take today. From an ethics standpoint, I think you do need to let people know: we think it’s gonna be a lot easier and more meaningful for you to hear the CEO speak in your native language. Mm-hmm. But he doesn’t speak it; this was AI assisting with this. But in five years, when everything [00:15:00] is handled that way, will it even matter? I suspect that it won’t. I suspect it won’t matter whether somebody speaks that language when you know that any media you consume can be consumed in your native language, thanks to the technology that we all take for granted at that point. Neville Hobson: Hmm. That’s a sound assessment, and you may well be right; I suspect that much of what you said will likely come to pass. I just think there are concerns we ought to be paying more attention to than we seem to be. So, for instance, one big thing to me, and I guess it’s kind of related to the ethical debate, is: what does real mean anymore? What does authenticity mean now? It doesn’t mean what it meant yesterday if you’ve got virtual influencers creating videos and you don’t know that that’s not a real person. Things like that. Shel Holtz: Keep in mind that I was sold Sugar Frosted Flakes by Tony the Tiger, who was not a real person, or even a real tiger. Neville Hobson: But they weren’t pretending he was, or making you assume that he probably was. That’s the only difference. But this is the thing. Shel Holtz: This is the modern equivalent. Neville Hobson: Well, Tony the Tiger... [00:16:00] Shel Holtz: Yeah. The virtual influencers I’ve seen so far are obvious. I have not seen one where they have worked really, really hard to convince you that this is anything but a virtual influencer, and on Instagram, at least, most of the ones I see carry the disclosure that they are. I just don’t think people care. If they’re getting good information, if they’re being entertained... you know, are you not entertained? If you are, you’ll continue to watch. And if somebody says, you know, that’s AI, your answer’s gonna be, okay. Neville Hobson: So I get that, but I think we have a responsibility to point out certain things, whether people care or not. That’s part of our responsibility as communicators. So yes, hence my point about what real means and how we define real now.
And I think the kind of bigger worry waiting in the wings is [00:17:00] the fakery that we see everywhere. It’s getting even easier to do this kind of thing: deepfakes, or whatever they’re now called. That’s been off the radar for a bit now, but suddenly you’ll see something. I mean, I haven’t seen anything myself, but I did read this morning that there are already videos around of Pope Francis, who died on Monday, in which, according to these videos, he’s out there speaking and doing all these events and so forth. That will confuse some people. And this is, I think, the grave risk: not the technology, but what people will do with it. I’m not suggesting for a second that because of that we therefore shouldn’t do X and Y and so forth, not at all. But we need to address these concerns, and indeed the unspoken concerns, before they become a problem, or at least make people aware. [00:18:00] And that has a lot to do with the awareness campaigns we’re already seeing from governments everywhere. Here in the UK, for instance, I see government ads across every social network now and again about checking the veracity, checking the authenticity, of things and people and products that people are pitching and so forth. That will ramp up, no doubt, in which case there’s an opportunity for communicators for that kind of education. So it perhaps will come down to that: the ethical debate on training, on consent, on people’s rights, intellectual property, whatever. As for the government in Washington, DC: I mean, the situation with Trump and his sycophants, as I call them, is, well, more than a blip. It has made a huge change around the world that no one could have predicted. Whatever you think about Trump, you’ve gotta give it to him in one sense: he [00:19:00] has forced huge change on almost every country around the world. So I see things being discussed now that we would never have dreamt these politicians would be suggesting if Trump were not on the scene. That’s a big impact in all of this, and it’s hard to predict what effect that’s gonna have on something like this. But I think the concerns of people about training, for example, using their content without permission, and, as a related thing to other conversations, human beings worried about being replaced by the AI: that’s not a separate or suddenly new thing, but it just reinforces, in my view, that we need to address all of these things. We need to show that we have people’s backs in their concerns about this, and that we’re gonna help them understand it if we can. That’s our job as communicators, it seems to me. Shel Holtz: Yes, in addition to creating some of this content. Neville Hobson: Oh, indeed. [00:20:00] Shel Holtz: That’ll be a 30 for this episode of For Immediate Release. The post FIR #461: YouTube Trends Toward Virtual Influencers and AI-Generated Videos appeared first on FIR Podcast Network.
Apr 23, 2025 • 36min

Zora Artis on Bridging AI and Human Connection in Internal Communication

Zora Artis is a leading voice in strategic internal communication with a passion for how IC can lead the integration of AI into the workplace in ways that reinforce, rather than replace, human connection. In this FIR Interview, Neville Hobson and Shel Holtz speak with Zora about how artificial intelligence is reshaping internal communication, prompting a strategic transformation in the profession. The conversation builds on Zora’s article on the Poppulo blog in March 2025, “Bridging AI and Human Connection: What’s Possible for Internal Communication,” and draws on her experience facilitating a global roundtable debate of senior communicators in The Hague. Zora challenges the narrative that AI erodes empathy or replaces people. Instead, she explores how AI, when used intentionally and ethically, can support personalisation, amplify employee voice, and help communicators focus more on strategic value and less on repetitive tasks. The discussion also examines examples of AI in action: from internal GPTs (large language models that use deep learning to generate human-like text and content) trained on leadership content, to custom AI advisors embedded in daily workflows. But with opportunity comes risk, and Zora highlights the need for governance, inclusivity, and ethical clarity in how AI is used within organisations. Discussion Highlights: Why communicators are central to bridging the trust gap between leaders and employees on AI adoption. What it means to treat AI as a collaborator, not just a tool. How AI can enhance messaging effectiveness and employee understanding. The ethical risks of bias, overreach, and unrealistic expectations. What internal communicators should do now to stay relevant: shift mindset, experiment, and lead. About Our Conversation Partner Zora Artis, GAICD, IABC Fellow, SCMP, is a strategist, advisor, and coach specialising in alignment, communication, and leadership. She is the CEO of Artis Advisory, co-founder of The Alignment People, and a partner in Mirror Mirror Alignment. She helps leaders and teams cut through complexity to build clarity, cohesion, and high performance. Zora is a passionate advocate for responsible human–AI collaboration and the evolving role of communication professionals in the way we work and the value we create and deliver. Follow Zora Artis on LinkedIn Links from This Interview Poppulo blog: Bridging AI and Human Connection: What’s Possible for Internal Communication? Artis Advisory The post Zora Artis on Bridging AI and Human Connection in Internal Communication appeared first on FIR Podcast Network.
Apr 17, 2025 • 1h 1min

Circle of Fellows #115: Communicating Amidst the Rise of Misinformation and Disinformation

Misinformation and disinformation aren’t just problems for the news media—they’re also becoming critical concerns for corporate and organizational communicators. Whether it’s a viral post spreading false claims about your company, a deepfake video targeting a leader, a cloned voice trying to trick employees into transferring funds, or AI-generated content muddying the information landscape, today’s communicators must be equipped to navigate a world where truth competes with convincing fiction. In this live-streamed conversation, Fellows of the International Association of Business Communicators (IABC) explored how generative AI is accelerating the spread of false and misleading content—and what communication professionals can do to identify, counter, and prepare for it. About the panel Alice Brink is an internationally recognized communications consultant. Her firm, A Brink & Co., works with businesses and non-profits to clarify their messages and communicate them in ways that change people’s minds. Her clients have included Shell Oil Company, Sysco Foods, and Noble Energy. Before launching A Brink & Co. in Houston in 2004, Alice honed her craft in corporate settings (including The Coca-Cola Company, Conoco, and First Interstate Bank) and in one of Texas’ largest public relations firms, where she led the agency’s energy and financial practices. Alice has been active in IABC for over 30 years, including as chapter president, district director, and Gold Quill chair. Sue Heuman, ABC, MC, IABC Fellow, based in Edmonton, Canada, is an award-winning, accredited authority on organizational communications with more than 40 years of experience. Since co-founding Focus Communications in 2002, Sue has worked with clients to define, understand, and achieve their communications objectives. Sue is a much sought-after executive advisor, focused on leading communication audits and strategies for clients in all three sectors. Much of her practice involves a strategic review of the communications function within an organization, analyzing channels and audiences. She creates strategic communication plans and provides expertise to enable their execution. Sue has been a member of the International Association of Business Communicators (IABC) since 1984, which enables her to both stay current with, and contribute to, communications practices. In 2016, Sue received the prestigious Rae Hamlin Award from IABC in recognition of her work to promote Global Standards for communication. She was also named 2016 IABC Edmonton Chapter Communicator of the Year. In 2018, IABC named Sue a Master Communicator, the Association’s highest honor in Canada. Sue earned the IABC Fellow designation in 2022. Juli Holloway is an Indigenous communications practitioner specializing in professional communication in Indigenous contexts. Throughout her career, Juli has been fortunate to work with First Nations and Indigenous organizations in British Columbia and across Canada to support transformative change for First Nation communities and people through strategic communications and community engagement. Juli is the communications advisor at the Tulo Centre of Indigenous Economics, where she leads communications and designs and delivers a communications curriculum in university-accredited programs designed to advance Indigenous economic reconciliation. She is also an associate faculty member at Royal Roads University, where she teaches in the MA in Professional Communications program.
In 2022, she earned the Outstanding Associate Faculty Award for Teaching Excellence in the MA Programs for her innovative pedagogical methods. Juli is Haida and Kwakwaka’wakw and has been a guest on the traditional lands of the Secwépemc for 17 years. She belongs to the Skidegate Gidins, an eagle clan from the village of Skidegate on Haida Gwaii, and the Taylor (née Nelson) family originating from Kingcome Inlet, home of one of the four tribes of the Musgamagw Dzawada̱ʼenux̱w. George McGrath is founder and managing principal of McGrath Business Communications, which helps clients build winning corporate reputations, promote their products and services, and advance their views on key issues. George brings more than 25 years in PR and public affairs to his firm. Over the course of his career, he has held senior management positions at leading strategic communications and integrated marketing agencies, including Hill and Knowlton, Carl Byoir & Associates, and Brouillard Communications. The post Circle of Fellows #115: Communicating Amidst the Rise of Misinformation and Disinformation appeared first on FIR Podcast Network.
