The FIR Podcast Network Everything Feed

Sep 22, 2025 • 21min

ALP 282: Stop providing solutions before understanding your client’s challenges

In this episode, Chip and Gini discuss the common practice of providing free proposals and baseline ideas to clients. They argue that professional service providers should charge for these services as doing so adds value and ensures a thorough diagnosis before providing solutions. They share personal experiences and compare the situation to doctors who would never prescribe treatment without proper tests. They emphasize the importance of understanding a client’s business through a paid discovery phase and making adjustments along the way to deliver effective results. Additionally, they discuss the risks of providing overly detailed plans in early stages, the benefits of quarterly assessments, and the importance of maintaining clear communication and trust with clients. [read the transcript] The post ALP 282: Stop providing solutions before understanding your client’s challenges appeared first on FIR Podcast Network.
Sep 16, 2025 • 49min

FIR Interview: Generative Engine Optimisation with Stephanie Grober

GEO – generative engine optimisation – is suddenly everywhere. Is it the new SEO, a passing fad, or simply good communication practice in disguise? In this FIR Interview, Shel Holtz and Neville Hobson talk with Stephanie Grober, Marketing & PR Director at Horowitz Agency in New York, about why GEO matters, the competing narratives surrounding it, and how communicators should prepare for the impact of generative search.

What we discussed

What GEO actually is – and how it differs from (or builds on) SEO
The hype versus the reality: is GEO a genuine discipline or simply “snake oil”?
The importance of authority, credibility, and tier 1 media coverage in shaping generative search results
Why trade and niche publications are still crucial for visibility
Practical steps for PR and comms professionals to get ahead, from media training to message consistency
The evolving role of content marketing, press releases, and multimedia in a GEO-driven environment
How law firms and professional services balance credibility with regulatory and compliance requirements
Where GEO may be heading over the next 12 months

About our Conversation Partner

Stephanie Grober is the Marketing & Public Relations Director at Horowitz Agency, an integrated marketing and public relations agency with offices in Los Angeles, New York City, and Vancouver (B.C.). Her team works with law firm clients ranging from BigLaw to boutiques, designing and executing content and communications strategies that generate bottom-line growth in measurable ways. Leveraging deep relationships with the press, she delivers high-quality earned media placements for clients and utilizes her extensive marketing background to amplify these results through a multi-channel approach. Stephanie joined Horowitz Agency in 2021 after serving as Marketing and Communications Manager for a Top 50 accounting firm in New York City.
Follow Stephanie on LinkedIn: https://www.linkedin.com/in/stephaniegrober/

Relevant Links

https://searchengineland.com/what-is-generative-engine-optimization-geo-444418
https://www.geekytech.co.uk/what-is-generative-engine-optimisation/
https://www.reddit.com/r/seogrowth/comments/1m6k4gx/is_anyone_actually_doing_generative_engine/
https://zapier.com/blog/generative-engine-optimization/
https://a16z.com/geo-over-seo/
https://www.searchenginejournal.com/stop-trying-to-make-geo-happen/554629/

Audio Transcript

Shel Holtz (00:01.989) Welcome everybody to a For Immediate Release interview. I’m Shel Holtz. I’m here with Neville Hobson and our guest today, Stephanie Grober, who is marketing and public relations director at Horowitz Agency. Stephanie, it is terrific to have you join us today.

Stephanie Grober (00:19.33) Thank you guys very much for having me, very excited to be here and chat a little bit about what we are calling GEO.

Shel Holtz (00:27.471) We are, even though some people are disputing that particular moniker. But before we jump into this, Stephanie, I think our listeners would appreciate knowing something about your background.

Stephanie Grober (00:40.748) Absolutely. So at Horowitz Agency, we specialize in working with law firms and select individuals. So very much rooted in legal marketing. I’ve been in professional services marketing for about six years now, and I’ve been a marketer for my entire career across several different industries. So I was fortunate to be able to continue honing my craft, focusing on communications and now communications, marketing, and PR for law firms and leading attorneys.

Shel Holtz (01:13.837) And what got you into studying this field of AI and generative engine optimization?

Stephanie Grober (01:22.7) Well, it’s a very exciting time for public relations professionals. When you are talking about AI and generative engine optimization, you’re going to hear the words authority and credibility, right?
And authority and credibility are core principles of public relations. So that right there should signal, wow, this sounds like a PR play. And so as research has come out and we’re learning more about generative engine optimization and essentially the AI overviews that are populated when somebody puts a search term into a search engine, we’re finding that the AI is creating a brief summary using sources from the internet. And what are those sources? Well, a lot of them are authoritative sources from top publications where you may have a PR professional working with you to get you quoted. So it all circles back to PR, which makes it a very exciting time for PR professionals. Those who have already engaged the services of PR consultants or may have PR services happening in-house are a little bit ahead of the curve right now because they’re already going to be appearing in the authoritative sources that AI likes.

@nevillehobson (02:50.272) Great. That’s a very good overview. We talked about this topic, actually, as you know, Stephanie, in the regular episode of this podcast, where we looked at this thinking about this interview, this conversation we’re going to have with you. And in fact, it made me think today, earlier today, my time in the UK, I was watching a video by Danny Sullivan, who, as a name, you’ll know: he was a big deal, a founder of Search Engine Land magazine, and he works at Google now. But he was giving a presentation at WordCamp in Portland just last week about SEO. And it was actually about SEO and GEO, was how he pitched it. I was intrigued because he talked about essentially the same thing. And I thought, well, that’s not my understanding of it, unless I got it wrong. But that’s really what Shel and I talked about the other day. Although to be fair, he did say that the most effective thing to do isn’t to argue about or worry about acronyms or initialisms or whatever.
It’s about building trust, creating clear, credible content, putting it where you can, where people already are. I thought, God, did he listen to our episode? Maybe, but of course that was last week, not the other day. But he did get me thinking about this. Is GEO an evolution of SEO? Is it the same thing by another name? We talked about this, and that’s not what we figured out. What’s your take on that?

Stephanie Grober (04:26.786) Well, I loved what Shel said in the episode from earlier this week, that there is no magic bullet, right? Necessarily. That is the approach I take to marketing. I’m, you know, working in very traditional industries, professional services, often B2B, highly regulated spaces. So understanding that informs our approach to marketing, and it may be different than a direct-to-consumer play, for example. You know, some brands have a much younger demographic audience, B2C needs, while we are, you know, working with highly distinguished attorneys, professionals in a very traditional space. But I agree with Shel. There may not be a magic bullet here. And as for the distinction between SEO and GEO, I do see GEO as complementary to SEO. You could say that it’s an evolution. I wouldn’t say that they are exactly the same thing, but they are very parallel to each other. I think that if you have successfully been employing SEO strategies as a part of your overall marketing efforts, then you are probably ahead of the curve as far as GEO. And that would be because you’ve been consistently adding optimized content to your website. You know which keywords you want to appear in search. You may have been structuring your content in smart ways that already are answering questions. And that would be all on your own website, right? But now when we’re talking GEO, we’re talking about getting that content out to external sources and reinforcing many of the same things. So SEO, you could say, is about keywords.
GEO is about credibility, but the LLMs, they still have to learn. So those keywords are gonna be important for them even in your external sources. And overall, what we’re looking for with GEO is mentions and visibility, whether it’s you as an individual or your brand.

Shel Holtz (06:34.097) One of those magic bullets that people have jumped on in the last few weeks is some research that found that Reddit is the source of a lot of the content that is turning up in AI results. And Professor Ethan Mollick at the Wharton School said just a couple of days ago, essentially, nope. He said what they did was look at how often the sites come up in the answers at least once in the web search function of some AI agents when they do a web search for more information in response to a keyword search. He points out that the company searched for a bunch of keywords using Google AI Mode and ChatGPT web search and Perplexity, and then said they measured how many times those sites were included in the reply. That doesn’t necessarily mean that if you’re doing research, or if you’re just in one of the LLMs doing your job, that Reddit, or Quora for that matter, will be the dominant source of the information that you get. And yet this chart is everywhere and everybody’s talking about it. I confess, I went to our leadership in the organization I work for and I said, we need to be in Reddit. And now… it appears I was just chasing a shiny object or a magic bullet. How do you stay on top of what is working in this GEO space?

Stephanie Grober (08:16.376) So as a marketer, you always have to think about which channels make the most sense for your brand or your individual brand, whoever you’re representing. Like I said, we work in legal with law firms. So Reddit is not exactly the space. There are some very famous attorneys who’ve done Ask Me Anythings on Reddit, and I’m sure that is helping them. I do believe that user generated content sites are coming up in GEO
AI overviews, simply because of how the content is structured in a very human questioning way, but it’s not going to be the right place for everybody. So when we take a look at that at Horowitz Agency, as PR professionals working with law firms or other business professionals, there are some other interesting stats, I think, that stood out. So Muck Rack put out a great report recently also analyzing which sites are read by AI. And I believe it was almost 30% of the sources are journalistic outlets, with prominence and attention paid to outlets like Reuters, AP, Axios, for example. So those are sort of your tier one media. So that is a strategy you can employ for your credibility. If you don’t think that Reddit is the right place for you, okay, then let’s go after the tier one media. Now I will caveat that by saying niche outlets are also very important to AI. So don’t think that you have to be an AP or Reuters. If you ask a niche question, it is very likely that the AI is going to read a niche or trade press outlet. So those are very valuable as well. But overall, journalistic sources are a very valuable way to start appearing in GEO results. That’s where, from a PR perspective, we can start maximizing our approach and our strategies, tailoring them there to support GEO.

@nevillehobson (10:21.792) You emphasised the importance of securing tier one coverage, all those publications you mentioned, the idea being that being visible in such places can influence how generative engines surface content. But coverage like that takes time, doesn’t it? And it isn’t always immediate. So how do you advise professional services firms to balance the longer term credibility building with the short term pressure to be visible in generative search results?

Stephanie Grober (10:51.534) Absolutely. It does not happen overnight. That’s something we tell our clients, and probably any PR professional or communications professional in-house is going to share with stakeholders.
You know, we don’t wake up, decide to do PR, and land a New York Times article the next day. So the best approach today is to make sure that as part of your overall marketing plan, PR is included and you have a comprehensive… hopefully you already have that, but if you don’t have that and you’re thinking, I want to make sure that we’re doing GEO right, don’t neglect PR. There’s a few parts of the PR process. The first one is just having your sources become media trained and responsive. And again, that takes time, to be comfortable speaking to the press, to identify what they want to comment on, to give good sound bites. That all takes some practice. So somebody who’s just starting out in PR might not be comfortable if the Wall Street Journal calls them on day one and they’re like, this is my first rodeo. So you wanna have some practice working with smaller outlets. Again, working with those trade publications, which are still going to be very valuable. So we don’t only wanna prioritize tier one, but just having a variety of PR opportunities, seeing what works best, that’s something that takes some time. So if you’re not already doing that as part of your PR strategies, the time to start is now. It’s all going to benefit your GEO. And then another key part of this is making sure the messaging is consistent. And that is down to the way you are cited in the press, for example. So, you know, one example that comes to mind is if you’re somebody who is always referred to in the press as a celebrity criminal defense attorney over and over, that’s going to train the LLMs. When somebody is searching for a celebrity criminal defense attorney, you’re gonna come up in that AI overview because the reporters have referred to you that way in interviews. So how you present yourself to the media and how they cite you is important. It’s important to be consistent.
And then of course, you know, what you’re saying in each opportunity matters too, but I think down to the way that you are cited is very key.

@nevillehobson (13:24.213) I’ve got just a quick follow up on that tier one element before we kind of move on from that. But I’m just curious, actually, Stephanie, because you mentioned earlier the names of a couple of the new media, if I can call it that, people like Axios. Isn’t the media landscape, generally speaking, evolving so fast that trying to kind of give labels or pigeonhole a media property as “this is tier one” is based on those old definitions back from print media days, I suspect? We’ve got, you know, Axios as one of the new ones, but there are any number of “I’ve never heard of you” type outlets by influencers, quote unquote, who have hundreds of thousands, if not millions, of followers. And I’m wondering. Shel and I have talked quite a bit about nano influencers and others who are kind of disrupting the traditional ways in which things are structured. So where does this fit in that case? And probably the example you gave, someone brand new to PR, just starting out, is suddenly immersed in all this. How do you deal with this in this changing environment?

Stephanie Grober (14:36.91) So the media landscape has changed drastically. Reporters in newsrooms, there’s been mass layoffs. What I think is interesting about GEO is, as we’re saying GEO is so important, journalists are struggling. Newsrooms are letting people go. They’re restructuring at the same time. So how does GEO exist without journalism? And I hope that it demonstrates the… need and necessity of great journalism. They can’t exist without each other, because otherwise, you know, what has authority, what do we trust, right, to build a response from the AI? So like you said, Neville, the landscape is changing. There’s new outlets every day. That continues to happen for my team and myself. We’re constantly discovering new outlets.
There’s journalists who have left major newsrooms, who have started their own newsletters, they’re on Substack, they’re podcasting. All of this is important, which is the good news when we’re talking GEO, because GEO assesses a very wide variety of sources, including video, including podcasts. So I recommend that anyone who is wading into PR take as many opportunities as they’re comfortable with. And again, don’t only hold out for the Wall Street Journal and New York Times. Take a variety, because you never know what is going to give you the most bang for your buck or where somebody is going to discover you. This is all about discoverability. This is about mentions. And the more you can do it, the better. All of the LLMs have their own way of reviewing sources at this point. They are not consistent. There is not one formula that they’re using, whether it’s ChatGPT or Claude or Perplexity. Some of them use journalistic sources at a higher rate than others. Some of them use smaller outlets. Some of them have partnerships with major newsrooms like the Financial Times to train the AI. So we’re in the very early stages. That’s why, again, I don’t think that there is a one size fits all formula. I don’t think that there is a magic bullet today where it’s do it this way or you’re going to fail. It’s be prepared to be flexible, but be everywhere that you can. So as things shift, you’re still ahead of the curve. And anyone who has been a marketer through the rise of digital marketing is probably familiar with that, and anyone who has been on the internet since the beginning of the internet is probably familiar with that, right? When I grew up, it was the dawn of Wikipedia, and things have really changed. We had just gotten Google. I remember the first time a librarian told me about Google. So this is all gonna change. It’s gonna change very rapidly. It could go away. It could be the most important part of marketing. I don’t think we know yet.
Shel Holtz (17:50.39) How do you measure all of this? The success metrics for traditional SEO are pretty clear and not that difficult, but what should PR people be paying attention to in GEO? I’ve seen things like share of answers, citation rate, sentiment in model responses. What’s genuinely measurable that’s meaningful today?

Stephanie Grober (18:16.046) So there are definitely some tools that are purporting to provide AI analytics, GEO analytics, which are available. I think the best way to measure today is simply through your own experimentation. Enter the questions you think your clients are asking in search, read the overviews: do you come up? Are your competitors coming up? And use that as a guide to see where you need to focus your efforts. And again, it might be certain keywords, certain questions that you want to prioritize, but that’s the easiest way, just to be the end user and see how you come up. Ask the LLMs to describe you, to describe your brand, your company, and see what comes back to you. So again, it’s not going to be perfect. We see a lot of errors or mistakes or misassignments of names and things when we do this. So, you know, this is not the be all and end all to being successful. And I will say that, you know, AI search accounts today for, I think, less than 30% of search traffic. So, fortunately, it’s not the only driver of traffic and, you know, discoverability, so we have a little time to iron out those kinks.

@nevillehobson (19:35.906) That’s really interesting. In episode 479 of this podcast, the one I mentioned earlier that we did, knowing we were going to talk to you today, we talked about companies spinning up standalone, community driven content brands that become credible in their own right. When do you think that approach makes sense? And what safeguards keep it authentic, rather than just a thin GEO play?

Stephanie Grober (20:03.592) Unique content is incredibly smart.
So if that is something that is sustainable for your company and your brand, I think it’s a brilliant idea. Content is king everywhere and in every industry, I think some more than others, but content is really king. So it just needs to be something that is sustainable. If you have users generating content, that’s excellent. In working with attorneys, thought leadership is huge. Attorneys are always writing. They’re speaking. We can repurpose their content. So there’s always fresh content going up on websites and things like that. One thing I wanted to mention too, as we’re discussing this, whether it’s a brand that builds their own content platform or a company working with content in another way, is that the LLMs prefer recent and fresh content. So, you know, if you were writing or speaking or quoted in the press five, six, seven years ago, that’s not gonna help you today. It’s time to get back out there. Within the past 12 months or so is ideal, with ongoing fresh content. So just something to keep in mind, because I know, you know, time passes quickly and we’re like, when was I quoted? No, it was like five years ago. Anyway, back to the content. I think that’s a very smart and strategic play if it can be supported. We know that marketers only have so much time in their day. There’s a lot of things that demand attention. So you have to choose to prioritize what makes the most sense and is a sustainable effort for your team.

Shel Holtz (21:43.063) In that episode that we recorded this week, and in a previous episode, in fact, that we dedicated to this topic, we discussed the argument that press releases need to be reformulated for algorithms. I’m skeptical of that. How do you adapt, or should you adapt, everyday PR assets, whether it’s press releases, content that goes on your company’s website newsroom, thought leadership pieces from your executives on LinkedIn or wherever?
How do you adapt those so they’re more likely to be used and cited by LLMs but still useful for the human readers they were originally designed for?

Stephanie Grober (22:32.75) Absolutely. And when I listened to that episode, you know, one thing that crosses my mind is, you know, part of me wants to say the press release is dead. Can we say that? I’m not sure. It feels like it might be dead, but there is also a school of thought that GEO is reviving the press release. Now, if you put out a press release across the wire, meaning in most cases you’re paying to put it out across the wire, that might be something that hits an LLM and is used to train on a certain topic. So in some ways, a press release still can have influence in that way. Other than that, the typical press release is not going far these days. It’s very difficult to maximize a press release. We are competing for attention in the media landscape. There are fewer journalists. There is more news. The news cycle is shortened. It’s not even 24 hours. It’s about 24 minutes, it feels like, most times. So just your standard press release, not put across the wire, is not going far for most companies, you know, unless you are an enterprise corporation or, you know, very exciting in some other way.

@nevillehobson (23:58.229) Hmm. Yeah, I mean, I think it’s interesting. Is the press release dead? It’s a topic that pops up frequently. I think Shel and I have talked about this, the press release is dead, at least six times in the last six years. It could be more.

Shel Holtz (24:12.945) By the way, I do like to remind people that for compliance purposes in the U.S., they still have some value, regulatory compliance.

Stephanie Grober (24:20.301) Yes.

@nevillehobson (24:20.936) Yeah, I’m sure that’s the same here in the UK too, although it may not be specifically a press release, a device you need to have as a compliance record.
But I mean, it’s a thorny question, because I see people commenting on this here and there, on social networks particularly, now and again: yes, it is dead, and no, it’s not. But there’s this notion of repurposing press releases to take into account the audience change, i.e. it’s not people, it’s the algorithm. I’m not using the word machine. I see people talk about it’s the machines. It’s not the machines. It’s the algorithm, right? So I think that is a credible view. I disagree with you there, Shel. I think that is something that we are seeing, that you need to prepare your content to publish it online with two audiences in mind, which are the humans and the algorithms. And the example Shel mentioned that we discussed specifically in an episode was somebody who has thought this through, with really good visual explainers on the differences between them. They made a lot of sense to me, I have to say. But is it going to be like the social media press release from 2006 to 2010, that was a template of the things you should include for social media to take advantage of it? It generated quite a lot of excitement, a lot of buzz, a lot of hype, and ultimately just kind of faded away. It didn’t really, you know, attract a lot of attention. Is this likely to be the same case? I wonder. So I think this is important, this change, along with all the other things that we’ve been discussing. And I just wonder where this is going to fit if people are going to be creating press releases where the prime audience they’re thinking of is an algorithm. Where does that fit into the picture?

Stephanie Grober (26:19.436) Well, there’s the algorithm, there’s the machines, but ultimately humans are still engaging with the content, right? So if the AI is gonna feed it back to us, it has to appeal to the human audience. So I think that there’s only so many differences between them. I personally have not seen a huge variation in the style of a great press release myself.
And I think most communications professionals… I hope that most of them have a great sense of what it takes to have a really effective press release. I would say again, in the industries that I’ve worked in, that’s not changed very much, regardless of whatever flashy marketing terms and strategies are in use at the time. But as far as getting it to hit the LLMs, the algorithm, it has to address the human end user who has a question and is seeking information, and it has to answer that question.

Shel Holtz (27:23.025) Yeah, that’s interesting, because one of the very first arguments that I heard about what you need to do for your brand or your content to show up in the LLM results is to get away from saying what and get more into saying how. More explanations, more details. This leads me to believe that content marketing is probably a strong contributor to what LLMs get trained on. I’m thinking of the great content marketing that comes out of Microsoft, where they’ll have a lengthy feature, but it contains some short videos. It contains some short audio clips. It has infographics and photos and text. And it’s all there in that format so that whoever wants to use it can just grab that audio file and maybe use it in a radio broadcast, or just grab that video clip and use it in a TV news report. Does that make sense? Should we be focused more on content marketing and less on more traditional forms of communication?

Stephanie Grober (28:34.432) I think so. And I think how we are doing content marketing is important. The variety, the structuring of your content, exploring some Q and A type pieces is a great idea. And hopefully folks are already doing this because of SEO, right? It shouldn’t be such a surprise that your headings and the structuring of your content pieces are important. We see that in journalistic content too, right? There’s headings because news outlets want their articles to come up in search rankings as well.
So this shouldn’t be groundbreaking to anybody, but think about variety in your content. Think about making sure it’s answering questions, play with the format, use various types of multimedia to support your content. I think that is absolutely a sound approach today.

@nevillehobson (29:35.04) So I’ve got a question about AI overviews, which you mentioned earlier on in this conversation. Quite a simple question, really. Fewer and fewer people are clicking through. They’re gonna get their search results using Google, and they’ll then have a summary on the right-hand side of the screen. And that gives them all they need, and they do not proceed any further. What impact is that having, do you think, and will it continue to have, on search generally and how you measure search, whether it’s GEO or SEO? How big of a concern should that be, that people are not clicking through? And by the way, I’ve seen some reports in the last few days, which I haven’t bookmarked, frankly, saying that this is not what’s happening, people are clicking through. That doesn’t seem to fit what I see: I don’t know, respectable research being produced, or data being produced, showing that that is not the case. What’s your thought on that, Stephanie?

Stephanie Grober (30:36.482) I think it’s so early still that we don’t know yet, right? Because GEO, generative engine optimization, the generative search results, still make up a very low percentage of the overall engagement with our search engines. What I did see some reports of is that the click-throughs may not be to a brand’s homepage anymore, but to some of their internal blog pages or specific content pages. So I think it’s smart to make sure that those are optimized. And again, it comes back to the content marketing that we were just talking about. Folks land somewhere else now in your website environment.
I think it varies very much brand to brand, company to company, and depends on what type of answer the end user was seeking. And then again, we have to think, is everybody trusting AI? Personally, today, I don’t fully trust the answers AI is giving me. I’m always going to go back and research them. Now that might change individual to individual, generation to generation, but it’s not perfect yet. So, you know, I don’t think we’re at a place where somebody is just going to solely get their questions answered by AI and go about their day. Maybe we will get there and we will see a real sea change in the way web traffic works, but I don’t think this is going to completely break the system. And I don’t think it’s going to end search traffic as we know it. And, you know, if folks are asking a question where they need something, they’re still going to have to click through to get it. AI can’t procure something for you. Perhaps it can answer a question, but most likely you’re going to want to click through to read further and do your own research if the AI is just producing relevant sources for you.

Shel Holtz (32:38.473) Your agency, Horowitz Agency, has to deal with an additional wrinkle with all of this, given the nature of the market sectors that you serve. Law firms, financial services: accuracy and compliance really aren’t negotiable in those industries. So how do you balance making content citable for AI while staying inside the lines on confidentiality, disclaimers and regulatory restrictions?

Stephanie Grober (33:08.174) I think what I love about legal marketing is that it is so professional, and it tends to err more towards the traditional side of things and value just great marketing. So law firms don’t always go for flash in the pan. Law firms weren’t racing to be on TikTok, although there are some lawyers who are very successful, but the new hot thing isn’t always what a law firm or a professional services firm is looking to do.
Rather, they embrace the fundamentals of great marketing and then make sure that everything that is put out adheres, complies, represents the brand consistently and correctly. There’s a lot of layers of approval. Anyone working in corporate marketing is probably very accustomed to that. And I think that that’s a great thing. We are not trying to utilize any gimmicks. We are not trying to trick the AI because we have to stand behind what is put out on behalf of our clients. And when you’re talking about professional services, that’s really guidance and counsel to folks with very important personal or business questions. Shel Holtz (34:24.593) And let me ask you a quick follow-up question on that, as long as we’re talking about law firms and financial services. Those are pretty close to the top of the list of professions at risk from AI. Have you seen any movement among your clients in that direction? Are they starting to replace maybe paralegals or customer support people with AI? Stephanie Grober (34:49.132) Law firms are embracing AI; much like every industry, industry agnostic, I think companies are embracing AI. They’re spending a lot of money on ways to incorporate AI into their processes. But I think it’s too early to say if any humans have been replaced. So the sense I get from some major law firms is that, at the very lowest levels, it’s taking away some of the hours of intensive research that might need to be done. But then they are prioritizing the value-giving advice that their professionals can provide. So it might not take as many billable hours on research, but you might get more value in the conversation with your attorney or advisor. Of course, marketers are using AI in very interesting ways. But again, it’s not perfect. So I think it’s still very early stages. Certainly it’s not replacing anyone in these industries yet today. And we’ll see if we get to a point where it will. Now, the same can be said for communications professionals.
Is AI going to take our jobs? I personally think if AI is writing all the content, then… wouldn’t we reach a point where human-written content is actually at a premium and bespoke and in demand? Because how can you stand out in a sea of AI-written content, right? That doesn’t sound appealing at all. I mean, that’s not what I wanna read. Hopefully we’ll still be here in five years. Shel Holtz (36:36.049) Yeah, I just saw yesterday that Marc Benioff at Salesforce announced that they are letting 4,000 customer support people go and replacing them with AI. I was thinking on the financial services side that might be a similar trend at some point. Stephanie Grober (36:54.516) Mm-hmm. Perhaps, but I think it’s very risky in, you know, legal services, accounting, when you’re talking about that. You know, these industries are wading in very carefully. @nevillehobson (37:11.518) We could do a whole episode just on this topic. So sticking with GEO, and looking ahead as I tend to do to the horizon: where do you see GEO going and evolving over the next year or so? And traditionally that question might be over the next five years. No, the next year or so. Do you think it will mature in that timeframe or will it need longer, becoming maybe a reliable discipline like SEO eventually did? Or is it more likely to, I guess, remain a contentious topic? People not agreeing on what it’s for, but there’s good practices that we’ll see emerging amongst all the hype. How do you see it, Stephanie? Stephanie Grober (37:58.167) It’s very hard for me to predict. At this point, there’s not an easy way for all of us to opt out of AI search results with every search engine. If that were to become an option, would folks just simply turn it off? And would we not even need to have this conversation if it simply falls from sort of the consciousness? Or is it going to continue to be sort of forced upon us as search engine users?
Of course there was the Google antitrust case decision this week where, you know, they’re keeping Chrome and Gemini is going to be feeding us its results. So I don’t see that going anywhere in the next year. I will be watching to see if AI search gets smarter and gets more accurate, and how our clients or those I work with appear in the search. So it will be sort of testing for us. Of course, we’ll be looking out for more case studies of it leading to work. Anecdotally, there are folks in legal who are saying, yes, I’ve gotten clients from ChatGPT searches, which is great. I personally know that there are some attorneys I’ve worked with in corporate law, for example, who, after a great streak of appearing in outlets like the Wall Street Journal and CFO.com, you know, AI will tell me about them if I say who is a great M&A attorney in Los Angeles. That’s what I want to see. That’s what I’ll be looking for in the next year. I can’t say that definitively we’re going to see a sweeping change more than that. Shel Holtz (39:44.634) In the episode that we recorded earlier this week, I mentioned that I had read an article, I think it was a LinkedIn article, from somebody who said he had tripled the volume of output in order to have more content out there that could be hoovered up by the models in the hopes that the volume would lead to more visibility in results. I’m not sure that’s the answer, but certainly budgets aren’t keeping up with volume. The point was made that this individual did not get more budget to help create all that content. So if your firm were hired by a communications leader who asked you to adjust their content strategy for GEO, what process changes would you recommend, say, over the first 90 days? Stephanie Grober (40:42.35) Absolutely great question. One thing I wanted to mention too, when we’re talking about important content right now, is rankings and reviews.
So whether it’s professional rankings for an individual or best of lists, I would make sure that a client is incorporating those into their strategy. Because what better way to train an LLM than by appearing on a best of list from the last six to eight months, right? That’s exactly what you want to come up for most likely in a search. We do probably over 500 nominations for our clients each year. So that is always a part of our strategy, but very important. I’d be looking at online reviews, whether for a company, could be your Google My Business reviews. It could be Yelp, Facebook, wherever you might get reviewed, in whatever way you might get reviewed. And then making sure that the client is out there, mentioned and visible again. Each month, fresh mentions in journalistic outlets, whether they are prestige media or trade press, they still have value. Multimedia, getting them on video, getting them in podcasts anywhere we can. And then from a content strategy as a marketer, hopefully at this point you are repurposing, but use AI and let it help you repurpose. There’s so many tips and tricks that marketers can use when it comes to AI. And that includes taking what you already have and making it new, repurposing it into different formats to make your job a little bit easier. So that today is one of the biggest benefits of having AI available to us as a tool to make content production easier. @nevillehobson (42:30.014) You make a good case for that, Stephanie. That’s exactly what I say to people. If you don’t use it or you’re skeptical, this is one thing you could do, you’ll benefit from doing this. That’s been great, this conversation. I think we’ve reached a point now for that famous question, Shel mentioned at the very beginning, which is, what didn’t we ask you that you really wish we had? Stephanie Grober (42:53.998) You know, I think a great question would be what is going to be the greatest impact that AI has on PR, public relations, and communication? 
I think those industries are still inherently human. I think they are relationship driven. I don’t think that we’re ever going to take humans out of the equation. So even as we are working to boost GEO results, doing public relations, getting media mentions, we’re working with people. You have a PR professional, you have a client source, you have a journalist who’s a person, and that is still very valuable. AI can’t do that for us, it’s relationship driven. So make sure people are doing your PR, not AI. @nevillehobson (43:43.818) To kind of add to that, a topic Shel and I have talked about a few times on the podcast during this year. It’s in the kind of area of AI agents, but it’s looking at what I think Shel’s phrase for it is: ‘synthetic employees,’ where you have a situation that isn’t too far-fetched in the near future, whether that means two years, five or 10, I don’t know. Think about your team and your organization, where you’ve got maybe 15 people in that team of which three perhaps might be AIs effectively. So this isn’t like, you know, some of the illustrations you see on posts where you’ve got people and there’s like a robot in there. It’s not like that. I don’t know what it’ll be like. It might be like that, actually. But is that kind of like a serious disruptive element suddenly appearing on the horizon that will have a huge impact on many things, but think specifically about what we’re discussing today and the change it might bring where you’re introducing AI team members in a team. What do you think about that? Stephanie Grober (45:00.706) I think that there are tasks AI can do well, but when it comes down to communications, strategic thinking, most of us don’t like it. We don’t love fully AI-generated content, right? Where nobody’s reviewed it. We’re not thrilled.
Journalists most often don’t want to get pitches that were written by AI, outlets don’t want to receive articles that were written by AI. So, you know, we pitch byline articles for clients all the time. A lot of newsrooms now have a, you know, a disclaimer that we may reject your piece if we think it’s written by AI. So it remains to be seen, you know; there are AI helpers, they’re on our teams right now, you know, many companies are incorporating AI, we do have AI teammates. We’re figuring out the best use for them. Some are better than others. Some are more trusting of AI than others. And some folks are more clever at using it as a tool than others. When it comes to communicating back and forth with artificial intelligence completely, the more strategic and in-depth you get, I just don’t know if that’s what we’re going to see in the very near future. Shel Holtz (46:29.649) Yeah, I have an AI colleague now. I have a two-person communication team where I work, myself and one colleague, and no budget for consulting. And so what I did was, it took about four or five hours because I really worked on this, but I created a custom GPT who’s the senior communications consultant with a specialization in my industry. It doesn’t do stuff for me, but it’s a sounding board. If I develop a strategy and I want somebody to play devil’s advocate, I can’t go to my $400-an-hour consultant, I don’t have a budget for that. So I’ll just ask the AI consultant. I realize that it’s not as good, but in the absence of any alternatives, it’s fine. And it has come up with some really good responses. So… Stephanie Grober (47:18.7) Yeah, and I dare say that you’re probably, you know, far ahead of some others. So learning how to use it, learning how to maximize its output, how to get the most out of it and have consistent results, I think is where we should all strive to be today. And then in the meantime, you know, follow many voices on AI, on GEO.
It’s changing, it’s evolving so rapidly that, you know, don’t put all your eggs into one basket. Don’t buy into a one size fits all approach today. It’s still too soon. And don’t abandon your other marketing strategies or tactics in favor of some of the AI driven ones. Shel Holtz (48:03.843) Yeah, as we pointed out in that episode from earlier this week, recommendations from people you know are still top of the heap. Stephanie, this has been great. We really appreciate your taking the time to talk with us today. How can our listeners find you and maybe read some of what you share online? Stephanie Grober (48:23.146) LinkedIn is great. Stephanie Grober. I’m with Horowitz Agency. Love LinkedIn, I’m constantly talking to my clients about spending more time on LinkedIn. So I’d love to connect with anybody there. Our website is HorowitzAgency.com. And I look forward to keeping the conversation going. Thank you guys very much for the time today. The post FIR Interview: Generative Engine Optimisation with Stephanie Grober appeared first on FIR Podcast Network.
Sep 15, 2025 • 23min

ALP 281: Supporting team members with mental and physical health challenges

In this episode, Chip and Gini discuss how agency owners should handle employees with physical and mental health concerns. They cover the increased openness around mental health and self-care, sharing personal experiences and business challenges. They highlight the importance of individualized management approaches, legal considerations, and quick professional advice. The hosts also emphasize compassionate handling of employee health issues, the need for flexible scheduling, and the impact on small businesses. Gini shares insights on providing support for team members and owners, such as disability insurance, to cover long-term absences. They conclude by underlining the importance of empathetic leadership and offering flexibility. [read the transcript] The post ALP 281: Supporting team members with mental and physical health challenges appeared first on FIR Podcast Network.
Sep 9, 2025 • 40min

FIR #480: Reflections on AI, Ethics, and the Role of Communicators

In this reflective follow-up to our FIR Interview in July with Monsignor Paul Tighe of the Vatican, Neville and guest co-host Silvia Cambié revisit some of the key themes that resonated deeply from that conversation. With a particular focus on the wisdom of the heart – a phrase coined by the Vatican to contrast with the logic of machines – Neville and Silvia explore the ethical responsibilities communicators face in the age of artificial intelligence. The discussion ranges from the dignity of work and the overlooked realities of outsourced labour, to the limitations of technical expertise when values and human well-being are at stake. Silvia expands on her Strategic article focusing on precarious workers, while Neville revisits ideas shared on his blog about the Church’s unique role in advocating for inclusive, human-centred dialogue around AI. Above all, this episode highlights how communicators are uniquely positioned to help organisations navigate the moral and societal questions AI presents – and why they must bring emotional intelligence, narrative skill, and ethical awareness to the forefront of this global conversation. Topics Covered The idea of wisdom of the heart vs logic of the machine Redefining human intelligence in the AI era The Vatican’s call for a global, inclusive debate Dignity of work and the reality of outsourced labour What ethical AI really means – beyond compliance Why communicators must be part of the AI conversation Links from this episode: FIR Interview: Monsignor Paul Tighe on AI, Ethics, and the Role of Humanity ANTIQUA ET NOVA: Note on the Relationship Between Artificial Intelligence and Human Intelligence Speaking for Humanity: The Wisdom of the Heart in the Age of AI A View from The Vatican: AI, Ethics and the “Dignity of Work” We must build AI for people; not to be a person What Does It Mean to Stay Human in the Age of AI? 
The Rise of Culturally Grounded AI The next monthly, long-form episode of FIR will drop on Monday, September 29. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog (https://www.nevillehobson.io/) and Shel’s blog (https://holtz.com/blog/). Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Transcript (from video, edited for clarity): @nevillehobson (00:03) Hello everyone and welcome to episode 480 of For Immediate Release. I’m Neville Hobson in the UK. Shel’s away on holiday, but I’m delighted to be joined by Silvia Cambié as guest co-host for this episode. Welcome Silvia. Silvia Cambie (00:17) Thank you Neville, delighted to be here today. @nevillehobson (00:21) Excellent. Glad you said that. So in this short-form episode, we’re going to spend time on an interview that you, Shel and I did in late July for an FIR Interviews episode. We interviewed Monsignor Paul Tighe from the Vatican. He played a central role in shaping the church’s thinking on artificial intelligence and its broader societal impact. He was instrumental in the development of Antiqua et Nova, the Vatican’s note on the relationship between artificial intelligence and human intelligence published in January 2025. In our interview, Monsignor Tighe offered a powerful reflection on how AI challenges us not only technically, but also morally and spiritually. He urged us to consider what makes us human in an age of machines, calling for a global conversation grounded in dignity,
agency and what the Vatican calls the wisdom of the heart. So in this episode, Silvia and I want to share what resonated most for us from that conversation and why we believe communicators have a vital role to play in shaping this future. I mentioned this before during the interview, Silvia, that you were instrumental in securing that interview. So tell our listeners, how did it come about? Silvia Cambie (01:38) Yes, indeed Neville. So you and I were talking in the spring when Pope Leo XIV was elected and we were talking about his background in math and science. And on top of that, the Vatican has been contributing their views to a lot of papers, like Antiqua et Nova, and to the Minerva Dialogue, which is a forum that basically collects views about the interaction between humans and AI and the dignity of work. So we were thinking of bringing these voices to the forefront, in particular in relation to your listeners and Shel’s listeners, who work in comms and work in change management and are often confronted with the moral aspects, values, the ethical part of governance. And at the moment, they’re looking for a North Star because we are at the forefront as communicators of this wave of AI introductions, AI pilots. But at the same time, we often lack guidance. So that’s why we wanted to collect these views from the Vatican, from Monsignor Tighe, relate them to our work, make them very concrete and kind of actionable, something for our listeners to be able to use. And so I think you were mentioning before what resonated with us, with you and me. And so we had our conversation with Paul Tighe back at the end of July. And then we @nevillehobson (03:10) Get it. Silvia Cambie (03:25) also listened to a lot of the podcast interviews he’s done. And we’ve read, you know, articles he’s written and so on.
And I think something that resonated a lot with me is really the fact that he believes that technology is never neutral. It’s always the product of a mentality, of a culture. And technology is often… @nevillehobson (03:36.04) Hmm. Silvia Cambie (03:52) created, produced, programmed by people who, you know, focus on profit, focus on productivity. And at the moment, there is a sort of a new trend because of AI that people have to adapt to the demands and pace of machines. A lot of people have deadlines these days set by algorithms. And that is creating a certain dynamic which I have witnessed many times when I work in managed services, which is you’re basically following the rhythm of a machine and you have no time to think, you have no time to develop new ideas, to stop and ponder and get insight out of what you’re doing, out of what your client needs. So at the end of the day, everybody loses. You don’t have fresh ideas. The client doesn’t get a fresh view or, you know, fresh recommendations on how to do things. So it’s all very mechanistic and it’s a real risk out there that people, you know, will have to follow the rhythm of machines in their work. And therefore what Paul Tighe mentioned, which is the wisdom of the heart, as you mentioned before, the ability to relate to other people, how you relate to your client and solve their problems. So I think I’ve seen that in my work and I think that is a real risk. And we have to be aware of that. And as communicators and change managers, there’s a lot we can do because we are on the front lines. And yeah, so I think this point about technology, that technology is never neutral and that there’s this risk and danger that we will have to follow the pace of machines and lose the wisdom of the heart and lose the ability to draw insight from what we do. That’s something that really resonated with me. @nevillehobson (06:01) Yeah, I understand that. It’s similar.
I was also thinking that one element that did nudge us together to do this was what we had observed, what we’d read and seen in the prior months during spring and summer, really since Pope Leo was elected to the papacy, and understanding his background in science and mathematics. But also what struck me was his knowledge, his ability, as it were, to understand the role of social media in communicating with people. And we noticed that the Vatican was pretty proactive on many social channels. And indeed, Paul Tighe was at one point in the Dicastery of Communication, kind of like the Communications Department, in charge of all of this. So they have a track record, a history, if you will, of knowing how to use social channels to engage with wide audiences, not just the faithful of the church, but broadly a wider audience. And that’s something we observed. And then this document, Antiqua et Nova, I found it an extraordinary document for what it set forth, what it described, and the focus in particular on that phrase, wisdom of the heart, that resonated very strongly with me. I found it interesting too that the Vatican had been having conversations, notably with Paul Tighe, but not only Monsignor Tighe, others too, with leaders of Silicon Valley companies, of the big tech firms in Silicon Valley, Mark Zuckerberg of Meta, and we’ve seen more: Google, Microsoft, others gathering in a number of instances over the past year or so, ever since Pope Francis’s time, to talk about this, where the Vatican was able to introduce this theme, this broader theme of the wisdom of the heart. And it struck me too that Paul Tighe was quite clear and mentioned the Vatican is not claiming expertise in AI systems or algorithm design, which by the way, struck me too. We keep talking about the machines. It’s not machines, it’s algorithms we should be worried about.
Instead, it offers something that the tech industry and many governments sorely lack (and I agree with this completely): a deep concern for long-term consequences (you nudged on that point, Silvia, just now) and a consistent voice on the value of human dignity, agency and solidarity. So the wisdom of the heart, Silvia Cambie (08:09) Thank @nevillehobson (08:29) is a phrase that appears in Antiqua et Nova as part of its final reflections. And it says this: “We must not lose sight of the wisdom of the heart, which reminds us that each person is a unique being of infinite value, and that the future must be shaped with and for people.” And that’s a pretty straightforward message. It’s simple. Perhaps it could be even simpler, actually, but I’ve seen others alluding recently to this idea that this is about people, not just the tech. So for instance, subsequent to the interview, and this was actually quite recently, about a week or so back, Mustafa Suleyman, the CEO of Microsoft AI, wrote in an essay that we must build AI for people, not to be a person. In other words, AI is not a person. We hear a lot about, and I’ve had conversations with people about this, these so-called personas, the way in which you can create something that’s a duplicate of you almost. It’s like a version of you as a person. I think that’s crazy to do, to be frank, because that reinforces everything that we don’t want reinforcing, if you will. But Suleyman makes the case that the real risk we face is not AI suddenly waking up with consciousness, as some people talk about, but people being convinced that it has, because it’s not sentient. That’s a firm belief that I have. These are electronic devices and tools, not actual versions of people. We’re not there yet. That’s quite a way away, I would say, if ever.
But Suleyman goes on to say, and this is the interesting bit to me: I want to create AI that makes us more human, that deepens our trust and understanding of one another and strengthens our connections to the real world. We won’t always get it right. But this humanist frame provides us with a clear North Star to keep working towards. I mean, that couldn’t be simpler either, could it? And this is the head of a division of one of the biggest technology companies on the planet, Microsoft, saying that. And I’ve seen others in the industry saying similar things recently. So maybe this is beginning to get attention. And I can’t say, of course, that it’s a direct result only of what the Vatican has been talking about, but that surely must be having an influence. So I summarized all this just for me. Quite simply, emotional, moral and ethical intelligence must guide communicators’ response to AI. The big question is how, Silvia Cambie (10:50) Yes, indeed. And I also liked very much the article by Mustafa Suleyman because I think it’s, as you said, he pointed out the real danger, not that the machines are going to wake up and, you know, kind of take over the world and pretend they’re conscious. It’s more that people… @nevillehobson (11:06) Take over the world. Silvia Cambie (11:14) will get used to interacting with them and will expect, really kind of seek, that human aspect in the machines and will also kind of seek approval from AI, from algorithms. That’s also something that Suleyman is cautioning us against. Also, that can create psychosis, stress, anxiety, people being disenfranchised at work. And I think that there is a quote by UNESCO that I really like, and I have used it in the article I wrote for Strategic, the online platform for communicators. It says that AI is about anthropological disruption, right? It’s not only how the… @nevillehobson (11:48) Hmm. Silvia Cambie (12:04) machines, the algorithms function, it’s how humans react to it.
And to answer your question about what communicators can do, because indeed we are at the forefront, we talk to people, we hear about their needs, about their anxieties and worries at work. So I think there are a lot of attempts at the moment to wrap some governance around AI, AI applications, rollouts. And what I’ve seen is, you know, centers of excellence being created in companies. Those centers of excellence oversee AI pilots, for instance, and their progress, and have, you know, the usual suspects sitting on them, which are, you know, people from IT, developers. But I think it’s very important that communicators and change managers become part of those fora because communicators know how to talk to people. Again, they’ve been doing that forever. That’s their bread and butter. Also, they can relate to previous tech rollouts, you know, like a workplace technology and how people had reacted to that. So there is all that institutional knowledge that is needed now because this shift is so unprecedented. So I kind of cringe every time people show me a COE, an AI COE, made up only of IT people and developers because that’s not the way to go about it. I really liked a stat you mentioned, a fact you and Shel mentioned in a previous FIR episode, where I think you were quoting studies by MIT and HR Dive, which say that people are expected to use AI in their daily work but they are not receiving proper training. So they are kind of very confused about, you know: is this going to hit my performance indicators? What am I supposed to do? They’re not training me. How am I going to use Copilot? I’m going to download ChatGPT and do my own thing and show that I’m doing something, and hopefully that will be enough for my company. So all that needs to be structured. And again, communicators have the knowledge. They have the institutional memory. They have the means and also they know where the different voices sit in a company.
Like when we do research before rolling out AI, we create workshops with representatives from different parts of the company. So in that case, communicators know how to spot those voices because we have worked on rollout projects before. @nevillehobson (15:03) Hmm. Silvia Cambie (15:07) We know how people react, where pockets of resistance might be found in the enterprise. So I think that it is paramount that we allow communicators and change managers to participate in those bodies that are being created for AI governance. And obviously that’s also a way to kind of channel what you were saying, Neville, you know, the human aspect, what makes us human. It’s the ability to relate to other people. It is insight, it is emotional intelligence. And it’s all things that are really needed these days because of this shift. And it’s kind of, you know, a paradox that we are so focused on the technology now, but at the same time, we would need to focus even more on the human aspect because this challenge is so huge that people are just not prepared for it. And we really need to focus on the human aspect, their abilities, what makes us human, in order to enable people to deal with it, right? In order to enable the training, in order to make people feel that they are equipped @nevillehobson (16:24) Thank Silvia Cambie (16:26) sufficiently equipped for it. There is also a quote by the late Pope Francis that I really like. He said, this is not an era of change. This is a change of an era. And I firmly believe in that; everything we’ve been saying, Neville, also in our interview with Paul Tighe, leads to that. But I think, because this change is so huge, we really have to empower people in an unprecedented way and communicators are very well positioned for that. @nevillehobson (17:06) Absolutely agree with that.
I think you mentioned training; indeed, Shel and I discussed that topic in that recent episode of FIR, where people feel they’re not getting training on the one hand, and on the other hand, there are companies that just aren’t providing it because they don’t think it’s worthwhile. There are others though that are doing it quite well. So it’s very, very patchy. It’s not universal. But I think the role of the communicator then is to develop and deliver that. But there’s also another aspect, which to me touches directly on what we’re currently discussing, i.e. the human or the humanity element, if you will: organizations are looking at adopting AI to improve their efficiency, to improve their productivity and to scale more, without looking at this aspect. Are they asking the human questions? And that’s the role, in my opinion, of the communicator, well placed to do that. So three questions I wrote down that could be where communicators are able to introduce this element in their conversation. Does this technology help deepen trust and empathy? Or does it risk eroding them? That’s a valid question, in my view. Are we building systems that reintroduce conscience, care and context into conversations, or are we defaulting only to efficiency and output? And the third one, are we ensuring that AI strengthens our connection to each other rather than replacing those with illusions? And I think there are undoubtedly at least a dozen more, but to me, those are great ones to start with that, in a sense, force attention on this rather than just those technically valid, yet stale approaches to all of this. It dehumanizes, if you like. And, just briefly going back to Mustafa Suleyman’s North Star, as he references quite clearly, and the Vatican’s wisdom of the heart, there’s an essential reminder in all of that, to me, which is quite simple to grasp.
To stay human in the age of AI is to place empathy, dignity and care at the heart of design and use, not simply efficiency or the ways algorithms shape our actions. Suleyman directly references that in his essay. And he’s the first technology leader I’ve seen publicly doing that the way he did. It was very clear, and it’s a long essay, by the way, very long, worth reading. So that’s encouraging to see. Silvia Cambie (19:07) Hmm. Mm-hmm. @nevillehobson (19:30) I think, for communicators looking for something to hang a hook on, this is it, in my view. And that, I think, quite clearly, is how to address the question we’ve been asking ourselves: how can communicators help democratize the conversation inside organizations? This is one way to go about it, I think. Silvia Cambie (19:51) Yeah, indeed. You know, conscience, care and context. Those are very important aspects when you are rolling out AI and dealing with people’s reactions. And I think those questions you asked are very powerful, and they are a good start for communicators to kind of make people think, right? This isn’t just about the tech. This isn’t just about the efficiencies this app is going to create. You’re still dealing with people. Your employees are people, your clients are people, your regulators are people. So you’re still dealing with them. And I think that it’s about empowering people to ask the right questions, right? So, I was referring before to those COEs that are being created to monitor AI pilots in companies. Well, the conversations there tend to be very technical and always focused on the tech, you know, the rollout, the different waves of the rollout. And I think, again, communicators can bring back the human aspect. How are people reacting to this? Is this making them happier in their work, or is this making them more insecure?
As you were saying before, a lot of companies are not providing training, or not providing the right training. So that makes them insecure. Are they getting more and more confused in the way they are dealing with their clients and customers? Because if you have AI that takes over part of that relationship, what is left for them to do? So it’s a very complex scenario that has to be considered from different aspects. And again, you mentioned Suleyman’s North Star and the wisdom of the heart mentioned by Monsignor Tighe. I think these thoughts can inspire communicators, can inspire people who work in AI governance, and make them pause and think that it’s important to focus on these aspects, to focus on what Suleyman was saying: the fact that people might think that AI is conscious, and they might establish, you know, develop a relationship of a certain kind with it, so that they end up depending on the algorithm, depending on it, you know, expecting approval. And so I think that now is the time to stop and think and introduce those thoughts into the conversations that are going on in companies about AI. And I think @nevillehobson (22:33) Hmm. Silvia Cambie (22:51) it’s kind of, we’ve got to be brave. We have to do it. I know that often, as I was saying before, you know, technology and technical aspects are basically overriding other aspects, just because of the pace of the project, just because of the pressures that people are under. But I think it’s very important to introduce these thoughts into the conversation. And again, you know, it’s a moving target, right? We will continue to look for voices like Monsignor Tighe, like Mustafa Suleyman. I’m sure as we progress, there will be others and there will be other aspects. But I think that it’s just this… @nevillehobson (23:17) Mm.
Silvia Cambie (23:37) human aspect and the interaction between humans and AI, seen from the point of view of humans, that is important. And we have collected a voice from the faith community. We have looked at the paper that Suleyman published, which is really very thought-provoking. And so, we will be looking for other voices going forward. @nevillehobson (23:52) Mm-hmm. Silvia Cambie (24:04) But I think for communicators, it’s very important to continue to be open to these voices, right? It’s also a way for us to get backup and support when we need to shift the conversation in a company towards the user, towards the rights of the user, towards the dignity of the user, and not just about the technologies. Then, you know, these are all tools that we can use to make our point and to make our point stick. So, as I said, this is a moving target. We’ll have to do a lot more work on this, but it is fascinating for communicators. Because, you know, I often get asked by people in comms: so, you know, how do I shift to tech when I am not a developer and I don’t have the right knowledge in AI and I don’t know how to build an LLM? My answer is basically this, right? Bring the human voice into the conversation. Do you know how to talk to the base? Do you know how to collect their voices, do you know how to collect their views? Make sure that they are heard, because it is important as we go forward. So I think that is what communicators can do. And that is a very important role indeed at the moment. @nevillehobson (25:37) Yeah, I agree with you. That’s very good, Silvia. And I think, just to add one final thing to this, a parallel development that is very much a part of all of this is what I’m calling the end of AI universalism, where currently we’ve got an environment, if you will, and an assumption, let’s say, that one or two global platforms will serve the world.
And we’re talking about the tech tools, the chatbots, the means by which people connect with others and discover things themselves. And it’s Silicon Valley based and tends to be in English more than any other language. But we’re seeing some interesting things happening. In Latin America, Peru is leading the charge on building a Spanish-language chatbot that serves communities throughout Latin America, taking into account cultural nuances, language differences, and the values that are unique to those communities in that part of the world, which are very much not global North style environments, if you like. We’ve got what the Vatican is doing, which we’ve just been discussing from that interview with Monsignor Tighe, calling for an AI shaped by human dignity. And then, just this past week, Saudi Arabia is asserting its cultural sovereignty, let’s put it that way, in digital form, with the launch of an Arabic-language chatbot called HUMAIN Chat. And it’s based on a large language model that’s Arabic, the largest in the Arabic-speaking world, the developer says. And that’s intended to be targeted at people of the Islamic faith globally, that’s two billion or so, and Arabic speakers throughout the world, 400 million or so of them. At the moment, it’s just in Saudi Arabia. I’ve seen quite a bit of buzz building up about this over the past week, mostly focused on the tech, because it is quite new. The point I’m making, though, is that with HUMAIN Chat and the others, these are signs that the future of AI will not be written in only one language or framed by just one set of values. And that’s something I think we should all be paying very close attention to. And it broadens out, if you will, the part of bringing the human element into the conversation, where you’ve got tools that can be a great help in that goal, in bringing that human part into it.
So these I find very interesting, Silvia. Are we looking at fragmentation of AI universalism? It could be enrichment, as I see it. So that’s part of the picture, too. So the human element is essential to all of this. And these are all parts of the jigsaw that’s rapidly being completed, if you will. Silvia Cambie (28:21) Indeed. @nevillehobson (28:32) Are we seeing the potential risk of creating parallel AI worlds where cultural and political divides are reinforced, not bridged? That’s a risk in my view. But the humans can prevent that happening, I would say, assuming everyone’s on the same page. And as we were saying, that is not the case, right? So it’s an interesting time. I think it’s a fascinating time to be a communicator with all this going on. Because as you pointed out earlier, Silvia, there are communicators who are fearful of this, who don’t know what to do about this because they’re not getting trained. I would say that this is easy to say and maybe not easy to actually do for some people. But grasp the nettle, as the saying goes: get to know these technology tools and how you can, in a sense, leverage them to take your humanist message to others in your organization, particularly the leadership, to bring that human voice into the conversation about deploying AI. Silvia Cambie (29:11) Yeah. @nevillehobson (29:27) and helping people understand what’s really, really important, which is to bring that human voice into it. It’s not just about efficiency, you know, and all that stuff, and speed of doing it all. It’s also about what people believe, what’s in it for people, what’s the value to individuals in understanding and accepting their role and their values in something like this that’s happening. So it gives us all food for thought, right? Silvia Cambie (29:35) profitability. Yeah. Yeah, indeed. I was really very happy to see those developments.
And you shared with me an article a few days back, something coming out of the UN, which was saying that, you know, AI is too Western-centric, it’s too focused on the global North, and the global South is, you know, bound to suffer from that. @nevillehobson (30:14) Mmm. Silvia Cambie (30:18) Well, you and I have lived in different countries, have worked in different countries, and we know how important culture and diversity of points of view are. And I think it’s very healthy to have new approaches to AI that are going to challenge the main narrative, i.e. the Silicon Valley narrative. Also because sometimes, you know, the messages we get out of Silicon Valley are kind of big brother: we are reaching AGI, no, we’re not reaching it, well, we’ve reached it. I don’t know. So it’s kind of, you know, very strange and very sort of big brother: I’m going to tell you when I think it’s right, but, you know, at the moment I’m not telling you the truth. So I think the focus of those alternative approaches, you mentioned Latin America, you mentioned the Kingdom of Saudi Arabia, is very interesting, because in that way there will be kind of competition, quote unquote, to Silicon Valley. There will also be more transparency, but also there will be an awareness of the fact that, as Monsignor Tighe said, technology is never neutral. Technology reflects the mentality of those who create and develop it. So we want diversity of cultures, diversity of points of view. We have to make sure that we collect different voices, and we channel those into the development and the creation of AI and AI applications. So I think this is a very exciting development, particularly the one from Saudi Arabia. I worked a while ago with Saudi developers and communicators on social media and social media campaigns. And they’re very, very creative, and they were at the forefront of social media.
So I am expecting something really interesting and sophisticated from Saudi Arabia. So, yeah, this is a very good development all in all. And it’s also a good development for communicators, right? Because a lot of our colleagues are involved in cross-cultural communication. They work for multinationals, they have to spot those voices and bring them to the forefront. So this will be inspirational for them, right? They will be able to tell their bosses, their board: look, it’s not just this AI application that comes out of Silicon Valley that you can use. There is an AI application in Peru @nevillehobson (32:44) Yes, absolutely. Silvia Cambie (33:08) that has a lot of users and is very efficient and effective. And why not use that for our operations in Latin America? So that gives us tools and ammunition to challenge the narrative. @nevillehobson (33:26) Yeah, absolutely. No, that’s very true. This has been a great conversation, Silvia. And I think the wrap-up, as it were, the extension from the interview, just sharing these additional thoughts, hopefully our listeners will find complementary to having listened to the interview. And listeners, you have listened to it, haven’t you? Haven’t you? If you haven’t, there’ll be a link to the episode in the show notes for this episode. Silvia Cambie (33:44) Yeah, I have, I have, absolutely. @nevillehobson (33:54) And indeed, for much of what we discussed in this episode, there’ll be links to some of those topics in the show notes as well. So let me conclude by saying, Silvia, it’s been a pleasure having you as guest co-host on this episode. So thank you very much for joining in. Silvia Cambie (34:08) Thank you for having me, Neville. @nevillehobson (34:12) So, like I said, the link to the interview with Monsignor Paul Tighe will be in the show notes.
If you have any comments you’d like to share about what Silvia and I have talked about, then please do. You can do that through the usual channels that we mention. But in particular, you could send us a voicemail; there’s a way to do that on the FIR website. You can send us email at fircomments@gmail.com. Increasingly, we’re noticing we’re getting comments quite significantly on LinkedIn. People aren’t actually sending us comments directly anymore; that seems to have fallen out of favor. But conversations build on LinkedIn. FIR doesn’t have its own page on LinkedIn, so you’ll find those comments typically on posts from Shel or I under our own names. But increasingly, others are posting about what they heard on FIR, so you’ll find lots there. On other social channels, we have a community page on Facebook, and we also have a handle for FIR on Bluesky. And then there are our individual ones, too. So thanks, everyone, for listening. And if Shel were here, he’d wrap this up by saying: that’ll be a 30 for this episode of For Immediate Release. The post FIR #480: Reflections on AI, Ethics, and the Role of Communicators appeared first on FIR Podcast Network.
Sep 8, 2025 • 19min

ALP 280: Handling early client contract terminations with finesse

In this episode, Chip and Gini discuss how to manage situations where clients want to terminate contracts early. Gini emphasizes the importance of having a strong contract with clear termination clauses, which can serve as leverage in negotiations. They share experiences and strategies for recovering outstanding invoices, including offering concessions and being flexible with payment arrangements. The duo also cautions against aggressive tactics like public shaming for non-payment and stresses the importance of maintaining professionalism to avoid burning bridges. They conclude with practical advice on managing accounts receivable and resolving disputes amicably. [read the transcript] The post ALP 280: Handling early client contract terminations with finesse appeared first on FIR Podcast Network.
Sep 1, 2025 • 27min

FIR #479: Hacking AI Optimization vs. Doing the Hard Work

Posts and videos featuring Generative Engine Optimization (GEO) hacks and formulas are flooding the web. We reported recently on one such hack focusing on press releases. But when you consider the kind of content on which the AI models rely for their answers, it may be more efficient to revert to good, old-fashioned PR and marketing.

Links from this episode:

- 2025 Report Reveals Average B2B Content Volume Triples: Budgets Barely Budge
- ChatGPT is sending less traffic to websites – down 52% in a month
- How a content brand became a trusted resource for LLMs
- Networks, Not AI or Search, Are the #1 Trusted Source Amid Information Overload
- Many are sharing charts about Reddit and Wikipedia dominating AI search mentions, desperately trying to crack the code
- AI-Powered Search: Adapting Your SEO Strategy
- How AI is reshaping SEO: Challenges, opportunities, and brand strategies for 2025
- 2025 AI SERP Changes: New Strategies To Gain Local Search Visibility
- Google users are less likely to click on links when an AI summary appears in the results
- AI Mode in Search gets new agentic features and expands globally

The next monthly, long-form episode of FIR will drop on Monday, September 29. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

Raw Transcript: Hi everyone, and welcome to For Immediate Release. This is episode 479. I’m Neville Hobson.
And I’m Shel Holtz. On Thursday, Neville, you and I are going to interview Stephanie Grober, who is the marketing and PR director at Horowitz Agency. This is a marketing agency that works with law firms, production companies, and other professional service providers in the US and Canada. And we’re going to be talking about GEO, generative engine optimization. I’m not altogether sure, but it’s a hot topic, and I thought I would take today’s episode to set the stage for that, because we’ve all seen the headlines recently: ChatGPT traffic referrals to websites plummeted more than 50% in a single month this summer, and that’s not a blip. It’s a structural change in how these large language models are surfacing content. [00:01:00] OpenAI tweaked its ranking, and suddenly ChatGPT began citing fewer sources, leaning more heavily on places like Wikipedia and Reddit. Useful for users, yeah. But if you’re a brand counting on visibility, it’s a gut punch. And meanwhile, the volume of content keeps exploding. A new B2B study found content production has tripled year over year, which could be partly attributable to marketers flooding the zone with content in the hopes LLMs will hoover it up and they’ll show up in AI search results. Interestingly, that tripling of content volume has not been accompanied by commensurate budget increases. Mm-hmm. But we’re producing more content than ever, but it’s not necessarily better content, or content that LLMs are actually going to use. So no surprise that there’s a scramble for the supposed hack that will unlock, sorry. Unlock. Okay. Unlock [00:02:00] rhymes with hack. What can I say? So no surprise that there’s a scramble for the supposed hack that will unlock generative engine optimization. GEO. Some companies are starting to figure out that it’s not about gaming the algorithm, though. It’s about trust. Sylvia La [surname unclear], the chief marketing officer at Kenji, shared a fascinating case study on LinkedIn.
Her team created The Sequence, a standalone content brand with its own domain, separate from the corporate site. The idea was simple: create a community-driven media hub. Human, high quality, free of fluff. The unexpected bonus that came from this is that LLMs started treating The Sequence as an external authority. When asked about Kenji, ChatGPT doesn’t just reference the company, it references The Sequence. In other words, by building a trusted resource that stands on its own, apart from the central [00:03:00] brand site, they built credibility, not just with their human audience but with the algorithms too. That aligns perfectly with something else I saw from Liza Adams, another CMO, who pointed out that the reason Wikipedia and Reddit dominate AI citations isn’t mysterious: it’s because they directly answer real questions using the same plain language real people use. Adams contrasts two types of marketing teams: the ones who do the hard work of auditing their content, listening to customer language, and creating genuinely helpful answers, and the ones chasing quick fixes and shortcuts. Her takeaway is that there is no algorithm hack. AI amplifies what’s already there. If your content is genuinely useful, trustworthy, and present in the watering holes your customers rely on, the algorithms will pick it up. If it’s not, there’s no trick that’ll save you. Now, add one more layer to this: what people themselves actually trust. According [00:04:00] to new LinkedIn research, networks, our peers, our colleagues, people we know, still rank as the number one trusted source of information, far ahead of AI searches or even traditional search engines. In fact, 43% of professionals say their network is their first stop when they need advice at work, and nearly two thirds say colleagues help them make decisions more confidently. Now,
think about that for a second. While marketers obsess over how to get cited by ChatGPT or Gemini, the real influence continues to live in trusted human connections. That’s also changing how brands approach content. LinkedIn reports that 80% of B2B marketers are increasing investment in community-driven content, bringing in creators, employees, and subject matter experts. They understand that credibility doesn’t come from corporate channels alone. It comes from trusted voices people want to hear from. So where does that leave us? In a [00:05:00] messy transition is where that leaves us. Generative AI referrals are volatile. Content volume is ballooning. Everyone’s chasing GEO or AEO or AI-SEO or whatever acronym you want to use, but the truth is the winning strategies are old school in the best sense. Build trust, answer real questions, and put your content where your community is already paying attention. Kenji’s story shows that if you can create a true content brand, it can earn authority with humans and with machines. Liza Adams reminds us that shortcuts don’t work; answering real questions in real language does. And LinkedIn’s research makes it clear that even in the age of AI, our human networks still shape the bulk of decisions. So stop chasing algorithms; earn trust. That’s the real path to visibility in a generative search world. Good. Good assessment there, Shel. I [00:06:00] think we’ve talked about this a lot on recent episodes of this podcast. One that comes to mind recently, which is in this kind of area, was the episode in which we talked about the new form of press release that some people are discussing, one that is aimed at algorithms, not humans. And this to me is in the same kind of territory. In that context, it’s a hack. So you, yeah, you call it that if you like, but I see it more as: this is the direction of travel, and to the critics who say, no, no, no, no, no, this is not gonna take off: trust me, it really is.
We’re seeing signs everywhere that this is happening. Sorry, not Google, it was Pew Research at the end of July that had quick survey results. Again, we’ve talked about this stuff on this podcast: Google users are less likely to click on links when an AI summary appears in the results. That’s been a kind of recurring topic: what’s a communicator gonna do about that? What it is [00:07:00] about is making your content more discoverable than it currently is, in light of this trend that is happening. That means you’ve gotta rethink your approach to all of that kind of acronym soup that you reeled off just now, Shel. Maybe think of, Old MacDonald had a farm, right? E-I-E-I-O, right? You remember that? Search engine optimization and generative engine optimization: the latter seems to be the common descriptor of all that I have seen, particularly when I was looking into this area. But the reality is that change is underway. So, if I contrast it this way, the purpose of SEO, if you like, is to improve a website so that it ranks higher in search engine results. That’s essentially what SEO’s all about. GEO, on the other hand, is this new idea, and as you mentioned, it’s shaped by the rise of generative AI tools like ChatGPT, et cetera. Instead of optimizing for search engines and websites, [00:08:00] GEO is about making content more discoverable and usable by AI systems that generate answers. Now, therein lies exactly the threat, if you will, if you wanna see it as a threat. I see it as evolution, and I see it as opportunity for those looking for angles, looking for leverage, looking for an opening. You know, if your content is gonna be more discoverable and usable by AI systems that generate answers, you are on a winning streak in that case, because this is the definite trend. So it means you’ve gotta structure information in ways AI models can easily interpret and cite.
Almost that same phrase was used in that topic I mentioned that we discussed on a previous episode, about the kind of evolutionary press release that’s emerging for the AI era, to nearly coin an already coined phrase. This, in my view, is the trend that’s going to knock the existing model sideways. And if you are not [00:09:00] looking into how you need to be part of this, you’ve got hard work ahead catching up with people who are doing that. Key differences: this is quite simple. SEO attracts people by clicking on links; that’s the point of it. GEO works on AI engines to ensure your content is included in the answers they generate. Think about that. And that’s outside your control in the sense of the generation, but it’s totally within your control to make your content discoverable, so that it is included in those generated answers. So, the format: with SEO, users see a ranked list of websites. With GEO, people see a synthesized answer, without clicking through to the original site. That’s, you’re not gonna change that. It’s interesting, there are many similarities, and this is a kind of paradox. Both aim to increase visibility of your content. Both rely on credibility. Both evolve constantly as the technology changes. And both can affect reputation: if you’re doing it badly, like keyword stuffing for SEO [00:10:00] or gaming AI for GEO, you risk undermining trust, and that’s the underlying point of it all. So I’m looking forward to this discussion with Stephanie on Thursday, by the way, because when I did a little quick lookup, knowing what Stephanie was gonna talk to us about, I saw a lot of criticism of GEO. Some people call it snake oil: this is not genuine, it’s not authentic, this is definitely not what you should be doing. I’m thinking déjà vu, in the sense of what I’ve seen in the past when something new appears in the PR realm, if you like.
I think it is something definitely to pay attention to. I think what you mentioned, and the links we’ll include in the show notes, indicates that SEO is not gonna suddenly drop dead. Not at all. But the future is not links on websites. Not at all, not really, I don’t believe. We’ve talked about this again on previous episodes, about AI Overviews from Google. I see conflicting stories, and I think you might have referenced something at the [00:11:00] beginning, Shel, that traffic, in that area, is diminishing recently, according to some reports. But the truth of the matter is that, ask almost anyone, there are surveys saying this: people prefer that. Users, let’s call them that, right? Customers, or whatever you call people looking for something: there’s a result, you see what you need in that little box on the right-hand side of your screen, and that’s as far as you go. You don’t click on anything, necessarily. And that’s people’s preference. Why are you gonna go to a website when history shows you that there’s not much you can trust? So you’re looking, for instance, and I’ve used this example before, you’re looking for electric cars, ’cause that’s a hot topic, and here in the UK the government just announced a rebate to stimulate people buying electric cars. So you go online to look at what the deals are in electric cars, who’s got this particular model. And as you do a traditional search, the chances are high that amongst the results you’re gonna get in that list are gonna be [00:12:00] car dealers and sponsors; that word appears everywhere. I wouldn’t go there at all, now knowing that there’s this far more trustworthy alternative that’s likely to give me what I want, without the feeling that you’re being manipulated by an algorithm because they’ve paid money to have their listing appear. This is where we’re at in the territory, it seems to me, Shel.
So again, with Stephanie, I’m looking forward to hearing how she can put this forward as a good explainer for communicators. And this is the message we mentioned before: you’ve gotta really make sure your content is clear, credible, and discoverable. That’s not a new thing. You’ve always had to do that, right? But now it’s becoming more likely that if it’s not, you’ll be ignored, and that’s not good if you’re a brand. No. And you said SEO’s not dead. And it’s not gonna die anytime soon. Maybe, well, it’s received a diagnosis and its physicians are trying to figure out if there’s anything they can do [00:13:00] to save it, but it has time yet to enjoy what’s left of its life. I’ve heard people talking about the notion that one day the web is going to be designed for bots, that is, it’s going to be bots that are going to be visiting websites and gathering information and fueling the research and the searching that humans do. I don’t think that’s accurate. I think there are always going to be reasons people want to visit websites. I mean, is the Onion website really just there for bots to hoover up its content? No. People wanna go have a laugh. People wanna go watch videos on YouTube. There are reasons that people are going to want to go visit some kinds of websites. But look at that item that we reported on, about a new type of press release to accommodate the algorithms. And the more I think about that, the more I believe that if you need to revise the approach you take to press [00:14:00] releases to accommodate large language models, then the old approach you were taking to press releases probably sucked. Because what is it that appeals to the large language models? Well, first of all, they’re not sharing their algorithms with us. When Google makes a change to Google, they write papers about it. They give the new update a name, and all the SEO companies gobble up this information and make changes to their work in order to accommodate the changes that Google’s made. What OpenAI and Google Gemini, Google DeepMind, and the other players in this space are doing with their algorithms is a black box. We don’t know. So if you’re trying to game the algorithm, let’s face it, you’re guessing. What we do know is roughly how these models work, not in any great detail. Even the people [00:15:00] who built them don’t understand how they work in any great detail. But we know that the knowledge that is collected in the training, and in the continuous training that goes on through adding the new stuff that is posted, that the bots are all out gathering, is all stored in neurons in a neural network. And it’s the association between the neurons, just like in our own brains, that produces the next token, the next word, the next sentence, the next paragraph in the results that you see. And we do know that a lot of these results are coming from Reddit and from Wikipedia, and we do know, or at least it is a very strongly supported bit of surmising, that it’s because they talk in plain language and answer questions that people wanna know about. So if you consider that website that [00:16:00] the brand Kenji came up with, that was a content-driven site separate from the brand, that was community-driven: it worked. It satisfied both the need to build trust and interest among your community, and the LLMs liked it and started citing it when people asked about Kenji. So it seems to me that rather than try to do something different, the only thing different you should be doing, if you’ve been doing it wrong up till now, is to go get yourself a good primer on marketing and a good primer on public relations from maybe the last 10 years, and read that and do that, because that’s what the LLMs like. I don’t think there is a magic bullet. I think that what we are good at, when we do it the best that it [00:17:00] can be done, that’s what works.
And that’s the direction we need to head. I agree, I totally agree. I think part of the problem, though, certainly for communicators, is the how: how you can do that. So one question I have, given the shift that’s underway: SEO is about ranking in a list of links. GEO is about surfacing in AI-generated summaries, where the links may be secondary. So how do communicators adapt when fewer people click through to the source? No matter what the reasons might be, that’s the question: how do you do this? It’s the how; we need to help them understand it. And I don’t have the answer to the how myself either. Well, I think the answer goes back to basic strategic planning, right? Sure. What is your goal in the first place, and what strategy are you going to employ in order to achieve that goal? And if the old strategy of SEO doesn’t work, if the even older strategy of having a website doesn’t do the trick anymore, what else do you do? [00:18:00] You know, back to the drawing board. But that’s what we do. It is true, and we’ve mentioned this before: for instance, AI models are more likely to cite sources with consistent metadata and explicit facts. That’s common sense, I think. But publishers and brands worry: if AI provides the answer directly, what incentive is there to visit the source? So this is a circular point; it goes back to the other one, how communicators adapt when fewer people click through to the source. Like all things, you need a plan. But what is certainly clear, and I think that point I just mentioned shows it, is that this would undermine advertising-based revenue models, right? It doesn’t mean that next week it’s all going to collapse, not at all. But you can see the writing on the wall. I can certainly see the writing on the wall about this, and I’m kind of nonplussed by the deniers. Maybe the vax people too deny all that, I don’t know. But I mean, it just seems to me crazy.
You can see the changes right in front of your eyes, whether you like it or not. So I think you mentioned [00:19:00] gaming, as I mentioned in my earlier comment: gaming the content for LLMs, you know, AI-friendly but misleading content. I think that is still a big issue to be concerned about, given how that is already going on. And we already hear the kind of warning signs about it, and these are the things that are in the realm of, you know, what happens if the robots do take over, we’re all gonna die, et cetera. But it means that if AI-generated content is prompted by people with bad intent, these famous bad actors we hear about all the time, and the output is based on that bad input, and then others reference that over time, what is that giving us? Bullshit, basically. Misinformation, disinformation. How can we trust anything? And we’re already seeing this demonstrated, and we talked about this in a recent episode, in Wikipedia, where they are battling the [00:20:00] reality of the influx of misinformation and disinformation in content written by others that is faster to appear than the moderation system can handle. So they’re now coming out with a quick-delete methodology; that is the short-term solution. I don’t know what the long-term one is for Wikipedia, but that directly impacts trust. And I think if that happens, then we’re in real trouble. But we’re seeing this; I see this daily here. The BBC is very much at the front page on this, along with a number of other media companies in the UK. The verification team there: I gather there’s now like a hundred people on this team. I’m not sure where I saw that, so it may not be true, but it’s a lot of people who literally analyze everything. There was a story I saw this morning, analyzing all this stuff over the weekend, I’m sure you saw it, about: has Trump died? Is he still around? Is he still alive?
And that’s been looked into, to get to the source of where that started. I’ve not seen the [00:21:00] conclusion of that research, but Wikipedia’s definitely in the frame. Reddit I saw mentioned earlier, and every time I go to Reddit, and I use Reddit a lot because of things I’m interested in, it’s full of opinion, and then the counter-opinion, and then you get rabbit holes everywhere about, well, I heard it was this, and that’s not true. Worse are the ones who say, with authoritative-sounding words, that X is so-and-so, without any verification. People believe that stuff. That is part of the problem too; humans are too trusting in that regard. It’s a mess. And again, how do communicators adapt when fewer people click through to the source? That’s a kind of umbrella question to it all. It’s a difficult one. Well, I think the fact that fewer people are clicking through to a source means that the purpose of the source needs to change. You need it there because that’s what the LLMs grab in their training sets. So you [00:22:00] want to continually be working on that content. And there may come a day when you won’t care if a single human sees it, as long as it’s delivering the AI results that you need it to. Perhaps website design is going to diminish; only the sites that people actually do want to visit, for whatever reason, will get high-quality design. The rest will be, you know, more markup language, so that it’s even easier for the LLMs to grab. I don’t know where that’s headed; we’ll see, because I’m sure it will follow whatever ends up working. But the idea that you won’t need a website anymore is absurd. If you are generating revenue off of that website through advertising, you’re probably going to need to switch to something more community-driven, right? Because people will continue to go to community sites. Yeah. Because AI can’t replicate that. They can’t deliver that for you.
No, [00:23:00] I agree. And I don’t see websites dying away, even if some people say they will, because like you said, people will always want to do certain things, though they might well diminish nevertheless. But in all of that, you mentioned the point earlier, and you’ve mentioned this the other day, I think, in some of your recent experiences, that already there are tools, let’s call them AI agents, even though I don’t think they really are yet, that will go out to YouTube, take these 17 videos on a topic that you are really interested in, summarize it all from the transcripts, and present to you a summary of what it’s all about, with some actionable items for you to do, depending on what you’ve asked it to do for you. So, and I speak for myself here, I’ve kind of scrolled through YouTube looking for stuff myself, and that would be far more appealing than me going to the website. Well, I’ve actually created a tool that does that. And I [00:24:00] would imagine a lot of people would prefer to do that than go themselves to The Onion, the example you made, and have someone give you the list of what it’s all about. I’ve done that. I’ve used Google’s Opal tool to create a workflow where I enter the URL of a YouTube video. I say, this is a video that gives instructions on how to do X; go to the transcript and just give me the steps to do it. Yeah. I chose that video because I watched it, and it has the steps I want. What I don’t want to have to do is continually rewind as I’m going through these steps. So once I’ve seen it, I just want to see the steps articulated. And it works really well. I used it a few times. Then I got the Comet browser from Perplexity, where I can just do that as a prompt.
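The workflow Shel describes, take a video transcript and return just the steps, can be roughly approximated for simple cases without any AI at all. This is a minimal sketch; the step-marker list and the sample transcript are invented for illustration, and a real pipeline would first fetch the transcript from YouTube:

```python
import re

def extract_steps(transcript: str) -> list[str]:
    """Pull out lines that look like instructional steps
    ("step 1 ...", "first, ...", "next, ...") from transcript text."""
    markers = re.compile(
        r"^(step\s*\d+|first|second|third|next|then|finally)\b[,.:]?\s*",
        re.IGNORECASE,
    )
    steps = []
    for line in transcript.splitlines():
        line = line.strip()
        m = markers.match(line)
        if m:
            # Keep the text after the marker; fall back to the whole line.
            steps.append(line[m.end():].strip() or line)
    return steps

demo = """welcome to the video
first, open the settings panel
next, enable the beta features toggle
then restart the app
thanks for watching"""

for i, step in enumerate(extract_steps(demo), 1):
    print(f"{i}. {step}")
```

An LLM-based version replaces the regex with a prompt like the one Shel used, but the shape of the job, transcript in, ordered steps out, is the same.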
Now I don’t need a workflow. I just got that over the weekend, as in, you know, congratulations, you’re now in the trial, blah, blah. I’ve installed it, but I’ve not tried it yet; it’s on my to-do list. I’ll probably watch a couple of YouTube videos [00:25:00] to see everything you can do with it, because some of it is not obvious. I’ll try it. I suspect so. So this adds to what’s happening in the development phase of what’s going on in that wide world out there. And the pressure is definitely on to find better ways, more effective ways, to get the answers to what you are looking for, ways that are far more appealing to users. This is really what it comes down to. Yeah. What do you prefer to use? And you will ignore lists of links in the future because they’re not sufficient information for you without more work. So I find it a development in tools like ChatGPT, with the recent models, where they now do what Perplexity started doing quite a while ago, which is giving you the list of sources. So I did something the other day on a research project that I’m working on right at the moment: I told ChatGPT to give me the full list of sources, [00:26:00] which it did, with citations linked to the appropriate passages. Made life a lot easier. A lot easier. Not as good as Perplexity, but it’s getting there, and it didn’t do it until not long ago. Perplexity I use occasionally, but ChatGPT is my go-to for all these kinds of things, particularly model 5. Initially skeptical as I was, the thinking mode is really quite good. A bit slow, though, I have found. So this is part of the deal, right? Again, like I said, I’m looking forward to this conversation in the FIR interview we’ve got scheduled for Thursday. It might enlighten us a bit more. Looking forward to it myself, and until then, that’ll be a -30- for this episode of For Immediate Release.
The post FIR #479: Hacking AI Optimization vs. Doing the Hard Work appeared first on FIR Podcast Network.
Aug 25, 2025 • 1h 30min

FIR #478: When Silence Isn’t Golden

For a while, businesses were flexing their social responsibility muscles, weighing in on public policy matters that affected them or their stakeholders. These days, not so much, with leaders fearing reprisal for speaking out. But silence can have its own consequences. Also in this episode: The gap between AI expectations and reality; rent-a-mob services damage the fragile reputation of the public relations profession; too many people think AI is conscious, so we have to devise ways to reinforce among users that it’s not; Denmark is dealing with deepfakes by assigning citizens the copyright to their own likenesses; crediting photographers for the work you copied from the web won’t protect you from lawsuits for unauthorized use. In Dan York’s Tech Report, Dan shares updates on Mastodon’s (at last) introducing quote posts, and Bluesky’s response to a U.S. Supreme Court ruling upholding Mississippi’s law making full access to Bluesky (and other services) contingent upon an age check. Links from this episode: So far, AI Isn’t Taking Jobs or Generating Profit Companies Are Pouring Billions Into A.I. It Has Yet to Pay Off. Seizing the agentic AI advantage Not today, AI: Despite corporate hype, few signs that the tech is taking jobs — yet 1 in 6 workers pretend to use AI amid workplace pressures, survey finds We must build AI for people; not to be a person FIR Interview: Monsignor Paul Tighe on AI and Humanity The Wisdom of the Heart (Neville’s post on Monsignor Tighe’s remarks) As Rent-A-Mob “Protests” Rage, PRSA’s “Ethics” Board is AWOL Boom times for rent-a-mobs Fox News’ Lawrence Jones Presses Rent-A-Mob Company CEO Over Protests Denmark Aims to Use Copyright Law to Protect People From Deepfakes Denmark to tackle deepfakes by giving people copyright to their own features When Does Corporate Silence Backfire?
Home Depot keeps quiet on immigration raids outside its doors Facebook post on crediting photographers when you don’t have permission to use their content Unmasking the Copyright Trap: The Dark Side of AI Bots Links from Dan York’s Tech Report: Quote Posts Coming to Mastodon Our Response to Mississippi’s Age Assurance Law – Bluesky The next monthly, long-form episode of FIR will drop on Monday, September 29. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript: @nevillehobson (00:02) Hello everyone and welcome to For Immediate Release. This is episode 478, the monthly long-form edition for August 2025. I’m Neville Hobson. Shel Holtz (00:14) And I’m Shel Holtz, and we have six reports for you today. Hope you find them illuminating. And if you find any of them worthy of comment, I would hope that you would comment on them. There are a number of ways to comment on the content that you hear on For Immediate Release. You can send us an email to fircomments at gmail.com and attach an audio file if you like. You can record that audio file. On the FIR website, there’s a tab in the right-hand corner that says record voicemail, and you can record up to 90 seconds. You can record more than one; we know how to edit those things together. So send us your audio comments, but you can also leave comments on the show notes at FIRpodcastnetwork.com,
on the posts we make on LinkedIn and Facebook and Threads and Bluesky and Mastodon. You can comment in the FIR community on Facebook. There are lots of ways that you can share your opinion with us so that we can bake those into the show. And we also appreciate your ratings and reviews. So with those comment mechanisms out of the way, Neville, let’s hear about the episodes that we have recorded since our last monthly episode. @nevillehobson (01:33) We did five since then. Actually, it was four plus the last monthly. So we’ll start with that one. It’s episode 474 for July, the long-form episode. That one ran one hour, 33 minutes, so a bit shorter than we usually do for the month. Hefty, but good, as Donna would say. Yeah, exactly. Shel Holtz (01:52) We were terse. @nevillehobson (01:55) So we covered a number of topics related to AI; that was how we titled the episode show notes. AI is redefining public relations, driving a change in the way we craft press releases, PR is at the heart of AI optimization, and more. Good discussion; we had lots of topics. The links are brilliant; lots of content we linked to in that episode. Then we followed that. That was on the 28th of July that it was published. On the 29th, the day after that, we published an FIR interview with Monsignor Paul Tighe of the Vatican. That was on AI ethics and the role of humanity. It’s actually an intriguing topic. We dove into a document called Antiqua et Nova that was really the anchor point for the conversation; it talked about the comparison of human intelligence with artificial intelligence, and that drove that discussion. He was a great guest on the show, Shel, and it’s intriguing. There’s more coming about that in the coming weeks, by the way, because I’ve been posting follow-ups to that in little video clips from that interview, and there’s more of that kind of thing coming soon. So we have a comment, right? Shel Holtz (03:06) We do. We do, from Mary Hills out of Chicago.
She’s an IABC fellow who says: insightful and stimulating discussion. Thank you to the extraordinary host team for making this happen and to Monsignor Tighe for sharing his insights. To the question, my view as a ComPro is to build bridges to discover options to move forward and choose the best way. Think discursive techniques, sociopositive climates, and our ability to synthesize data and information. It taps into those intangible assets we bring to our work and are inherently in us. @nevillehobson (03:45) Good comment. Reminds me, by the way, related to what you were talking about, how to comment, before we started this: most of the comments we seem to get, certainly in the last six months, if not more, have been on LinkedIn. It’s a great place for discussion, but that’s a business network. You need to be a member to see them. So if you’re not a member and you want to comment, join LinkedIn; otherwise you won’t be able to. Shel Holtz (03:56) Mm-hmm. It’s free. @nevillehobson (04:07) Yeah, it is. So then next, well, you’ve got a paid option, but generally it’s free unless you take out the paid option. I’ve got the paid option too, just as a little aside there. So we followed that on the 4th of August with episode 475; the title of the post was Algorithms Got You Down? Get Retro with RSS. The rise of social media news feeds had rendered RSS less useful for many people, we said, and declining usage led Google to sunset its Reader. Shel Holtz (04:09) Not for me, I pay for mine, but. Yeah, that’s right. Exactly. @nevillehobson (04:34) But RSS feeds never went away. And we explored that a bit. Most people don’t know that with all the newsletters they subscribe to, the Substacks or whatever publication it is, RSS is driving a lot of how they get the content that they include in those publications. So it’s part of the plumbing. And it always has been; even now, people don’t think about this. But we had an interesting perspective on that, on how to use RSS afresh in a slightly different way.
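For anyone curious what that RSS "plumbing" looks like in practice, here is a minimal sketch of reading items out of a feed using only Python's standard library. The feed XML here is a made-up example; real code would fetch it over HTTP and would often use a dedicated library such as feedparser instead:

```python
import xml.etree.ElementTree as ET

# A tiny inline RSS 2.0 document standing in for a real fetched feed.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item><title>Episode 1</title><link>https://example.com/1</link></item>
    <item><title>Episode 2</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def list_items(feed_xml: str) -> list[tuple[str, str]]:
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [
        (item.findtext("title", ""), item.findtext("link", ""))
        for item in root.iter("item")
    ]

for title, link in list_items(FEED):
    print(title, "->", link)
```

Newsletter platforms and podcast apps do essentially this on a schedule: poll the feed URL, diff the items against what they last saw, and pull in anything new.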
476 on the 12th of August: rewiring the consulting business for AI. We reviewed the actions of several firms and agencies and discussed what might come next for consultants. There’s been a change, almost literally changing business models, with the rise of AI, agentic AI in particular. So we explored that; a good conversation. And finally, 477 on the 18th of August: de-sloppifying Wikipedia. That’s a heck of a descriptor you put in the headline, de-sloppifying. Wikipedia introduced a speedy deletion policy for AI slop articles. It’s actually a bigger deal than most of us would realize if we ever thought about it. Wikipedia, the user-generated content encyclopedia, is addressing, or has been trying to address for a while, Shel Holtz (05:30) I’m glad you liked it. @nevillehobson (05:51) the rise of AI-generated content that makes things very difficult in a collaborative editing environment with volunteer editors that is all about consensual agreement to change or addition. That takes a while. This is at light speed by comparison to that procedure. They’re coming up with a speedy deletion policy, and that’s getting some discussion too. But Wikipedia is an important place online. It has been for a long time a kind of natural first place that shows up when you’re looking for information about a company, an individual, whatever it might be, a subject of some type. So trust is key to what you see there. So we’ve had quite a bit of a conversation on that. And that wraps up what we’ve been doing since the last episode. Shel Holtz (06:42) We did have a comment on 477, this from Mark Hillary, who says: got to say I’m not familiar with Trust Cafe. @nevillehobson (06:44) Oh, we did. You’re right. We did. Yes. Yep. Okay, good comment. No, me neither. That was Mark Hillary. I’m surprised I didn’t leave a comment in reply to him because I know him, but obviously I didn’t see the comments at the time. Shel Holtz (07:02) Now I have to go look that up. Well, it’s waiting.
It won’t go anywhere. We also, in the last week, recorded the most recent episode of Circle of Fellows, the monthly panel discussion with four fellows of the International Association of Business Communicators. This was episode 119 of this monthly panel discussion, and it was on sustainability, communicating sustainability. @nevillehobson (07:15) Yeah. Okay. Shel Holtz (07:37) The panel included Zora Artis from Australia, Bonnie Caver from Texas, Brent Carey from Toronto, and Martha Muzychka from the far east of Canada. The next Circle of Fellows is scheduled for September 18th at 10 a.m. I tell you all of this because you can watch it in real time and participate in the conversation. This one is going to be about hybrid communications and hybrid workplaces. It will be moderated by Brad Whitworth, and three of the four panelists have been identified so far: Priya Bates, Andrea Greenhouse, and Ritzy Ronquillo. So far, Brad, the moderator, is the only American on that panel: Priya from Toronto, Andrea from Toronto, and Ritzy from the Philippines. So it’ll be a good international discussion on hybrid. And that will lead us into our reports for this month, right after this. But one of the biggest workplace stories right now is the widening gap between the promise of AI and the reality employees are living day to day. The headlines have been flooding the zone lately. MIT researchers report that 95% of generative AI pilots in companies are failing. The New York Times recently noted that businesses have poured billions into AI without seeing the payoff. And Gartner’s latest hype cycle has generative AI sliding into the famous trough of disillusionment. By the way, that MIT report is worth a healthy dose of skepticism; they interviewed something like 50 people to draw those conclusions. But the trend is pretty clear: the number of pilots that are succeeding in companies is definitely on the low end.
But while companies wrestle with ROI, employees are wrestling with something more personal: uncertainty. Pew research found that more than half of US workers worry about AI’s impact on their jobs, while most haven’t actually used AI at work much yet. NBC reported that despite the hype, there’s little evidence of widespread job loss so far. Still, the fears are real, and they’re being compounded by mixed signals inside organizations. Here’s one example I read about. A sales team was told to make AI part of every proposal, but they weren’t offered any guidance, any training, any process change. As a result, some team members just kind of quietly opened ChatGPT and used it to generate some bullet points. Others copied old proposals and slapped on an AI-enhanced label. A few admitted they just pretended to use AI to avoid looking like they were behind the curve, which, by the way, lines up with a finding from HR Dive that one in six workers say they pretend to use AI because of workplace pressure. That’s not innovation, that’s performance theater. This is where communicators need to step in. Employees don’t need more hype, they need transparency. They need to hear that most pilots fail before they succeed. They need clarity about how AI will really fit into their workflows, and they need reassurance that the company has a plan for reskilling, not just replacing, its people. So for managers, and I am a firm believer that we need to work with managers to help them communicate with their employees, here’s a simple talk track you can put in their hands right away. So share this with managers on your teams. First, AI is a tool we’re still figuring out; your input on what works and what doesn’t is critical. Second, we’re not expecting you to be experts overnight; training and support will come before requirements. And third, your job isn’t disappearing tomorrow; let’s focus on how these tools can take that busy work off your plate.
And for communicators thinking about the next 30 days, consider a quick communication action plan. In week one, launch a listening tour: ask employees how they feel about AI and where they see potential. Week two, share those findings in plain language, including what employees are worried about. Week three, host AI office hours with your IT team or HR partners to answer real questions. And in week four, publish a simple playbook: what’s okay, what’s not, and how employees will be supported as the tech evolves. That should help you cut through the hype while keeping employees engaged. The technology may still be finding its footing, but if communicators help employees feel informed, supported, and included, the organization will be in a far better position to capture real value when AI does start delivering on its promises at the enterprise level. @nevillehobson (12:22) Interesting statistics there, Shel. Listening to that advice you gave, it just made me think straight away, and indeed looking at the HR Dive report in particular with what they’re talking about: 75% of workers said they’re expected to use AI at work, whether officially or unofficially. That’s a bit alarming, I think. Some people said they feel pressured and uncomfortable, and some said they pretend to use it rather than push back. So that’s part of the landscape. And that seems to me to be what needs addressing first and foremost, because if that is the situation in some organizations, then communications has got a real uphill struggle to persuade employees to do all the things that you mentioned. So, you know, the comms team could do all those things. Week one, we do this. Week two. But unless you get the engagement from employees that makes it worthwhile, it’s not worth doing, if the culture in the organization means you’re not really seeing the right support from leaders. So that is probably the fundamental that needs addressing. It’s a sad fact, isn’t it?
If that is the climate still that leads to this kind of reporting. I don’t hear similar in the UK, but then again, I don’t think there’s so much of this kind of research going on as there is in the US, plus the numbers are smaller here. This is very US-centric. This one in HR Dive is a thousand people they talked to. Nearly 60% said they use AI daily. I’m surprised; I thought it might be higher than that. So that’s all part of the picture there. That makes it a real struggle to implement what you’ve suggested. What do you think? Is it a real hurdle? Shel Holtz (14:08) I think it is a real hurdle. And I think one of the things that we need to acknowledge is that it’s leaders in organizations who are driving the adoption of AI. Let’s be clear: it’s not IT behind the AI push. It’s leaders who see the potential for doing more with less and earning more and everything else that AI has promised, and they are jamming it down the organization’s throats. I have mentioned before on the show that I recently read a book called How Big Things Get Done. It’s mainly about building. It’s written by a Danish engineering professor who has the world’s largest database of megaprojects. But the conclusion that he draws is that the projects that succeed are the ones where they put all of the time into the planning upfront. If you jump right into the building, you get disasters like the California high-speed rail and the Sydney Opera House, which I didn’t realize was a disaster until I read about it. But my God, what a disaster. And the ones that succeed are the ones that spend the time on the planning. The Empire State Building went up fast; I don’t remember if it was two years, but they put a lot of time into what we call pre-construction. And I think that’s not happening with AI in the enterprise right now. I think there are leaders who are saying we have to be AI-first. We have to lean into AI. We need to start generating revenue and cutting headcount. So let’s just make it happen.
And there’s no planning. There’s no setting the environment for employees. There’s very little training. Although I do see that there is a shift in the dollars that are being invested in AI, moving to the employee side and away from the tool side, which is heartening. Employees are concerned about this because they’re not getting the training. They’re not getting the guidance. They’re not seeing the plan. All they’re hearing is, we’ve got to start using this. And I think that would leave people concerned. I think that explains a lot of the angst that we’re hearing about among employees. @nevillehobson (16:19) Yeah, that makes sense. I mean, again, just glancing through these statistics in the HR Dive report, it’s interesting, the contrast that I’m reading. It says 84% of workers said they feel more productive using AI. 71% said they know how to use it efficiently. They report less burnout, less work stress, better job satisfaction. Nearly a third said they feel less lonely. Shel Holtz (16:33) That would be me, by the way. @nevillehobson (16:43) Those are the ones who’ve developed a relationship with ChatGPT, I know. And a quarter said they collaborate more. 4o in particular, I was right there, I tell you. But then in contrast, some workers said they’re struggling to keep up. One in four feel often or always overwhelmed by AI developments. And a third said that learning, using, and checking AI takes as much time as their previous approach to work. So, 25% of those expected to use AI at work said they have received no training. Shel Holtz (16:47) Yeah, the 4o in particular. @nevillehobson (17:11) Another 25% said they did receive training, and a third were given dedicated time at work to learn AI skills. So it’s not all bad. That’s a fact. But it goes on; there’s something from Deloitte about AI development: a disconnect has emerged, where some people are pretending to understand the tech and others have declined to prioritize it.
So you’ve got a real mixed bag of landscapes, if you like, that need, well. To me, it seems that you need to identify this and figure out how you’re going to address it. Because the contrast in the data, it seems to me: you’ve got a high percentage saying they’re more productive, others struggling to keep up, others not getting any training at all. You mentioned those construction examples, like the Empire State Building going up real fast. The reality with AI is that, to coin a corny phrase again, I suppose, things are developing at light speed. Things are happening so fast that it is hard to keep up. So the pressure is there, particularly in the kind of more relaxed environments of today, more informality, less formality, where the control has vanished from top down, and anyone can get access to information about literally anything; just go online. And so people are finding out about these things. They’re exposed to, this is the latest AI, look at this one, and they hear from their peers and so forth. And unless you’ve got a credible resource that is appealing to people, they’re going to do their own thing, particularly if they don’t feel they’re getting any kind of support on how to implement all this stuff. So this is quite a challenge for communicators. But I think it’s a bigger challenge organizationally, in leadership, where you’ve got this challenge that doesn’t seem to be being addressed by many companies. And I would stress that this is not widespread. I don’t see anything in here that tells me this is the majority overall in organizations in the US, in spite of some of these percentages that suggest otherwise. But it is definitely a situation that is not good for the organization. And surely that must be apparent to everyone, I suppose.
Shel Holtz (19:32) You know, I would hope, but I would also hope that communicators step up and start documenting what’s going on in their organizations and feeding that back up, representing the employee voice to the leadership of the organization, so that maybe they’ll start taking a step back and thinking about how to do this strategically, because it hasn’t been strategic to this point. As employees read about these claims of 95% pilot failure, those who are not really enthusiastic about AI will be able to use that as an excuse for not embracing it. Well, it doesn’t work anyway, and it’s not really making a difference, and companies aren’t achieving any ROI. So why should I spend time on this? It’s probably going to be gone in six months, right? And I was listening to an interview with Demis Hassabis, the CEO of Google DeepMind. This was on the Lex Fridman podcast, a long two-and-a-half-hour interview, but great. One of the things that came up, as Lex Fridman brought it up: he said, I have a friend who studies cuneiform, ancient images carved on stone, right? And he didn’t know a thing about AI. He’d barely heard about it. And… @nevillehobson (20:41) Chip show. Shel Holtz (20:48) It was Hassabis who made the point. He said, you know, there are a lot of us who are talking about this and enthusiastic, and if you spend time on X, for example, everything is AI all the time, and we lose sight of the fact that there is a huge part of the population that is blissfully unaware of all of this still. So there’s that to deal with too. @nevillehobson (21:10) A challenge without doubt. Okay, so speaking of AI, one of the big AI stories this month comes from Mustafa Suleyman, the CEO of Microsoft AI. He’s written a long essay with a striking title, We Must Build AI for People, Not to Be a Person. In it, he raises a concern about what he calls seemingly conscious AI. These are systems that won’t actually be conscious, but will be so convincing
Shel Holtz (21:16) Are we? @nevillehobson (21:40) that people will start to treat them as if they are. He argues that this isn’t a distant science fiction scenario. With today’s models, long-term memory, and the ability to generate distinct personalities, it could arrive in just a few years. Already some people project feelings onto their chatbots, seeing them as partners, friends, or even divine beings. We’ve been hearing a lot about that recently. I’ll hold my hand up: I had a great relationship with my good friend and assistant, ChatGPT-4o. I was not happy with the move to ChatGPT-5, which ditched all of that, and I felt like I was talking to someone I didn’t know at all, or who didn’t know me. So I get that. But Suleyman, in his essay, warns that this trend could escalate into campaigns for AI rights or AI citizenship, which would be a dangerous distraction, he says. Consciousness, he points out, is at the core of human dignity and legal personhood, and confusing this by attributing it to machines risks creating new forms of polarization and deep social disruption. But what stood out most for me wasn’t the alarm over AI psychosis that some commentators have picked up on. It was Suleyman’s North Star. He says his goal is to create AI that makes us more human, that deepens our trust and understanding of one another and strengthens our connections to the real world. He describes Microsoft’s generative AI chatbot, Copilot, as a case study: millions of positive, even life-changing interactions every day, carefully designed to avoid overstepping into false claims of consciousness or emotion. He argues that companies need to build guardrails into their systems so that users are gently reminded of AI’s boundaries, that it doesn’t actually feel, suffer, or have desires. This is all about making AI supportive, useful, and empowering without crossing into the illusion of personhood.
Now this resonates strongly in my mind with our recent FIR interview with Monsignor Paul Tighe from the Vatican. He too emphasized that AI must be in service of humanity, not replacing or competing with it, but reinforcing dignity, ethics and responsibility. And it echoes strongly something I wrote following the publication of the FIR interview about the wisdom of the heart, the core idea that we should keep empathy, values and human connection at the center of AI adoption. It’s a central concept in Antiqua et Nova, the Vatican’s paper published earlier this year comparing artificial intelligence and human intelligence. So while the headline debate might be about whether AI can seem conscious, the bigger conversation, and the one I think we really should have, is how we ensure that AI is built in ways that help us be more human, not less. What strikes me is how Suleyman, Paul Tighe, and even our own conversations all point in the same direction: AI should serve people, not imitate them. But in practical terms, how do we embed that principle in the way businesses and communicators talk about AI? Thoughts? Shel Holtz (24:43) It’s an interesting conundrum, largely because we are told by experts like Ethan Mollick, the professor at the Wharton School in Pennsylvania, who is one of the leading posters on LinkedIn about AI and AI research, that the best way to get great results from AI is to treat it like a human and engage in conversation with it. And I find that to be true. I find that giving it a prompt, getting a response, and letting it go at that is not nearly as good as a conversation, a back and forth, asking for refinements and additions and posing questions and the like. And the more we have conversations with it and treat it like a human, the easier it’s going to be to slide down that slope into perceiving it to be a person. We’re hearing from a lot of people who do believe that it’s conscious already.
I mean, not among the AI engineering community, but you hear tales of people who are convinced that there is a consciousness there, and there is absolutely not. But it mimics humanity pretty well, and it’s going to get much, much better at it. As Mollick has said, at any point, the tool that you’re using today is the worst one you’ll ever use, because they’re just going to continue to get better. So getting people to not see them as conscious, I think, is going to be a challenge. And it’s not one that I think a lot of people are thinking about much. They’re looking at the productivity gains and other dimensions of this. Certainly people are looking at the harm; I mean, there’s a lot of conversation out there among the doomers, as they’re called, about what kind of safety measures are being considered as these models evolve. But specifically this issue of treating it like a human, thinking of it as a person with a consciousness, I don’t think there’s a lot of attention being paid to that and to what the steps are going to be to mitigate it. @nevillehobson (26:52) Yeah, interesting. I have great respect for Ethan Mollick, I must admit. I read a lot of what he says, but I utterly disagree with this whole point about treating it as if it is a person. That’s completely and utterly counter to the whole notion of the wisdom of the heart, which I think is a magnificent way to look at this: the dignity of the human being is at the center of what we do with AI. So we do not pretend it’s like a human at all. It is a tool that we can build a relationship with, but we don’t consider it to be like a person at all. And the point is not about how it develops. The point is how we develop Shel Holtz (27:33) Sure, there’s a difference between considering it @nevillehobson (27:41) in how we use this, not how it’s developing, because we are the ones who are enabling it to develop through all the tools and activities we carry on.
And the missing piece in all of that is: what about the people? What about the humanity here? Everyone who talks about this, and Ethan Mollick seems to be one of them too, talks about the benefits we get from using an AI. It’ll make us more money. It gives us better market share. We enable people to do these things better, et cetera, et cetera. And yet, reflecting on your report just prior to this, there are many people in organizations who feel ignored, who feel overwhelmed, who are unhappy with this. There’s not enough explanation of what the benefits are, and those tend to be couched as: these are the benefits for the organization, and for the employees who work there, and for the customers who buy our products, and so forth. So I think we have to develop a way of thinking that gives a different focus to this than the one we are being pressured to accept, I suppose you could argue. There are strong voices arguing the other way, I get that. And like you said, I truly find it extraordinary that there are people who say, yeah, they’re sentient, these are like humans. Not at all. They’re algorithms, a bit of software. That’s it. So this is not about a Luddite approach to technology at all. It’s not about thinking it’s the Terminator and Skynet and all that kind of stuff. No, not at all. It’s the moral and philosophical lens that is missing from all of this. And that is what we need to bring into our conversations about this: the element that is missing largely everywhere you look. Shel Holtz (29:27) It is. I still think that most of the time I’m engaging with a model, I’m having a conversation with it. I mean, if I’m looking for a simple fact, I’ll go to Perplexity and get my answer. But if I’m developing a strategy, for example, which is something that I use AI to help me with, I’ll tell you, I have created a custom GPT that is a senior communication consultant. It took me about four hours to build this out with all of the instruction set.
I don’t have the budget to work with a consulting organization, and there’s nobody who is higher in the hierarchy than me in communications where I work. So if I want to bounce my ideas off a senior communications professional, I had to create one. So I did. And I didn’t give it a name. I know Steve Crescenzo has one he named Ernie, after Ernest Hemingway, but I didn’t name mine. But I’ll go have conversations with it about the strategy that I am considering, and it works really well, and it works best when I treat it like a consultant, when I have that conversation. That’s what I coded it to be. Well, I didn’t code it, I gave it the instructions. And I think it’s this behavior, on top of the fact that you have Character.AI and you have… @nevillehobson (30:33) Right. Shel Holtz (30:40) Facebook and Meta introducing characters that you can engage with that are designed to be people. And you have the therapists now that are coming, AI therapists, and they’re all designed to behave and engage with you like people. And I don’t have a problem with that. This is a tool, and this is one of the things that it does well. But how do we keep front of mind among people that while you’re doing this, you need to remember that it is not a person and it is not conscious? I’ll just say that on our intranet, when I sign onto our network in the morning, I have to click OK on a legal disclaimer, every single time I turn my company laptop on. Shouldn’t we have something like that, perhaps a disclaimer before you start interacting with these, that this is a very lifelike, human-like experience you’re about to have? Keep in mind, it’s not. @nevillehobson (31:10) Well, that’s the whole point. No, absolutely. I do the same, Shel. I’ve talked about this a lot over the last couple of years, on this show and elsewhere. I treat my ChatGPT assistant like a person. I call it by a name; Jane is what I call the ChatGPT one. But I don’t see it as a real person at all. Far from it.
I’m astonished, frankly, that some people would think this is a person they’re talking to. Come on, for Christ’s sake, it’s an algorithm. Yet it enables me to have a better interaction with that AI assistant if I can talk to it the way I do, which is like I’m talking to you now, almost the same. But the bit that’s missing, and I think this is the heart of what Paul Tighe was talking about, quoting from Antiqua et Nova, and I think this is the core of the reflection on all this: we must not lose sight of the wisdom of the heart, which reminds us that each person is a unique being of infinite value and that the future must be shaped with and for people. And that has got to underpin everything that we do. And as I noted in the somewhat rambling post I wrote, which actually turned out better than the first draft, I must admit, it’s not a poetic flourish, it’s a framing. That’s the thing we’re missing. We mustn’t see AI as a neutral tool. It’s not, really, because we shape it, and we need to encourage critical reflection on that, on human dignity. Wisdom can’t be reduced to data. The Vatican says that ethical judgment comes from human experience, not machine reasoning. I totally agree with that. So, I mean, this is to me the start of this conversation, really. And the kind of thinking that is the counter to it, such as what you outlined, is very powerful and is embedded almost everywhere you look. So I looked at this myself and thought, okay, fine, I’m not going to evangelize this to anyone at all. I know what I’m going to do as far as I’m concerned. And that made me feel very comfortable about following the principles of this myself, which I have been doing for a while now. It is, in a sense, reflective: in the world of algorithms and automation, what does it mean to remain human?
So I’ve changed how I use the AIs, I must admit, and maybe ChatGPT-5 arrived at the time I started making that change. It is something I’ve started talking to people about: have you thought about this? How do you feel about that? And seeing what others think. And I’ve yet to encounter anyone who would say, this is amazing, what that’s saying makes total sense to me, let’s do this. No one I’ve talked to is saying that. So it’s something that I think the interview we did, others that Paul Tighe is doing, and what I’m seeing increasingly from other people starting to talk about it, is the framing of it within this context. That’s where I think we need to go. We need to bring this into organizations. So it’s an invitation to reflect, let’s say: yes, this is great, what’s going on, and you’re doing this, but you need to also pause and think about it from this perspective as well. That’s what I think. Shel Holtz (34:34) Mm-hmm. I would not disagree. And a lot of the development that’s happening in AI is focused on benefiting humanity. I’m looking at the scientific and medical research that it’s able to do. I mean, just AlphaFold, which won Demis Hassabis the Nobel Prize, is meant to benefit people. Where it’s probably benefiting people less is in business. Shel Holtz (35:06) Because, as you say, it needs to benefit people, but I think most business leaders think it needs to benefit profitability. And that could be at the expense of people. @nevillehobson (35:16) Well, it’s actually not about benefiting people in that sense. It’s about reintroducing, in a sense, conscience, care and context into thinking about what AI can do, alongside efficiency, scale and all those business benefits. Those aren’t people-oriented at all, no matter how they dress it up, saying, well, you employees are going to be more effective.
No, it means that our share price will go up, for a publicly listed company, we’ll get paid more money and all that kind of stuff. That’s what drives all of that, it seems to me. Shel Holtz (35:35) Mm-hmm. @nevillehobson (35:45) And I’m not saying it’s wrong, not by any means. In a capitalist economy, which we’re all in, it isn’t wrong. But it’s missing this part of the jigsaw puzzle. And it’s hard to quantify it. I had a conversation with someone who said, give me the ROI on this, and I thought, whoa, that’s the wrong way to think about this. But we have to. And I think this is really, I would say, an invitation to reflect on how you’re thinking about this, not necessarily to change it, but to reflect on it, to bring into it: what does it mean to remain human in this world of algorithms and automation, where things move so fast and the ROI acronym is right there in the middle of it? Shel Holtz (36:26) Yeah, it reminds me of the late, great Shel Israel asking, what’s the ROI of my pants? Remember that? Do we need ROI on everything? @nevillehobson (36:35) He would have loved the wisdom of the heart, I tell you, he would. Shel Holtz (36:39) Yeah, yeah, he was very skeptical of the need for ROI for everything. Hence, what’s the ROI of my pants? Of course, somebody came up with the ROI of pants; I remember that too. Insofar as determining what would happen if he went to work without wearing any versus the cost of pants for a year. Yeah. All right, well, let’s move away from @nevillehobson (36:50) There’s some ROI there, that’s a fact, yeah. Cool. Yep. Shel Holtz (37:03) AI and talk about more traditional public relations matters. The term rent-a-mob gets thrown around a lot in political discourse, usually as a way to delegitimize real opposition. But behind the rhetoric, there’s a very real, very troubling practice of paying people to pose as protesters to create the illusion of grassroots support.
And that practice is alive and well, and some firms, including companies that present themselves as PR or advocacy agencies, provide it openly. Crowds on Demand, for example, has made no secret that it will recruit and script protesters, calling the service advocacy campaigns or event management. I thought event management was like hiring the band and making sure the valet people showed up on time. If all this sounds like a modern twist on an old tactic, it is, for sure. From free whiskey in George Washington’s day to front groups created by Big Tobacco in the 90s, engineered public opinion has a long history. What’s new is the professionalization of the practice. Today, you can literally hire a firm to stage a rally, a counter-protest, or a city council hearing appearance. It’s a service for sale, and the bill goes to the client. Legally, this all sits in a very gray zone. U.S. law requires disclosure for campaign advertising and for paid lobbying, but there’s no equivalent requirement for paid protesters. If you buy a TV ad, you have to disclose who paid for it. If you hire lobbyists, they have to disclose who they’re working for. But if you pay 200 people to show up at City Hall and protest, there’s no federal law that requires anyone to disclose that fact. That’s the protest loophole. Ethically, though, there is no gray area whatsoever. PRSA’s code of ethics is clear: honesty, disclosure, and transparency are non-negotiable. The code explicitly calls out deceptive practices like undisclosed sponsorships and front groups. IABC’s code says much the same: accuracy, honesty, respect for audiences. Paying people to pretend to care about a cause or policy fails those tests. The fact that it’s not illegal doesn’t make it acceptable. It just makes it a bigger risk for the profession, because when the practice is exposed, as it inevitably is, the credibility of public relations is what takes the hit. And it does get exposed.
In one case, retirees were recruited to hold signs at a protest they didn’t understand. In another, college students were promised easy money to show up and chant at a rally. These are not grassroots activists. They’re actors in somebody else’s play. And when the story surfaces in the press, it’s not just the client who looks bad. It’s the agency, and then, by extension, the rest of the industry. So let’s be clear. Rent-a-mob tactics are not clever. They’re not innovative, and they’re not public relations. They are deception. They turn authentic public expression into a commodity, and they undermine democracy itself. If our job is to build trust between organizations and their publics, this is the opposite of that. Here’s the call to action. PR professionals must refuse this work. Agencies should set policies that forbid it and train staff on how to respond if they’re asked. Use the PRSA code of ethics as your shield and point to IABC standards as backup. And don’t just say no; educate your clients about why it’s wrong and how badly it can backfire, because agencies can get pulled into this even without realizing it. A subcontractor or consultant may arrange the crowds, but the agency’s name is still on the campaign. That’s why vigilance is critical. Build those guardrails now. At the end of the day, this comes down to the disconnect between what the law allows and what ethics demands. Just because a tactic falls into a regulatory loophole doesn’t mean we should touch it. The opposite is true. It means communicators must hold themselves to a higher standard, because public trust is already fragile. If we let paid actors masquerade as genuine voices, we’ll find we have no real voices left at the end of the day. @nevillehobson (41:20) So the word that comes readily to my mind, listening to what you’re saying and looking at some of the links, is astroturfing. Remember that? I mean, that was a big deal.
I remember you and I talking about that a lot in the first few years after we started this podcast in 2005. I remember a couple of campaigns being run by PR bloggers, blogs being the primary social network at the time, to address it. Shel Holtz (41:29) Sure. @nevillehobson (41:49) So nothing’s really changed. I mean, one of the links you included was from a woman called Mary Beth West, who wrote a post just a couple of days ago, where she’s actually… Shel Holtz (41:57) We interviewed Mary Beth West on the show, by the way. Yeah. @nevillehobson (42:00) We did? Okay. So she criticizes PRSA in the US, primarily, very strongly for remaining silent on the issue, and says they are therefore complicit. Quite a strong accusation. Shel Holtz (42:13) She is PRSA’s fiercest critic. @nevillehobson (42:17) Right, right, okay. But I just wonder why it is, from a communications perspective, whether it’s PR or another element of communication, that these sorts of issues pop up and repeat what was going on decades prior. AVE is a great one, advertising value equivalence: that was banned by professional bodies well over 15 years ago, maybe two decades, and yet people still use it. So what is it about this that we can’t seem to… it’s like whack-a-mole, something else pops up all the time. So this astroturfing version 6, let’s call it, because there have got to be at least five versions prior to this, how do we stop it? Shel Holtz (43:02) I don’t know, other than to demonize it within the industry and to call it out when we see it. The fact that it happens empowers people to accuse legitimate protesters of being rent-a-mobs. The protesters show up, they demonstrate, it gets news coverage, and the opposition says they were all just paid. They have no evidence to support that, but because people in their audience know that this actually does happen, they at least suspect that it might be true.
So it makes it really easy to dismiss the voice of one segment of society that has chosen to take to the streets, or to come to the city council meeting or whatever, in order to express themselves and be heard. And I think, as some of these reports say, that’s very, very dangerous for democracy. So there are a number of reasons that we need to call this out as inappropriate as a profession and to disassociate this practice from the practice of public relations. @nevillehobson (44:12) So who should take the lead on that? Shel Holtz (44:16) Well, PRSA, IABC, CPRS, CIPR, the Global Alliance, they all should; the professional bodies need to be pushing this hard, I think. @nevillehobson (44:23) The professional bodies. I agree. I agree. So a CTA for the professional bodies, I think: you need to pay attention to this. We’d love to hear from anyone on any of those bodies you mentioned. Offer a comment: what do they think about all this, and what should they be doing? Is it their call? How do we persuade members of those organizations to consider this and pay attention to this issue? A call to action, then. Shel Holtz (45:00) Thank you, Dan. Great report. I think the approach Mastodon is taking to quote posts is interesting. I’m not sure I am a big fan of the user control concept. It seems to me that that is a bit of censorship. If I say something in public, anybody is welcome and free, in a free society anyway, to riff on it, to disagree with it, to pull my quote and say, look what this idiot said, you know? And to put it in the hands of the person who created the quote to determine whether somebody can do that on a social platform, I’m not sure I’m a big fan of that. I’m going to need to give that one more thought and read more about Mastodon’s rationale. So I’ll be reading the links that you shared, Dan, but thank you, great report. @nevillehobson (45:54) So there’s a fascinating and pioneering move happening in Denmark right now.
The government there has proposed changing copyright law so that every citizen has the right to their own likeness: their body, their face, and their voice. In practice, this would mean that if someone creates a deepfake of you and posts it online without your consent, you could demand that the platform take it down. The idea is to use copyright as a new line of defense against the spread of deepfakes. Unlike existing laws that focus on specific harms, such as non-consensual pornography or fraud, Denmark’s approach is much broader. It treats the very act of copying a person’s features without permission as a violation of rights. Culture Minister Jakob Engel-Schmidt put it bluntly: human beings can be run through the digital copy machine and be misused for all sorts of purposes, and we are not willing to accept that. The law, which has broad political support and is widely expected to pass, would cover realistic digital imitations, including performances, and allow for compensation if someone’s likeness is misused. Importantly, it carves out protections for satire and parody. So it’s not about shutting down free expression, but about addressing digital forgery head on. Supporters see this as a proactive step, a way of getting ahead of technology that’s advancing far faster than existing rules. But here’s the catch. Copyright law is national law. Denmark can only enforce this within its own borders. Malicious actors creating deepfakes may be operating anywhere in the world, well outside the reach of Danish courts. Enforcement will depend heavily on cooperation from platforms like TikTok, Instagram or YouTube. And if they don’t comply, Denmark says it will seek severe fines or raise the matter at the EU level. That’s why some observers compare this to GDPR, the General Data Protection Regulation: a landmark idea that set the tone for digital rights, but one that has struggled in practice with uneven enforcement and global scope.
Denmark is small, but with the six-month presidency of the European Union that it assumed on the 1st of July, it hopes to push the conversation across Europe. Still, the reality is that this measure will start as Danish law only, and its effectiveness will hinge on whether others adopt similar approaches. So we’re looking at a bold test case here. Can copyright law, with all its jurisdictional limits, really become the tool that protects people from the misuse of their identities in the age of AI? Shel Holtz (48:24) Maybe. It kind of worked with Creative Commons, didn’t it? The whole idea there was that the Creative Commons license had to be defensible in any country, so they worked to make sure that it would qualify under every country’s law. And the first test, as I recall, actually involved Adam Curry. Something he created was used by an advertiser in a bus stop poster in the Netherlands. That could be. And he took it to court and won on the Creative Commons license. So maybe @nevillehobson (48:48) He was. I think it was a photo of his daughter or one of his children. Yeah. Shel Holtz (49:09) a broader approach like that, as opposed to country by country, would be the way to use copyright to deal with this. Otherwise, you’re looking at every country implementing their own laws, and many won’t. @nevillehobson (49:17) Yeah. The trouble with Creative Commons is the license. A, it’s voluntary, apart from anything else. B, it still requires the national legal structure in a particular country to hear a case that’s presented to it. So that’s no different than if it were national law. And in Curry’s case, he didn’t get any money out of it. He got a victory, almost a Pyrrhic victory, but didn’t get any compensation. And examples of success with Creative Commons are few and far between. I think part of the problem, actually, is that it’s still relatively rare.
You’ll hardly find anyone who knows what Creative Commons is. I mean, we’ve had little badges on our blogs and websites for 20-plus years, and, you know, I don’t see it on businesses, on media sites, nothing. I don’t see it anywhere other than among people who were involved in all this right at the start. So it’s a challenge to do this. And I think the key is, would it get adopted by others? It’s going to require a huge lift to make that happen. And maybe the example of Denmark might be good if they were able to show some successes in short order addressing this specific issue about deepfakes in particular. So it’s a great initiative, and I really hope it does go well. It’s not law yet, but from what I’ve been reading, the expectation is extremely high that it will become law. And if they’re leading the EU in these next six months, the rest of the year, then they’ve got a good opportunity to make the case within the EU for others to do this. So it wouldn’t surprise me if one or two more countries might adopt this as a trial. Then you’d have, say, three doing it. Will it make any difference? Let’s see. Don’t write it off at all. GDPR has been held up as the exemplar of state regulation on data protection. It’s had uneven enforcement and limited global scope, I agree. And the penalties, no one’s collecting the money; it’s a huge effort to do that. But it’s still in place, and it does have an effect on other countries. The US in particular has all sorts of guidance along the lines of, if you’re doing business in the EU, you need to pay attention to this and do all that kind of thing. You don’t have the freedom to do things as you did before. So it’s generally seen, I believe, as a good thing that it happened. But, you know, we’re at that stage where technology is enabling people to do not-so-good things, like deepfakes.
And so there is no real protection against that, it seems to me. I think the real trick will be compliance by the social media platforms. If they are found culpable of hosting an image or a video or whatever, and not taking it down when they’re notified, they’ll get severe fines. I’m not sure what that means, but we need to see an example being made of someone. I haven’t seen that yet anywhere. Shel Holtz (52:24) No, we haven’t. And this is all part of the broader topic of disinformation. We just had Hurricane Erin, and there were actually warnings on some of the news sites I saw that there were deepfakes of the storm that were leading people to make bad decisions about what to do for their safety. So @nevillehobson (52:24) It’s a struggle. Here too. Shel Holtz (52:48) you know, this is happening faster than organizations, media outlets and political organizations and the like are able to figure out how to deal with it. And it can have fatal consequences down the road. There was, I guess I heard there was one image, I didn’t see it, but somebody told me about it, of a massive wave breaking over a road with cars trying to get out of town and a whale coming out of that wave. That’s sort of what gives it away at the end, but… @nevillehobson (53:00) We can, we can. Yeah. @nevillehobson (53:16) Yeah, you’ve got to be vigilant yourself. And in light of the realities of this, you have to be vigilant, though it’s easy to say that. What does it actually mean? How can you really be vigilant? A good example. I’m sure you’ve seen this, Shel: the meeting on Monday last week between Trump and the leaders of the EU and Zelensky from Ukraine. An image was posted in many places online, US media in particular and social networks, X notably, showing what looked like a photo of all the European leaders sitting in chairs in a kind of hallway outside Trump’s office, waiting to be called in to see him. And the stories I read about this were about how they were treated.
Yet you don’t even need to look too closely at the image. There are giveaways, like the second person along having three legs. And the linoleum pattern on the floor. Shel Holtz (54:08) No, no, no, he really does, you know. @nevillehobson (54:09) Yeah, the pattern on the carpet, or the linoleum on the floor, got a bit blurred the further it got from the foreground, and the lines weren’t right. That surely would give you pause, but no, people were sharing this all over the place. It shows you people don’t really pay attention too closely. They look at the hit factor for them: I’ve shared something cool and 50,000 people go and view it. That’s a cultural thing that isn’t going to change anytime soon, unless changes happen to how we do all these things. So this is just another element in this hugely disruptive environment we’re all living in, with technology enabling us to do all these things that are nice until the bad guys start doing them. And that’s just human nature. Sorry, that’s how it is. So before you click and share this thing, and this is logic talking to reasonable people now, just be sure that you’re sharing something real. I shared something recently, I forget what it was now, but I deleted the post about 10 minutes after I sent it on Bluesky. And I then wrote another post saying I had done that, because I was taken in by something I saw, and I should have known better, because I normally don’t do this. I don’t even know why I did it. I was having my morning coffee and wasn’t paying attention too closely. So that’s the kind of thing that can trip people up. This is what’s going on out there. So I think this thing that Denmark’s trying to do is brave and very bold, and I hope they get success with it. Shel Holtz (55:38) Or at least it leads to other ideas that work. @nevillehobson (55:41) Yeah, exactly. Shel Holtz (55:44) Well, the pendulum always swings.
One of the hardest questions communicators are facing today is whether their company or client should speak out on a contentious issue or stay silent. Silence was the default once upon a time, but research shows that in many cases, silence carries its own risk. Wharton research, published just this month, found that silence backfires most when people expect a company to speak and believe it has a responsibility to do so, which, let’s face it, is why we were advocating for companies to take positions on certain issues under certain circumstances for many years, supported by research like the Edelman Trust Barometer. A separate Temple University study of the Blackout Tuesday movement showed that companies that stayed quiet faced real backlash on social media, but the consequences aren’t uniform. Sometimes silence has little visible effect, at least in the short term. Take Home Depot. Just last week, immigration raids targeting day laborers took place outside stores in California. Reporters reached out to Home Depot for comment. Home Depot chose not to respond. So far, investors don’t seem to care, and the stock hasn’t suffered. But employees, activists, and customers who see this issue as central to the company’s identity, well, they may feel differently. Silence can create space for others to define your values for you. This tension between internal and external audiences is critical. Employees are often the first to expect their employer to speak out, especially on issues that touch human rights, diversity, or workplace fairness. Silence can erode engagement and retention. Externally, it’s more complicated. Some customers or policymakers may punish a company for taking a stand; others may punish it for not taking one. And I’m thinking now of Coors Light, with the one can that they made for the trans activist, and that created polarization. 
People who said, we’re going to go out and buy Coors no matter how bad it is, just to offset the people on the right who are boycotting it. In Europe, where stakeholder governance is stronger, there’s often a higher expectation that companies will weigh in. In the U.S., polarization makes every move fraught. Either way, communicators can’t afford to pretend that silence is neutral. It’s a choice, and it has consequences. So the question is, how do you decide? Well, here’s a simple decision framework. Start with expectations. Do you have stakeholders who believe your company should have a voice here? Next, consider the business nexus. Does the issue intersect directly with your operations, employees, or customers? Timing is important. Is there an urgent moment where absence will be noticed, or is this more of a slow burn? Authenticity matters. Do you have a track record that supports what you’d say, or would a statement ring hollow? Then look at consistency. Have you spoken on similar issues before? If you break the pattern, can you explain why? People notice. And finally, consider risk tolerance. How much reputational risk can the organization realistically absorb? Sometimes after applying this framework, silence might still make sense, but there’s a way to be silent well. It starts with transparency inside the organization. Explain to employees why the company isn’t taking a public stance. Reinforce the company’s values in operational ways through hiring practices, supplier standards, and community investments. Brief key stakeholders privately so they’re not blindsided, and set monitoring targets so you can pivot if the situation escalates. For communicators, here’s a quick checklist to keep handy. Map stakeholder expectations, test the business nexus, pressure test your authenticity and consistency, advise on operational actions that back up values, and plan both the statement and the silence. 
Corporate silence doesn’t have to mean cowardice, and speaking out isn’t always virtuous, but both are strategic choices and both can have lasting impact on trust. Communicators are the ones who can help leaders cut through the noise, weigh the risks and make sure that whichever choice they make, voice or silence, it’s intentional, transparent and aligned with the values the company claims to hold. @nevillehobson (1:00:22) Yeah, it’s a complicated story, I think, Shel. I think actually, paying attention to one of the links you put in our Slack channel about Home Depot, it is quite staggering what I’m reading in this report from NPR talking about this. It goes into some detail in describing the customer base. They talk about day laborers. Now, I don’t know what that term means. We don’t have that term. Does that mean you’ve hired someone Shel Holtz (1:00:46) well, let me explain. @nevillehobson (1:00:48) Okay. Shel Holtz (1:00:48) Yeah, they hang out in the parking lot by the driveway, and you have a home project and you need some help. So as you’re pulling out of the parking lot with all of the stuff that you’ve bought, you’ll say, you three, and they’ll hop in the car and come home with you and do the work that you direct them to do, and you pay them cash. And that’s how they make a living. That’s day labor. @nevillehobson (1:01:11) So cash-based, you have no idea who they are, you let them into your home. A bit risky, I would say. I mean, we have a system here, but I don’t think they’re called day laborers; handymen or something like that, DIY help, whatever. Websites, companies set up these things where you post your needs and someone says, yeah, I can do that for you, and they give you a quote. And I’ve used someone like that in the past. And indeed, I had that. Shel Holtz (1:01:39) Well, I’ve used TaskRabbit. @nevillehobson (1:01:41) I had that person. Yeah, I know that doesn’t work here at all. TaskRabbit. 
It exists, but I think in one city only. And there are other equivalents to TaskRabbit, but some of these local websites are really good. But I used someone not long ago where I hired someone to do something, and I had them go to the DIY store to pick up the stuff and buy the stuff and all that kind of bit. So I kind of get that. So the NPR piece says day labor sites have sprung up as Home Depot grew, and they became a big part of its customer base, if you like, as a direct result. But this struck me; this piece kind of leapt out at me. Talking about this on Reddit, according to NPR, Home Depot workers have begun trading tales of raid impacts. Some claim fewer contractors are visiting and stores are struggling to meet sales goals. Others say it’s business as usual and sales are booming. So it’s a mixed bag. But that’s going on. That to me would be a huge alarm bell for the company if they kind of button their lip in public and internally, and your employees are doing this. So that signifies quite a few things. They quote one example, again, another alarm bell, in Los Angeles this time, after a raid by the immigration police. This person talked about the parking lot. The parking lot was always full, she said. Right now, though, there’s so many spaces, there’s hardly anyone here. And this woman runs a housekeeping business and usually sends her employees to stock up on cleaning supplies or liquids for a pressure washer. But today, for the first time in a while, she herself was out at the Home Depot. Why? Because they’re afraid to come, she said, they’re afraid to be here. That’s not good at all, that kind of environment. So if I were Home Depot, I mean, I wonder, tell me what you think, Shel. They should be paying attention to that, I would have thought. Shel Holtz (1:03:38) Well, they should. Their stakeholders have a clear interest here. Their customers are the ones who hire these people as they’re pulling out. 
It’s become for many folks a service, even though Home Depot doesn’t specifically provide the service. They’ve done nothing to keep it from growing into something that you expect to see at a Home Depot parking lot. It’s part of the ethos now, and their employees care about it. So it gets back to that little framework for deciding whether you’re going to say something about it. Is there an expectation, and does it intersect with your business? And in this case, the answer to both those questions is pretty clearly yes. Now, what they would say, I don’t know. Home Depot’s founder was famously very, very right wing on the political spectrum. @nevillehobson (1:04:11) Yeah, it does. Shel Holtz (1:04:31) And it is my understanding that that political preference continues to be part of the DNA of the leadership of the organization. So they may be fully supportive of immigration raids, but coming right out and saying, yeah, we’re glad to see these people get swept up, might not sit well with customers who have come to rely on them. So. @nevillehobson (1:04:50) Ha Shel Holtz (1:04:54) You know, these are things that make me very happy that I’m not doing public relations for Home Depot, but saying nothing, just sitting back and saying nothing, seems to me to be a bad choice. @nevillehobson (1:05:04) Yeah. And then look at the kind of business imperatives on this. NPR says in its report, investors so far have shrugged off the immigration spotlight on the company. Home Depot’s stock price is at its highest since February. So there’s no pressure from that point of view. Shel Holtz (1:05:22) Right, well, the article pointed out that this is a short-term response to the silence, not a long-term response. We’ll see. I mean, if the people who are posting to Reddit who work in the stores are right that contractors aren’t going there and that the parking lots are not as full as they used to be, you could have longer-term problems arising from this. Yeah. 
@nevillehobson (1:05:43) Well, you’ve got some alarm bells ringing there, I would say, with that going on. But this just illustrates to me the huge complication of say something or not. If you do, what do you say? If you don’t, what don’t you say? I mean, in a sense, you can’t not say anything, although I guess that’s what they’re doing. That doesn’t seem very healthy for relationships internally, because this kind of thing, from what I observe across the Atlantic here in the UK, seems to be getting worse in America with these immigration raids, the uncertainty, the cruelty, the awfulness of it all, and that doesn’t look like it’s going to diminish any time soon. And if anything, it’s going to get even worse, from what I’ve been reading. Shel Holtz (1:06:26) It is. They have set quotas for the number of seizures, and they’re going after anybody for anything. This notion that it’s the worst of the worst, the murderers and the rapists and the like, is ridiculous. In fact, my friend Sharon McIntosh just shared a photo of somebody being grabbed up by ICE who is a janitor in her church. She says he’s the neighbor that everybody counts on to come over and help them fix things. He’s a great father and husband. He’s a great member of the community. He has a side hustle business. He is what you want in a member of your community. And yet he was grabbed up by ICE. So yeah, there’s a reason that, you know, downtown LA is dead. People are afraid to be out. That’s affecting the people who sell them stuff when they come out to do shopping and live their lives. So this is going to have long-term fallout for sure. And I think that the organizations that are at the heart of this, the fact that they’re saying nothing, I think, leads people to see them maybe as cowardly or maybe as complicit. You have to think about the consequences of silence. And that’s what the article I drew this report from, the one from the Wharton School, makes clear. 
I quote more from the Wharton School these days. It’s really become a source that it wasn’t when we started this show. But in any case, use a framework. Don’t just say, should we say something about this or not? Use a framework to reach a good, logical decision. @nevillehobson (1:07:49) Yeah, and I’m thinking as well, OK, what’s going to happen? Let’s just use Home Depot as an example here. When, not if, when, someone either in the media, in the old media, let’s say, or someone in the new media landscape publicly asks a question of them. What are they going to do about X? You know, what about that guy who was beaten up in the parking lot of your store in LA? What are you going to do about that? What are they going to say? So I’m wondering, and this is kind of straying into an area of, like, pre-crisis communication planning perhaps, but have they got a what-if scenario plan? I wonder. Shel Holtz (1:08:46) Home Depot is saying nothing because the media have been calling and they have not been responding, and you have to understand two things about the media in a situation like this. One is, when you don’t say anything, they’re not going to shrug and go, okay, then we won’t report anything. So you’re going to hear in the media that you did not respond to requests for information. The second thing to be aware of is the fact that @nevillehobson (1:09:03) Right. Yeah, but my point… Shel Holtz (1:09:13) the media will go after secondary and tertiary sources of information in the absence of your comment, and that may not be what you want heard. @nevillehobson (1:09:19) But my point is that none of that is happening. So when it does happen, are they ready? None of that’s happening. I mean, you say they’re pursuing them, but no one’s talking about that at all. I don’t see any reporting about that. 
What I would see reporting about is someone with a lot of influence online, as perceived by whoever, frankly, asks a question, and that gets amplified widely and gets picked up everywhere: this happened in the parking lot at Home Depot, and this is what this person said. And they embed the video, you know, that he recorded of what he did. And they’re silent. That’s what I’m talking about. Shel Holtz (1:10:00) And that’s what’s happening. Yeah, I mean, there are all kinds of videos from people in parking lots and people where these raids are happening. And I’ve heard no comment from the institutions involved, Home Depot being at the top of the list. @nevillehobson (1:10:22) In that case, we’re waiting for that hugely influential person to be in the spotlight. It hasn’t happened yet. It’s a when, not an if, I feel pretty sure. Exactly. All right. Okay, so this, I think, is our final topic in today’s episode. So this one I thought worth exploring. Shel Holtz (1:10:28) Yeah, whoever that may be. Mr. Beast, that’s who needs to do it. It is. @nevillehobson (1:10:46) I saw quite a lively discussion in the marketing and PR community on Facebook. It highlights an issue that many communicators may think they understand, but often don’t fully appreciate when it comes to using images online. So the post that kicked it off, and by the way, I’m anonymizing it because it’s a private group on Facebook, and unless you’re a member of Facebook and a member of this group, you can’t see the content, so I’m not going to mention anyone’s names, but the story is quite interesting. The post that kicked it off came from someone who had just received an email from PA Images, one of the photo agencies here in the UK, demanding £700 for using one of their photos. The image had appeared in a blog post more than two years ago, and the author noted that the photographer had been credited. 
They thought this counted as fair use and were shocked to discover it didn’t. They’d since removed the image and asked the group whether this was simply an expensive lesson, or if there was room to negotiate. Well, that prompted a flurry of comments. One person pointed out that the cost of the fine could have paid for multiple original photos, properly licensed for unlimited use. It was a reminder that investing in photography upfront can save headaches later. Another comment stressed that fair use is an American legal concept. In the UK, what we have is fair dealing. And crucially, it doesn’t apply to photographs in this way. Using a photo without explicit permission or a license is infringement. At best you might negotiate the charge down to what the license fee would have been. Others shared their own experiences. One person described how AFP, that’s a French news agency, fined their organization £270 for an old image that had been carried over from a previous website. They’d apologized, paid up, and then run copyright training for their team to avoid repeat mistakes. Another said they’d removed an image straight away, but the agency still produced a screenshot of the original post and pursued them for payment anyway. The practical advice that emerged was fairly consistent. If you don’t have written permission or a license, you are liable. Remove the image immediately, apologize, and then try to negotiate. Some suggested starting with a quarter of the asking fee. Keep detailed records of where every image comes from and the terms of its license. There was also a broader ethical undercurrent. Some respondents had little sympathy, saying that too many people still think photos are fair game online when they aren’t. One even noted that their partner, a photographer, often earns more from infringement settlements than from people licensing his images in the first place. So the original poster clarified that their agency normally does hire photographers and pays them fairly. 
This was an old blog post that predated the agency, and they genuinely wanted advice rather than sympathy. Still, they accepted that it was a mistake that would cost them money. So the takeaway here is clear. Crediting a photographer is not the same as having permission. Unless you have a license or explicit written agreement, you’re exposed to claims. And with agencies increasingly using bots and reverse image search to enforce copyright, the risk of being caught is only growing. For communicators, it’s a sharp reminder that visuals are not free to use simply because they’re online, and that professional practice means treating images with the same respect as written content. Shel Holtz (1:14:03) Just one more reason to be using AI to generate your images. @nevillehobson (1:14:08) Yeah, but I think that’s what it all comes down to, doesn’t it? It’s so easy, it’s there, people have been doing it for a long time, and suddenly this is coming out. This will come back to bite you in the bum, I tell you. Shel Holtz (1:14:14) I’m guilty. I’m guilty. When I was an independent consultant, which I haven’t been for nearly eight years now, good God, time really does fly, I used to do an email newsletter. It was a wrap of the week’s news in communication and technology. You know, as we continue to do for this show and for blog posts. @nevillehobson (1:14:39) I used to subscribe to your newsletter. I remember it. I remember it. Shel Holtz (1:14:42) Yeah, it was called the Friday Wrap. And at the top of the Friday Wrap, I always had an image of something that was wrapped in one way or another, just to play off of that whole wrap concept. And I was always out there searching images for something that was wrapped. It was trees that were wrapped and buildings that were wrapped and vehicles that were wrapped. And I just grabbed them and put them at the top of my newsletter without giving any thought to where that came from. 
If I were doing that today, I would definitely have AI produce an image of something wrapped, because I certainly didn’t have a budget to pay for photography. The only reason I was able to do that was because they were online, but it was not right. Even though most of those images were, I seem to recall, from the Creative Commons library of images, which is still available. But… @nevillehobson (1:15:31) Yeah, I was similar to you, Shel. I was doing exactly the same thing, but I always used to credit the source, thinking that was fine. I didn’t feel guilty. And even if a website where I saw a great picture had copyright, blah, blah, all rights reserved, I’d say, well, give them a credit and a link to their website, I’ll be OK. I fell foul of that only once, back in the early days, about 2007. Reuters Shel Holtz (1:15:39) That’s adequate. @nevillehobson (1:15:57) wanted me to remove a picture I used from one of their agency reports in a blog post that I’d written. I got into a conversation with them, and eventually they agreed, okay, fine, but don’t do it again. And I thought, okay, that’s interesting. They were really early in on that. We know it’s not right. And indeed, the point here of that first piece of practical advice: if you don’t have explicit permission or a license, you are liable. There are no ifs or buts, no gray areas; it’s black or white. So I tend not to use a lot of AI-generated images. I subscribe to Adobe Stock, which is a good library. I use Unsplash. I pay for the premium version that gives you pictures that aren’t free to take. I tend to go big on metaphor-type pictures, and there’s loads of stuff for that. But I’m always also looking for someone who says, you know, Creative Commons, for instance, and that’s great. Flickr is still a good source. But if you’re in a large enterprise and you’ve got, you know, multiple things going on, that’s not really a practical approach. 
And you’ve probably got a license with Shutterstock or one of the big image licensing firms. So that’s right. But this is, I think, for small to medium-sized businesses, individual consultants and so forth, an easy trap to fall into. And again, it’s just remembering that key advice point: if you don’t have permission or a license, you’re liable. So don’t do it. Shel Holtz (1:17:19) Yeah, I used to have a subscription to Dreamstime, a stock photo service, and people will say, I can tell if something was produced by AI. I can tell if something came from a stock photo service. Those stock photos do tend to have that stock photo stamp on them, you know, that look. Come on, the computer key that has the word that is… @nevillehobson (1:17:32) Yeah, well, it depends. I find it depends on the image. You wouldn’t use stuff like that. I certainly don’t. You see them, the kind of happy smiling group in a business meeting. People are not like that at the workplace. Yeah, yeah. So don’t use those. No, don’t use those. Shel Holtz (1:17:48) or the hand writing on the board or the hands raised. Yeah, no, I use AI to create images. I mean, we’ve changed our intranet platform, but the one we changed from required an image for every article, a hero image. And if it was about a project, that was easy. We had photos from projects. But if it was about a more abstract concept, you know, either it was a stock photo service or we were stuck. But now I can come up with an image that just perfectly conveys the idea that we’re trying to get across. I remember I did one, an article on what goes into developing a proposal for a commercial construction project. We’re talking about, you know, two, three, four hundred million dollars of project cost. And these are big deals, and the proposals take weeks, months to put together. 
And I think there’s a lack of appreciation for what the business development team goes through when they’re putting these together. And for the hero image, I had a group of people who are clearly office workers, not out in the field building. And they’ve got their laptops and their tablets and their phones out. But in the middle of the table, rising out of the table, is the building that they’re pitching. It was an ideal image. Another one that I used this for is our April Fool’s article, which was always about something completely ridiculous. I did an April Fool’s article one year about a new sustainable building material: chewed bubble gum that has been scraped off the undersides of desks and railings. Not only is it sustainable, but it’s minty fresh, that type of thing. Shel Holtz (1:19:30) And I was able to have a building that was built out of bubble gum that looked real to accompany that article. So I get a lot of use out of generative AI that I couldn’t get otherwise. I mean, if I had a budget for artists and if I had a budget for photographers, I’d be using that. I’d hire photographers. I’d hire graphic designers. I don’t have the budget. So this works. This is a great alternative to… @nevillehobson (1:19:36) That’s cool. Yeah. Yep. In which case, the takeaway from all this is: if you don’t have written permission or a license, you are liable. So get permission or a license, or use a generative AI approach. Pretty clear. Shel Holtz (1:19:54) using what’s not yours. Works for me. And that will bring us to the end of this episode of For Immediate Release, episode 478, the long-form episode for August 2025. Our next long-form episode will drop on Monday, September 29th. We’ll be recording that on Saturday, the 27th. And until then, Neville, have a great couple of weeks until we get together for our next short midweek episode. @nevillehobson (1:20:37) Absolutely.   
The post FIR #478: When Silence Isn’t Golden appeared first on FIR Podcast Network.
Aug 21, 2025 • 60min

Circle of Fellows #119: Can Sustainability Be Sustained?

With numerous competing business priorities demanding attention, and government policy decisions often pushing sustainability to the back of the agenda, organizational communication professionals play a critical role in keeping sustainability front and center. We are uniquely positioned to connect sustainability to the organization’s purpose, values, and long-term success, ensuring it is viewed not as an isolated initiative but as an integral part of the company’s identity. Through strategic messaging, storytelling, and the consistent integration of sustainability into internal and external narratives, communicators can make environmental goals relevant to every employee’s role and decision-making process. We can spotlight progress, celebrate champions, and translate complex sustainability metrics into compelling, human-centered stories that inspire action. By making sustainability a visible, shared priority across all levels of the organization, communication professionals can help ensure it not only survives but also thrives, despite shifting external pressures. Four Fellows of the International Association of Business Communicators (IABC) gathered on August 21, 2025, to share their expertise and experience communicating sustainability during these fraught times. The international panel featured Zora Artis, Brent Carey, Bonnie Caver, and Martha Muzychka, with Shel Holtz serving as moderator. About the panel: Although Zora Artis began her career outside the communications field, she has had an outsize impact on the profession since entering it more than 20 years ago as an account director and then strategic planner with branding and integrated marcomms agencies. Since then, she has led her own brand and communications consultancy and served as CEO of a 20-person creative, digital, and strategic communication firm. In 2019, she formed her current management consulting practice, bringing together strategic alignment, brand, and communication expertise. 
She has received five Gold Quill awards. Her significant contributions to the profession and the body of knowledge include her original research with IABC colleague Wayne Aspland on strategic alignment, the role of communications and leadership – the first substantial research effort for the reconfigured IABC Foundation – and co-authoring a subsequent white paper, “The Road to Alignment,” supported by 27 senior communicators from five continents. Zora has also researched the correlation between strategic alignment and experiences and the impact on stakeholder value and brand. This has led her to develop her own proprietary Alignment Experience Framework. She has also examined gender equity, perceptions, and bias in organizations, and wrote a chapter on this topic for the Quadriga University e-reader, Women in PR. Since joining IABC a decade ago, she has made an impact on IABC as a volunteer, including roles as chair of the IABC Asia Pacific Region and IEB director; she served as the chair of the 2022 World Conference Program Advisory Committee. A certified company director, as chair of the IABC Audit and Risk Committee, she introduced proper risk oversight to the board’s processes. Zora has been honored with the 2021 and the 2015 IABC Chair’s Award for Leadership and was named IABC’s 2020 Regional Leader of the Year. She is also a Strategic Communication Management Professional, Fellow of the Australian Marketing Institute, and Certified Practicing Marketer. Bonnie Caver, SCMP, is the Founder and CEO of Reputation Lighthouse, a global change management and reputation consultancy with offices in Denver, Colorado, and Austin, Texas. The firm, which is 20 years old, focuses on leading companies to create, accelerate, and protect their corporate value. She has achieved the highest professional certification for a communication professional, the Strategic Communication Management Professional (SCMP), a distinction at the ANSI/ISO level. 
She is also a certified strategic change management professional (Kellogg School of Management) and a certified crisis manager (Institute of Crisis Management). She holds an advanced certification for reputation through the Reputation Institute (now the RepTrak Company). She is a past chair of the global executive board for the International Association of Business Communicators (IABC). She currently serves on the board of directors for the Global Alliance for Public Relations and Communication Management, where she leads the North American Regional Council and is the New Technology Responsibility/AI Director. Caver is the Vice Chair for the Global Communication Certification Council (GCCC) and leads the IABC Change Management Special Interest Group, which has more than 1,300 members. In addition, she is heavily involved in the global conversation around ethical and responsible AI implementation and led the Global Alliance’s efforts in creating Ethical and Responsible AI Guidelines for the global profession. Brent Carey is an award-winning communications executive and corporate storyteller who has been helping organizations connect with their stakeholders and achieve successful business outcomes for more than 30 years. During his career in corporate communications, he has practiced the complete range of the profession’s disciplines, including internal/HR communications and employee engagement, recruitment marketing, issues management and crisis communications, public and media relations, marketing communications and government relations. Brent is currently Vice President, Communications, at Mattamy Asset Management (the parent company of Mattamy Homes), based in Toronto, where he leads the corporate communications function and a small, impactful team that provides strategic planning and execution across Mattamy’s operations in Canada and the US. Brent has also held communication leadership roles with KPMG International, Deloitte Canada, CIBC, TD Bank and Imperial Oil. 
In 2004 he earned the Accredited Business Communicator (ABC) designation from IABC, and in 2024 he was recognized with the prestigious IABC Canada Master Communicator Award, an accolade bestowed upon select professionals who have demonstrated exemplary contributions to the field of communication. Brent graduated from York University in Toronto with a double honours degree in Communications and English. Martha Muzychka, ABC, MC, speaks, writes, listens, and helps others do the same to make change happen. Martha is a strategic, creative problem solver seeking challenging communications environments where she can make a difference. She helps her clients navigate competing priorities and embrace communication challenges. Martha offers strategic planning, facilitation, consultation services, writing and editing, qualitative research, and policy analysis. Her work has been recognized locally, nationally, and internationally with multiple awards. The post Circle of Fellows #119: Can Sustainability Be Sustained? appeared first on FIR Podcast Network.
Aug 18, 2025 • 21min

FIR #477: Deslopifying Wikipedia

User-generated content is at a turning point. With generative AI models cranking out tons of slop, content repositories are being polluted with low-quality, often useless material. No website is more vulnerable than Wikipedia, the open-source reference site populated entirely with articles created (and revised) by users. How Wikipedia is handling the issue — in light of its strict governance policies — is worth watching, especially for organizations that also rely on user-generated content. Links from this episode: Wikipedia Editors Adopt ‘Speedy Deletion’ Policy for AI Slop Articles How Wikipedia is fighting AI slop content From the technology community on Reddit: Volunteers fight to keep ‘AI slop’ off Wikipedia Wikipedia:WikiProject AI Cleanup Wikipedia loses challenge against Online Safety Act verification rules Wikipedia can challenge Online Safety Act if strictest rules apply to it, says judge The next monthly, long-form episode of FIR will drop on Monday, August 25. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript: Shel Holtz (00:00) Hi everybody, and welcome to episode number 477 of For Immediate Release. I’m Shel Holtz. @nevillehobson (00:08) And I’m Neville Hobson. Wikipedia has long been held up as one of the internet success stories, a vast collaborative knowledge project that has largely resisted the decline and disorder we’ve seen on so many other platforms. 
But it’s now facing a new kind of threat: the flood of AI-generated content. Editors have a name for it, not just editors by the way, we do as well. It’s called AI slop. And it’s becoming harder to manage as large language models make it easy to churn out articles that look convincing on the surface, but are riddled with fabricated citations, clumsy phrasing, or even remnants of chatbot prompts like “as a large language model.” Until now, the process of removing bad articles from Wikipedia has relied on long discussions within the volunteer editor community to build consensus, sometimes lasting weeks or more. That pace is no match for the volume of junk AI can generate. So Wikipedia has now introduced a new defense, a speedy deletion policy that lets administrators immediately remove articles if they clearly bear the hallmarks of AI generation and contain bogus references. It’s a pragmatic fix, they say, not perfect, but enough to stem the tide and signal that unreviewed AI content has no place in an encyclopedia built on human verification and trust. This development is more than just an internal housekeeping matter. It highlights the broader challenge of how open platforms can adapt to the scale and speed of generative AI without losing their integrity. And it comes at a moment when Wikipedia is under pressure from another front: regulation. Just this month, it lost a legal challenge to the UK’s Online Safety Act, a ruling that raises concerns about whether its volunteer editors could be exposed to identity checks or new liabilities. The court left some doors open for future challenges, but the signal is clear: the rights and responsibilities of platforms like Wikipedia are being redrawn in real time. Put together, these two stories, the fight against AI slop and the battle with regulators, show us that even the most resilient online communities are entering a period of profound change.
And that makes Wikipedia a fascinating case study for what lies ahead for all digital knowledge platforms. For communicators, these developments at Wikipedia matter deeply. They touch on questions of credibility, how we can trust the information we rely on and share, and on the growing role of regulation in shaping how online platforms operate. And there are other implications too, from reputation risks when organizations are misrepresented, to the lessons in governance that communicators can draw from how Wikipedia responds. So, Shel, there’s a lot here for communicators to grapple with. What do you see as the most pressing for communicators right now? Shel Holtz (02:52) Well, I think the most pressing is being able to trust that the content you see is accurate and authentic and able to be used in whatever project you’re using it for. And Wikipedia, we know based on how it’s configured, has always been a good source for accurate information because it is community edited; errors are usually caught. We have talked about in past episodes the fact that more obscure articles can have inaccuracies that will sit for a long time because nobody reads them, especially when they’re not read by people who would have the right information and correct them. But by and large, it is a self-correcting mechanism based on community, which is great. It does seem that the shoe is on the other foot here, because when Wikipedia first launched, I’m sure you’ll recall that schools and businesses banned it. You can’t use this, you can’t trust it. It’s written by regular people and not professional encyclopedia authors. Therefore, you’re going to be marked down if you use Wikipedia; it’s banned. And they fought that fight for a long time and finally became a recognized authoritative site. And here they are now banning something new that we’re still trying to grapple with. We do need to grapple with it. The AI slop issue is certainly an issue.
I worry that they’re going to pick up false positives here. Some of the hallmarks of AI writing are also hallmarks of human writing. I mean, if I hear one more person say an em dash is absolutely a sign that it was written by AI, I’m gonna throw my computer out the window. I’ve been using dashes my entire career. I was using dashes back when I was doing part-time typesetting to make extra money when I was in college. And dashes are routine. There is nothing about them that makes them a hallmark of AI. That is ridiculous. But we are going to see some legit articles tossed out with the slop. The other thing is some of the slop may have promise. It may be the kernel of a good article, and this is a community platform, and wouldn’t people be able to go in and say, wow, this is really badly written, and yeah, AI may have done this, but there’s not an article on this topic yet, and I have expertise, so I’m gonna go in and start to clean this up. It’s a conundrum; what are you gonna do at this point? We haven’t had the time to develop the kinds of solutions to this issue that might take root. And yet the volume of AI slop is huge. The number of people producing it is equally large. And you have to do something. So I think it’s trial and error at this point to see what works. And there will be some negative fallout from some of these actions. But you’ve got to try stuff, take it to the next level, and try the next thing. @nevillehobson (05:52) Yeah, I think there’s a really big issue generally that this highlights, and part of it is based on my own experience of editing Wikipedia articles in a couple of cases for an organization, working with people like Beutler Ink. Bill Beutler has been an interview guest on this podcast a few times. Which is the speed of things: the one memorable thing that stays in my mind about using Wikipedia, or trying to progress a change or addition, is the humongous length of time it takes with the volunteer editor community.
The defense typically is, well, they’re volunteers, they’re not full-time, they’re not employees, they’re not dedicated; they say you’ve got to be patient, they’re doing it of their own free will to help things. I get all that, I’m a volunteer myself in many other areas, but… That’s great. But as they themselves are saying, things are moving at light speed with AI slop generation, and you can’t afford to have three to four weeks where you, the person editing, have asked the community, is this good? Are you okay with this? What else? And three weeks go by before you get a reply. And often you have to nudge and so forth to get one. That ain’t going to work today. So it needs something better. They have this really interesting-looking project called WikiProject AI Cleanup, which is well defined on Wikipedia. They’re also developing a non-AI-powered tool called Edit Check that’s geared towards helping new contributors fall in line with the policies. So part of the problem, a lot of the time, I think, is the elaborate policies and procedures you’ve got to follow. It’s not user friendly for people who don’t know all this. And they do have a history of not welcoming newbies readily. So all that’s in the background. But this is quite interesting: Edit Check is geared towards helping new contributors fall in line with the policies and writing guidelines. They’re also working on adding a paste check to the tool, which will ask users who’ve pasted a large chunk of text into an article whether they’ve actually written it. So it’s kind of helping that kind of focus. I get what you say, and I don’t disagree, by the way, on the discovery of things, that there might be something good here; I get all that and I hope that continues. But this is urgent; this really does require attention. And I think one of the points in the why-this-matters-to-communicators section is the big one: reputation risk.
I mean, some of the research I did, this is going back a couple of years now when I was working on a particular project, was the reality that when you, let’s say as a communicator, think about something related to your employer, your client, or some work you’re doing about an organization, the first place you will go to typically is Wikipedia, or the first place that shows up in the search results, in that old traditional way we’ve now passed, but as it was back then. You get your, you know, above-the-fold screen full of results on Google. And the three results you’d go to would be, ideally, the organization’s website first and foremost. It might be someone else talking about the organization as maybe the second; the third result is going to be the Wikipedia entry. And then you have a little box on the right-hand side, which summarizes everything, and that’s taken from the Wikipedia entry. So if you have not updated your Wikipedia entry or it is inaccurate, that’s what’s going to show up there. So getting this right is good. But unfortunately, that won’t work in the day of AI slop, because things change so fast. And just wait till agentic AI gets on the case and you’ve got all these bots creating content as well. I think the point about dashes and so forth, that isn’t going to stop anytime soon, I don’t think. And I believe that presents a big challenge to Wikipedia, where you have human verifiers checking things: this article has got 15 dashes, hey, come on, that’s got to be AI generated. So all that kind of thing, they still have to figure out how to do. I think your point that they’ve got to do something about this, absolutely. And this is probably the one thing they’re doing, but there’s more they still need to do, and that is likely to be quite a challenge because of the speed with which this is evolving.
So I think it comes down to, I suppose, you, the information consumer, the user of the stuff you’re finding: you absolutely need to do your own due diligence, more than ever you have done before. Don’t just believe Wikipedia because, hey, it’s Wikipedia, it’s a community-generated site. I hear all the stories, I’m sure you have too, that that’s the reason why it’s no good, because it’s community generated. I don’t buy that argument. I’ve rarely had any issues. And one thing I do use a lot myself as part of the verification is the talk pages and the history of editing pages. And you get a feel for, you know, what’s been happening here and so forth. Plus, there are services. Beutler has one. There’s another one, Wiki Alerts, owned by that guy we interviewed in Israel who was behind it, that will notify you whenever a page you’re paying attention to has changes, and tells you what the changes are. And Wikipedia itself has some pretty good analytics now, and alerts and so forth and so on. So communicators can help with this as well, reviewing pages and content they know about to make sure there aren’t any issues they should be concerned about. But that’s the regular climate. Then you have AI literacy: communicators need to know, or need to know how to get help, in recognizing AI-generated text. There isn’t a single guide. There’s lots of people with opinions out there. Your own common sense will often help you. Pitfalls like fabricated citations: how do you really check those? Responsible use in professional contexts. This brings that to the forefront again, like it was originally when, as you mentioned, organizations, schools, and academia banned the use of all this back in the 2000s.
Now we’re in the 2020s and this is becoming more urgent, it seems to me. Shel Holtz (12:05) Yeah, and you talked about the difficulty in having action taken when you propose a change to an article. Where I work, I check our Wikipedia entry. The first time I did, I saw that the earnings hadn’t been updated in about six years. So I left a note saying, I can’t make the change, I’m a representative of the company, but these are earnings from six years ago. Here are our most recent. This is the doc. @nevillehobson (12:21) Yeah. Shel Holtz (12:32) I heard crickets. Absolutely nothing. So I wonder if agentic AI might actually be a solution for Wikipedia down the road. When a challenge like that comes up, an agent will go out and find the correct information and maybe send a note to the editor saying I have confirmed this information or I have found this information to be not accurate. Just put that step in there to speed this up. The other issue that I think is going to be interesting is that the quality of the output is only going to improve. And whereas you can tell bad writing from a bad prompt right now, well, first of all, you’re going to have more people learn how to prompt well, which will make it harder to identify that it was done by AI, especially if somebody takes five minutes to go through it and edit it and make a few revisions. And the AI is just going to crank out better stuff Shel Holtz (13:25) as new models appear, because the work that they’re doing is just designed to produce better outputs. That’s going to make it harder to find these things. So again, finding a way to identify it and address it has to be top of mind. And the current process, I think, is just a first step, what they’re doing now. It’s not going to scale. @nevillehobson (13:46) No, totally, I agree. So that would help. But they would have to make significant changes to their structure and the whole policies and procedures setup.
So one of the things that is at the front of this, and you mentioned what you’d found for your company, I’ve come across that many times as well. It’s no good sending them information when you, the representative of the company, have disclosed that fact. Great. Doesn’t matter. You’ve sent them information that is not neutral point of view, no matter Shel Holtz (13:54) yeah. @nevillehobson (14:14) how you see it. If it’s your own website, for instance, you could even say, well, this is what it says on the SEC website as well; that might help. But they explain in excruciating detail what neutral point of view means and how you can provide reliable, verifiable sources from a third party; reliable and, I forget the other word, but third party, in other words totally neutral. The famous example I’ve heard so many times is probably in the very early 2000s, a British author asked Wikipedia to correct his entry because his date of birth was wrong. And they said, sorry, you’re not a neutral point of view. We can’t do that. And they refused to make the change. That struck a lot of people as utterly absurd. But if you actually read the policies on this, it’s not absurd at all. Shel Holtz (15:03) No, it’s just what the policy is. @nevillehobson (15:03) So what that author needs to do is find a source neutral to him or her, tell Wikipedia and provide the source proving it, which could be, here’s a copy of the birth certificate from Somerset House or whatever it was at the time. It’s clear, again in the context of communicators, when the first round of guides for PR people came from the CIPR back in 2012, that the need for this was apparent very, very quickly: to educate PR folks particularly, who did not grasp that concept of neutrality and neutral point of view. So they’d have to change a lot if agentic AI got into the mix there.
I think the more pressing realization perhaps is not agentic AI as your ally, not at all. It’s a tool for the bad guys to create really questionable content. You’d never spot that. And that’s where you need another AI to pay attention. So all these things are probably in the mix there somewhere. The project they’ve got, the AI Cleanup project as they’re calling it, is editing advice for community members. It’s a little light on detail, but I haven’t drilled into all the stuff on the menu you can go to; how to do this is actually very well thought through, where it goes into detail that you wouldn’t think of, you know, broken links and how to fix them so they’re not broken, finding something that does work properly, fixing common mistakes with free images in Wikimedia Commons and the copyright issues surrounding those. So all that’s in there too. But what strikes me at the end of it, Shel, is that there are smart people thinking about all of this, and it’s great. But I’m not sure that humans doing this alone are going to be able to grasp the speed with which this is upon us. And that’s where I think AI generally, whether it’s agentic or not, I don’t know, could be a huge aid: here’s something that needs reviewing and checking fast, boom, put your AI on it, not a human. Although you’ve got to have the human as the kind of ultimate arbiter of whether something is to be removed or not. So it’s a challenge for the very methodology of Wikipedia, it seems to me, and this reliance on community-generated content, because the bad guys, and I don’t mean only that there are people deliberately doing this, but the bad guys embracing bad agentic AI, are moving faster than you can think. And that’s the bigger threat to Wikipedia, it seems to me.
Shel Holtz (17:39) One closing thought, and that is that Wikipedia is the poster child for user-generated content, but they’re not the only ones. User-generated content has been embraced by a lot of organizations. And if yours is one of them, you’re probably facing, at a smaller scale, the same issue with people contributing content that they used AI to produce. And some of it may be quite bad. So you might want to keep an eye on what Wikipedia is doing in order to inform your own @nevillehobson (17:55) Yeah. Shel Holtz (18:07) processes for addressing this problem in your own organization. And that’ll be a -30- for this episode of For Immediate Release. @nevillehobson (18:11) Total sense.   The post FIR #477: Deslopifying Wikipedia appeared first on FIR Podcast Network.
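The page-monitoring workflow Neville describes (services like Wiki Alerts that flag changes to pages you care about) can be approximated against the public MediaWiki API, which returns revision metadata for any article. The sketch below is purely illustrative, not any vendor’s actual implementation: the `revisions_since` helper and the sample data are hypothetical, though the `revid`, `user`, and `timestamp` fields mirror the shape of a MediaWiki `action=query&prop=revisions` response.

```python
# Sketch: flag revisions of a watched Wikipedia article that are newer
# than the last timestamp a communicator reviewed. Field names follow
# the MediaWiki revisions API; the workflow itself is an assumption.

from datetime import datetime

def revisions_since(revisions, last_seen_iso):
    """Return revisions newer than the last timestamp we reviewed."""
    # MediaWiki timestamps end in "Z"; older Pythons need "+00:00".
    cutoff = datetime.fromisoformat(last_seen_iso.replace("Z", "+00:00"))
    fresh = []
    for rev in revisions:
        ts = datetime.fromisoformat(rev["timestamp"].replace("Z", "+00:00"))
        if ts > cutoff:
            fresh.append(rev)
    return fresh

# Sample data shaped like a MediaWiki revisions response (hypothetical).
sample = [
    {"revid": 1201, "user": "EditorA", "timestamp": "2025-08-10T09:00:00Z"},
    {"revid": 1202, "user": "EditorB", "timestamp": "2025-08-14T16:30:00Z"},
]

for rev in revisions_since(sample, "2025-08-12T00:00:00Z"):
    print(f"New revision {rev['revid']} by {rev['user']}")
```

In practice you would fetch the revision list from `https://en.wikipedia.org/w/api.php` on a schedule, persist the last timestamp you reviewed, and alert on anything the filter returns.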
Aug 12, 2025 • 24min

FIR #476: Rewiring the Consulting Business for AI

Swarms of consultants descend on companies that have engaged their firms, racking up billable hours and cranking out PowerPoint presentations that summarize the data they’ve analyzed. That business model is at risk, given the amount of that work that AI can now handle. Recognizing the threat, some consulting firms are actively reengineering their businesses, with McKinsey out in front. In this short midweek episode, Shel and Neville review the actions of several firms and agencies, and discuss what might come next for consultants. Links from this episode: AI Is Coming for the Consultants. Inside McKinsey, ‘This Is Existential.’ The AI Revolution in PR: A Wake-Up Call for PR Agencies Inside the AI boom that’s transforming how consultants work at McKinsey, BCG, and Deloitte EY CEO says AI won’t decrease its 400,000-person workforce — but it might help it double in size PwC is training junior accountants to be like managers, because AI is going to be doing the entry-level work How AI is Redefining Strategy Consulting: Insights from McKinsey, BCG, and Bain Will AI Empower the PR Industry or Create Endless Seas of Spam? How AI Agents Benefit PR Agencies Study reveals rising application of AI across communications by the public relations industry Adapting to Change: The Key Trends Redefining Public Relations Firms July layoffs up 140% from last year The next monthly, long-form episode of FIR will drop on Monday, August 25. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. 
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript: @nevillehobson (00:02) Hi everyone and welcome to For Immediate Release. This is episode 476. I’m Neville Hobson. Shel Holtz (00:08) And I’m Shel Holtz. If you’ve been following the consulting industry lately, or maybe you’re part of it, you’re aware that AI is all the buzz. The Wall Street Journal recently reported that McKinsey & Company, the gold standard in management consulting, is deep in an existential transformation. And if you weren’t watching the video, you didn’t see me make air quotes around existential transformation. It’s their words. AI is the catalyst for that transformation and the realization that it can do a lot of what McKinsey’s highly paid human consultants do faster, cheaper, and sometimes just as well. For nearly a century, McKinsey has built its business on armies of bright young consultants, fresh from top universities, synthesizing vast amounts of complex information and advising C-suites on what they ought to do next. But now, instead of a small battalion of analysts, a project might require just two or three people, along with an assortment of AI agents, tools that write in the classic McKinsey tone, check the logic of arguments, summarize interviews, and crank out PowerPoint decks. McKinsey has rolled out, are you sitting down, 12,000 of these AI agents, and its CEO predicts a not-too-distant future with one AI agent for every human employee. And they’re not alone. Boston Consulting Group has Deckster for presentation building and Gene, a conversational assistant. Deloitte has Sidekick and Zora AI. PwC, KPMG, EY, they all have their own fleets of AI helpers. At McKinsey, over 70% of employees are already using a tool called Lilli, which taps into a century’s worth of the firm’s knowledge.
EY is using AI with 80,000 of its tax professionals, and rather than seeing that as a job killer, its CEO says it could actually help them double the firm’s size. And that’s an important point. While AI is eliminating some roles, particularly entry-level repetitive work, it’s also changing the skill mix. For consultants, that means fewer suits with PowerPoints and more partners in the trenches, co-creating solutions with clients and helping organizations implement change. As one Oliver Wyman executive put it, the age of arrogance of the management consultant is over. Clients don’t want abstract strategy anymore; they want execution, training and transformation. Now for those of us in organizational communication, there’s a clear parallel. AI is already reshaping our own work in much the same way, handling media monitoring, sentiment analysis, and first-draft content creation. In public relations, agencies like Edelman and Golin are using AI to track reputation, analyze audience sentiment, and even test campaign ideas on synthetic focus groups. The USC Annenberg study on AI and communications found many agency leaders are building cultures of experimentation around AI, seeing it as a way to free their teams for higher-value work. But, and this is a big but, the risks are real. In both consulting and PR, the differentiator going forward won’t be who can use AI. It will be who can layer human judgment, creativity, and trust on top of AI speed and scale. As one McKinsey partner put it, the basic layer of mediocre expertise goes away. The distinctive expertise becomes even more valuable. And that’s a key takeaway for communicators. First, the business model has to adapt. Just as consulting firms are moving from billable hours to outcome-based fees, agencies are rethinking traditional retainers in favor of value-based pricing, charging for results, not hours. We’ve talked about this on FIR before.
Second, the human skills that AI can’t replicate, relationship building, empathy, strategic thinking, become even more critical. And third, the organizations that thrive will be the ones that treat AI as a collaborator, not a threat. Shel Holtz (04:16) just as McKinsey is doing by pairing agents with experienced human experts. The AI era isn’t all about being replaced. There’s plenty of validity in the augmentation argument. The winners are gonna be the fast learners, the ones who can adapt their craft, rethink their value proposition, and work seamlessly with both humans and machines. But let’s not sugarcoat this. The threat to jobs isn’t hypothetical. According to HR Dive, US employers cited AI as the reason for over 10,000 job cuts in July alone, and layoffs surged 140% year over year in that month. In total, more than 800,000 job cuts have been announced so far this year. Now, many of these AI-related layoffs are under-reported or hidden, buried under euphemisms like technological updates or restructuring. The outplacement firm Challenger, Gray & Christmas notes that only 75 layoffs in the first half of the year were explicitly attributed to AI, but they warn the real number is almost certainly higher, much higher. This isn’t theoretical. Across sectors from consulting to communication, the integration of AI doesn’t just augment roles; it reshapes them and in some cases eliminates them entirely. In any case, the future of consulting and of communication belongs to those who can bring something to the table that no chatbot ever will. @nevillehobson (05:40) Yeah, but it’s interesting. It’s quite apparent that this will have a dramatic impact. I see a lot of articles and opinion pieces published about the kind of tactical use to make of AI. But what you’ve explained in terms of what firms like the consulting firms you mentioned are doing, this is quite a significant shift that’s happening.
A trigger in my mind reminded me of a post I wrote on my own blog in February about the need to change the billable hours model. As you said, we talked about it on FIR. It actually came up in our interview with Steve Rubel earlier this year as well. And I’ve been trying to do this. This is the catalyst where you can’t ignore this. No one’s going to want to pay a consultant on an hourly retainer basis when AI in its various shapes and forms, particularly agentic AI, is doing the grunt work; it’s going to be totally apparent to anyone that that’s what’s doing it. Your value is kind of like, this is what it means, Mr. Client, or actually more than that, much more than that, in fact, although AI will help you do that. So it will require a rethinking of the whole relationship model between consultants and clients. And I truly doubt more than 5% of consultants, which basically is the big ones you’ve mentioned, are ready for that. I don’t think they are. That doesn’t mean to say that no one’s going to change. It’s going to be a painful process. The ones who I think will benefit from this will be the ones who are already making plans for a shift. And it could be that a number of organizations are already having conversations with clients who are quite happy, quite comfortable with how things currently are and don’t really like the idea of change. That is all to do with human resistance to change more than anything, I would say. You could give them the logic about here’s the cost benefits, why we should change, but it’s the emotional bit you need to bring into this, which I think is an opportunity for consultants, as you mentioned in your narrative, to position you, the consultant, as the critical addition to the mix that blends the value of agentic AI and other aspects of artificial intelligence with your irreplaceable skill in literally bringing that to the table, to illustrate to the client
what it means for what they’re trying to achieve and the role that these tools will have. This is not new at all in the sense of suddenly we’re talking about it. We’ve been talking about this in different ways for quite a while. I think, though, most of the conversations I’ve seen tend to be at a really high level and very technically focused. So this needs conversations on a far more emotional level, so that people can grasp quite clearly what the benefits are. PRSA had a report just a month ago in June about how AI is driving PR innovation. It’s not about the main topic here, but it’s absolutely connected to it, because that’s exactly how you’re going to be able to, to use a word I dislike intensely but it’s apt, harness the power of this, which enables you to do the things that the PRSA mentions in their report: hyper-personalized media pitches, they talk about, predictive crisis simulation, real-time event adaptation, visual storytelling, and cross-cultural adaptation; that one’s most interesting. So you’re seeing these are the kind of outcomes that you would get when you harness all this intelligence, combining yours with, well, the algorithmic intelligence of artificial intelligence. So it’s an exciting time, although I fear that many people will not do well out of this change that’s coming. Shel Holtz (10:06) Undoubtedly not. It reminds me a lot of this notion that I shared in the early days of the net and basically the computer revolution in business, which you and I have both been around for. I mean, you and I both worked in days where we had typewriters and fax machines, absolutely. And it was… @nevillehobson (10:28) Fax machines, yeah, true, after that fax machine. Shel Holtz (10:34) Something, I don’t remember if I heard this or just observed it, but the first thing we tend to do with a new technology is stuff that we were doing with the old technology. The first thing we did with computers was type. Word processors, right?
It was WordStar and WordPerfect. And it was just a replacement for a typewriter. And instead of a ledger, we had Excel for… @nevillehobson (10:49) Yeah. Shel Holtz (11:01) keeping our numbers. It was the same job we were doing before; we had just found this technology that allowed us to do it better and faster and easier. But it was stuff that we were already doing. And even with that list that you read from the PRSA report, the things that they have figured out how to get AI to do are things that we are already doing. Where technology gets interesting is after you move beyond that and say, okay, we’ve adapted this technology so that it makes our lives easier with all of these tasks that we had to perform in a more manual way before. What else can we do with this? That’s where it gets interesting: where you start to see what these things can do that you hadn’t imagined, that you weren’t already doing, that benefit your life and your organization. And when you look at this idea of reshaping the business, I don’t think most businesses of any kind, consulting or anything else, are there yet. They’re still at the how can we do what we’re already doing better, faster, cheaper than we were doing before by using AI, and not how can we rethink the way this business operates. And McKinsey, I think, is already heading there. They’re already saying, look, rather than sending a pod of 30 consultants out to the client and taking over 10 offices on site and having this presence and cranking out PowerPoints and crunching all of this data, we’re just gonna have some people in the trenches there working with them. The AI will come up with Shel Holtz (12:50) the data analysis and all of those things that it’s good at.
We’re going to be with the client figuring out how to bring this into the workplace, how we’re going to synthesize this into their operations, and how we’re going to train them and prepare them and guide them through this shift, whatever it may be, whatever they brought us in to consult with them about. So they’re rethinking what their business is all about and how they’re going to execute it. And it doesn’t look a lot like what it looked like before, but to their credit, they’re ahead of it. They’re not waiting the way so many industries did in the world of Web 1.0, Internet 1.0, where newspapers started to suffer because they didn’t see the writing on the wall and figure out a better way, or a different way, to fulfill their fundamental mission, which is getting news and information into the hands of the people in the communities that they serve. So it’s good to see McKinsey doing this. It’s worrisome that we don’t yet see many other organizations there; they’re still in that phase of how can this replace or augment the tactical work that we do, not how does it reshape how we get our work done, what our business is all about, and how we do it.

@nevillehobson (14:19)
Yeah, I think there’s also another element here. You’re talking about, you know, McKinsey, Ernst & Young, these mega-big consulting firms with typically huge enterprise-level clients in different countries all over the world. And that’s great, because they will lead the way for that level of relationship building with clients who have needs very different from the small to medium enterprise, the small consulting firm, or the mid-size consulting firm for that matter, who aren’t going to be about changing the whole business model. They’re going to be about what they can deliver to the client that makes absolute sense for where that client’s at on this journey that we’re all on.
So it could well be that you won’t see uniformity in this at all. What we’ll see, which is most welcome, is the McKinseys and the Ernst & Youngs taking the strongest lead to reinvent the business of consulting, if you like. And that’ll filter out gradually. It’ll be very uneven. And some won’t do it, in which case they may not survive or they may not evolve. Who knows what’s going to happen in that sense. But that’s good, in my opinion, because the kind of let’s-look-at-how-AI-will-help-us-entirely-reshape-our-business-model thinking isn’t for everyone right now. If you’ve got a 10-person consulting practice, that’s not for you, I would think. Or you could argue, well, actually, it might be; it should be easier because you’re smaller and more nimble. We don’t really know what this is going to look like. But what I see is a two-way pressure stream where clients are going to be pressuring their agencies, their consulting partners, if you will: I’m hearing about this, let’s talk about this, we want to do X. You’ve then got an opportunity to say, let’s talk about that, because I’ve got three ideas that could work that are absolutely on this avenue. But it’s not going to happen quickly, and it isn’t going to be for most businesses, I would argue, not yet. For the big ones, where there’s a lot at stake, they need to do things like, hey, we’re not going to send 100 consultants out to this client at X hundred dollars per hour or whatever it might be; we’re going to work out a deal where the outcomes are really what we’re going to be judged upon. And yes, we’ll have a hundred agentic consultants and there’ll be three human beings who lead them. Or, as we have discussed on FIR, Shel, one of the leaders may well be another AI. But that’s not for everyone. These are the kind of big-picture ideas that scare the hell out of a lot of people, I have to say. So this is uneven. It’s exciting or a nightmare depending on your point of view.
I think it’s exciting, personally. I could see, for instance, that independent consultants are the ones who are probably most at risk, where you’ve got a nice gig going with a handful of clients, you all like each other, they know you. But as an individual, you just absolutely cannot deliver what they need without partnering with someone else. There’ll be a big shake-up, I would say, but it’s going to take a while. And by a while, I don’t mean 10 years; you’re looking at two to three years before you see an impact from this, I would venture. So interesting times ahead, as someone said recently.

Shel Holtz (17:45)
Yeah, it’s definitely the fastest growing technology we’ve ever seen, which is a conundrum, because we tend to adapt at the same pace that we always have, regardless of the fact that this technology is growing by leaps and bounds. And for those smaller consulting firms where maybe now is not the time, I would worry about that. Because if I get an assignment, and I go into a client, and I’m presenting them with a monthly invoice showing hours burned on this client’s work, and I’ve got 15 people taking four months to do this assignment, and I’m hearing through my network about colleagues that are working with other consulting firms that are getting it done in three weeks with a team of two consultants and getting great outcomes and spending less, then I am going to worry about my client wanting to work with them because they’re more efficient at it.

@nevillehobson (18:48)
Yeah, I agree, but that’s not really what I’m talking about, because not everyone is going to be impacted by this yet, not at all. And in fact, I don’t believe it’s feasible to say everyone suddenly has got to change right now. Get rid of the billable hours model? Indeed, indeed, many are not. You’re right. They ought to start thinking about it.
You’re not going to see the kind of horrific outcome, suddenly, that this nice little consulting firm

Shel Holtz (19:04)
I think everybody needs to start thinking about it right now.

@nevillehobson (19:18)
suddenly lost all the clients because they can’t deliver the AI content that they need to. Some will argue, yeah, we’ll use AI models, and we know how to do that, and we can deliver what’s right for this client, where the role of AI is important in the fit with humans. But the conversations about how it’s going to replace your employees and there are going to be agents all over the place aren’t realistic in the way most people are talking about it, it seems to me. So I think you’re right, and I don’t disagree at all, that everyone needs to be cognizant of what’s coming down the track in one way or another. And so people who are talking about it, people who are writing about it, or whatever it might be, or demonstrating how this all works: pay attention to that. I’ve attended a good four webinars in the last month, yes, four, not on this exact topic, but in this area of working with the new elements of artificial intelligence that are coming into the workplace. And they’re all useful, slightly different. I learn something new from all of this that shapes my thinking, if you like. So it is a time to be aware that this is happening. And that’s where paying attention to the McKinseys and so forth is really important.

Shel Holtz (20:38)
Yeah, I think if you’re a consulting firm, especially a smaller one, it’s time to start considering: what differentiates you from your competition? What are your genuine strengths? What do you bring to the table that an AI chatbot can’t? What do your clients really appreciate about you? And to start thinking about how do we leverage those things that can’t be commoditized with the artificial intelligence that all of the consulting firms are gonna be using in roughly the same ways.
I think that differentiator is going to be critical in the early thought process on all of this.

@nevillehobson (21:25)
Yeah, you’re right. And I’m just thinking, I’m looking at what we discussed with Steve Rubel back in the early part of this year, early February. So that’s seven months ago. A lot has changed in seven months already. But what Steve says, I think, is still the case. He says, you know, we know this: AI is not a substitute for PR professionals. It’s a force multiplier, he says. AI can analyze trends, detect patterns, and all that; it still requires human intelligence to interpret, contextualize, and act on these insights. That, I totally agree, is still true. I would add: for how long, though, I wonder, will it be only humans who can do that? But it is a significant value of PR, as Steve points out. And he talks about who’s gonna survive, who’s gonna thrive in this era that’s coming. Well, in simple terms, those who embrace AI as an augmentation tool rather than resist its impact. You know what? That conversation was common just seven months ago. Now, though, I don’t believe it has the same, well, impact, frankly, where you talk about: if you don’t do this, you won’t thrive; if you resist, it’s going to happen no matter what. We’ve moved to the how rather than the what in that conversation, it seems to me, even for those skeptics who don’t see this. So strategic advisory roles rather than commoditized execution work is what humans need to be thinking about, according to Steve. I don’t disagree with that.

Shel Holtz (22:53)
Not at all. And that will be a 30 for this episode of For Immediate Release.

The post FIR #476: Rewiring the Consulting Business for AI appeared first on FIR Podcast Network.
