

The FIR Podcast Network Everything Feed
Subscribe to receive every episode of every show on the FIR Podcast Network
Episodes

Jan 5, 2026 • 7min
FIR 21st Anniversary Celebration
In which Neville and Shel take a few minutes to acknowledge FIR’s 21st birthday.
The post FIR 21st Anniversary Celebration appeared first on FIR Podcast Network.

Jan 5, 2026 • 27min
FIR #495: Reddit, AI, and the New Rules of Communication
Reddit, the #2 social media site in the US, has surpassed TikTok to become the #4 site in the UK. It has no algorithm that forces you to see what’s most likely to keep you on the site; it just lets users upvote what they think is most interesting, valuable, or relevant. Every topic under the sun has a subreddit. Several organizations, from Starbucks to Uber, have taken advantage of it. So why is it absent from most communicators’ list of social media platforms to pay attention to? Neville and Shel look at Reddit’s growing influence in this episode.
Links from This Episode:
Reddit overtakes TikTok in UK thanks to search algorithms and gen Z
Brian Niccol said a Reddit thread of people interviewing for his company showed him that his ‘Back to Starbucks’ plan was working
Playing Defense: How (and When) Big Brands Respond to Negativity on Reddit
Wayfair uses Reddit Pro to help redditors get answers, and grow traffic as a result
Uber puts Reddit Ads in the Driver’s Seat and cruises to significant lifts
Reddit category takeover contributes to 5X higher Ad Awareness for OREO x STAR WARS™ collaboration
The next monthly, long-form episode of FIR will drop on Monday, January 26.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel Holtz: Hi everybody, and welcome to episode number 495 of For Immediate Release. I’m Shel Holtz.
Neville Hobson: I’m Neville Hobson and let’s start by saying we wish you a happy new 2026. We’re recording this in the first week of January, so it’s a new year. Last week the Guardian reported something that might surprise people who still think of Reddit as a noisy corner of the internet best avoided. In a deep analysis, the paper noted that Reddit has now overtaken TikTok to become the fourth most visited social media site in the UK, with three in five UK internet users encountering it regularly, according to Ofcom, the industry regulator. Among 18 to 24-year-olds—the Gen Z cohort—it’s one of the most visited organizations of any kind. And the UK is now Reddit’s second largest market globally, behind only the US.
That growth hasn’t happened because Reddit suddenly reinvented itself; it’s happened because the wider internet has changed around it. Google’s search algorithms now prioritize what it calls “helpful content,” particularly discussion forums. Reddit threads increasingly surface high in search results, and they’re also being cited heavily in AI-generated summaries. Reddit has licensing deals with both Google and OpenAI, which means its content is being used to train AI models and then redistributed back to users as part of search and discovery.
At the same time, users, particularly Gen Z, are actively seeking out human-generated content—not polished brand messaging or single definitive answers, but lived experience, contradiction, debate, and advice that feels like it comes from real people dealing with real situations like parenting, money, housing, health, and sport. Jen Wong, Reddit’s chief operating officer, described this as an “antidote to AI slop.” Reddit, she says, isn’t clean; it’s messy. You have to sift through different points of view, and increasingly, that is the point.
For communicators, this raises several important points. For a start, Reddit is no longer a niche platform you choose to engage with or ignore. It’s become part of the discovery layer of the internet. People may encounter your organization, your industry, or your issue there before they ever see your website or your carefully crafted statement. Search visibility is no longer just about content you own; it’s about conversations, which search engines and AI systems are now amplifying at scale.
Many organizations are still quietly hoping Reddit will remain hostile, chaotic, or irrelevant enough to ignore. That stance is becoming harder to justify when government departments are hosting AMAs (“Ask Me Anything”) and major public narratives are forming in plain sight. Finally, lurking is no longer neutral. Silence can allow perceptions—accurate or not—to solidify without challenge, context, or correction. So the question for communicators isn’t whether Reddit is for them, it’s whether they’re prepared for a world where human conversation, amplified by algorithms and AI, shapes reputation just as much as official messaging does. Look at the Omnicom layoffs announced not long before Christmas and the significant role Reddit played as a communication channel parallel to official company communication. We discussed this in depth in FIR 492 just a few weeks ago.
So, Shel, this feels like another signal that the ground is shifting under communicators’ feet. Where would you start unpacking what this means?
Shel Holtz: Well, if the ground is shifting, it’s because communicators weren’t standing in the right place in the first place. Reddit has been a significant and important platform for a long time. I’ve been advocating for communicators to start taking advantage of it for many years. I’m glad to see it getting this kind of attention, and there are a lot of reasons to consider using this in multiple ways—including the fact that AI is now relying on Reddit for some of the content that it’s trained on.
Let’s look at just a couple of things about Reddit. First, the people on Reddit are very committed to the communities they’re part of. This is not a “drop-in” community like we see on LinkedIn, nor is it the kind of tight, insular community you see on Facebook. These communities welcome new people, but they’re looking for people who are committed to engaging, sharing, and contributing. Second, there’s no algorithm driving what rises to the top. The community upvotes the most valuable posts, which is why you see the most valuable information at the top of any thread. It’s why, in the early days, BuzzFeed relied on Reddit to determine what content it was going to publish. Reddit had the nickname “the front page of the internet,” and how you can ignore that eludes me.
If you look at what happened with Omnicom, that’s just one of the things Reddit is useful for: social listening and insight generation. It’s also useful for issues management and crisis communication. If these large communities are talking about your industry, company, or product and you’re not listening, you’re missing conversations that would otherwise happen over “sneakernet” (people talking to each other voice-to-voice or over instant messages where you can’t hear them). This is where you gather the intelligence that helps you come up with the next product iteration or address issues important to your stakeholder base.
I use Reddit basically two ways. One, whenever I have a problem with a product, like my Nikon Z6 II camera, there is a community there more than happy to answer my question. While I’m there, I’ll scroll through and see if there’s something I can contribute, because it’s important to give as well as take. The other is monitoring construction subreddits for good intelligence that I can share up in the organization. There are so many other ways to take advantage of Reddit, and now is the time to invest.
Neville Hobson: Yeah, I’ve had a Reddit account for about 10 years. In those early days, it was very much a geeky place, not really mainstream. But the reality, as the Guardian’s analysis outlines, is that you can’t treat it like that anymore if you’re wearing a business hat. It is showing up in places like Google’s AI Overviews and is heavily surfaced in search results because of the licensing deal that allows Google to train models on Reddit data.
The UK government is active on Reddit, with departments hosting “Ask Me Anythings” to engage with people. That sort of activity is probably more appropriate for Reddit than LinkedIn, where I’ve seen government activities attract nothing but extreme, politically motivated negativity in the comments. On Reddit, you’re probably going to get a more balanced view.
The Omnicom example was really intriguing. The depth of comment on Reddit told lived-experience stories that contrasted sharply with the formal communication from corporate communicators. It was an object lesson in how not to do this from a corporate point of view. Ignoring it is not an option anymore.
Shel Holtz: You mentioned “Ask Me Anythings.” That is a great opportunity to present your CEO or subject matter experts to build reputation proactively, or reactively during a crisis. Siemens did an AMA featuring its engineers and reported strong click-through rates. Novo Nordisk leaned into sensitive topics and reported an “astoundingly positive reception.” Oatly and IBM also reported strong engagement and brand lift through this format. Of course, there are disasters when executives are not well prepared; authenticity is highly valued.
Community engagement is another missed opportunity. Wayfair uses discovery tools on Reddit to surface conversations about their service and pops in to answer questions and address issues. You can build relationships with customers, enthusiasts, and even critics. You can also use it for your employer brand to monitor interview processes and culture signals. The CEO of Starbucks explicitly treated a Reddit hiring thread as a signal that a culture shift was taking hold.
Neville Hobson: I think one reason for past failures is that companies brought their old methods of communicating to a place where that just doesn’t work. The Guardian findings show that human experience now outranks polish. If you come to Reddit with all your corporate baggage and structured messaging, it’s not going to work. Users are actively seeking “signals of humanity,” and messiness is becoming a trust cue. It’s an “anti-automation” movement. Lurking is no longer neutral because you are being talked about whether you are present or not.
Shel Holtz: There’s an illusion of control that you get from things like press releases, but get over it—you don’t control the conversation. To be credible in these spaces, you have to stop being polished. “Press release voice” is a trigger on Reddit; plain talk is valued. Make sure you have the right subject matter expert in the right subreddits who can talk in a plain voice. Don’t just do “drive-by” communication when you need something; be a regular contributor.
Neville Hobson: So, human experience-led communications are regaining strategic value. You can’t ignore this.
Shel Holtz: LinkedIn’s value seems to be diminishing as it turns into a combination of Facebook with non-business content and AI-generated posts. If you’re looking for a community to tap into people who care about what you do, Reddit is the best place. You can even use paid amplification—Uber and Oreo have reported brand lift from boosted posts. Don’t dismiss it as hostile; develop a strategy and start doing it.
Neville Hobson: Keep an eye on the resurgence of other networks, too. The new “Digg” is coming, which was a fixture like Reddit in the early days. There is also “Tangle,” a new one from one of the Twitter founders focused on genuine conversation.
Shel Holtz: I’d keep an eye on them, but Reddit already exists with millions of users and tens of thousands of subreddits. Use it. Don’t ignore it. And that’ll be a “30” for this episode of For Immediate Release.
The post FIR #495: Reddit, AI, and the New Rules of Communication appeared first on FIR Podcast Network.

Dec 29, 2025 • 1h 40min
FIR #494: Is News’s Future Error-Riddled AI-Generated Podcasts, or “Information Stewards”?
In the long-form episode for December 2025, Neville and Shel explore the future of news from two perspectives, including The Washington Post‘s ill-advised launch of a personalized, AI-generated podcast that failed to meet the newsroom’s standards for accuracy, and the shift from journalists to “information stewards” as news sources. Also in this episode:
WPP founder Sir Martin Sorrell argued that PR is dead and advertising rules all.
Is AI about to empty Madison Avenue
Should communicators do anything about AI slop?
No, you can’t tell when something was written by AI
In Dan York’s tech report: Mastodon’s founder steps back, and new leadership takes over; the UN reaffirms a model of Internet governance that involves everyone; and Dan talks about what he’ll be watching in 2026, including decentralized social media, agentic AI, and Internet technologies.
Links from this episode:
Sherilynne Starkie’s “Stark Raving Social” podcast
Neville’s Strategic Magazine article: Your Value is Not Your Timesheet
Questions of accuracy arise as Washington Post uses AI to create personalized podcasts
‘Iterate through’: Why The Washington Post launched an error-ridden AI product
Washington Post Says It Will Continue AI-Generating Error Filled Podcasts as Its Own Editors Groan in Embarrassment
The Washington Post Deployed Its Disastrous AI-Generated Podcasts Even After Internal Tests Showed It Was Failing Miserably
Washington Post Stands Behind AI Podcast Plan Despite Staff Outcry
Washington Post’s AI-generated podcasts rife with errors, fictional quotes
Radio 4 Today segment featuring Martin Sorrell and Sarah Waddington
Martin Sorrell: There’s No Such Thing as PR Anymore
Martin Sorrell: The PR Industry is Over-Sensitive
Chris Gilmour LinkedIn Post on Martin Sorrell
Stephen Waddington’s Facebook Post on the Sorrell-Waddington segment
Sir Martin Sorrell Declares PR is Dead. PR Pros Respond
The Future of News is Happening Where No One is Looking
This is Local News Now
Social Media and News Fact Sheet
The State of Local News
AI is About to Empty Madison Avenue
AI Slop: How Every Media Revolution Breeds Rubbish and Art
Merriam-Webster’s word of the year delivers a dismissive verdict on junk AI content
Pinterest Users Are Tired of All the AI Slop
The Impact of Visual Generative AI on Advertising Effectiveness
No, You Can’t Tell When Something Was Written by AI
How Can You Tell if AI Wrote Something?
Wikipedia: Signs of AI Writing
Detecting AI-written text is challenging, even for AI. Here’s why
FIR Interview: AI and the Writing Profession, with Josh Bernoff
FIR #464: Research Finds Disclosing Use of AI Erodes Trust
Neville’s Blog: When AI Lets Go of the Em Dash
Links from Dan York’s Tech Report:
Eugen Rochko on Mastodon’s blog: My Next Chapter with Mastodon
Mastodon Blog: The Future is Ours to Build
Tim Chambers: My Open Social Web Predictions
Internet Society: WSIS 20 Reaffirms Multistakeholder Governance and a Lasting IGF
Wikimedia Foundation: In the AI Era, Wikipedia Has Never Been More Valuable
Landslide: A Ghost Story
Raw Transcript:
Neville Hobson Hi everyone, and welcome to the For Immediate Release long-form episode for December 2025. I’m Neville Hobson.
Shel Holtz And I’m Shel Holtz.
Neville Hobson And we have six great stories to discuss and share with you that we hope you’ll enjoy listening to during Twixtmas. What is that, you may ask? Well, Twixtmas is the informal name for the relaxed period between Christmas Day and New Year’s Eve, typically focusing on the 27th to the 30th of December. It’s a time for winding down, enjoying leftovers, watching TV, listening to podcasts, and simply existing without the usual hustle of holidays or work before the new year starts.
The name comes from blending Twixt, an old English word for “between,” and Christmas. It’s a modern term for a timeless lull in the calendar, often called the “festive gap.” That’s probably more information than you wanted, but now you know what it means. So, without further ado, let’s begin the Twixtmas episode with a recap of previous shows since the November long-form one.
Shel Holtz We’ll have to start using that over here.
Recent Episodes & Listener Comments
Neville Hobson That was FIR 489, published on the 17th of November. The story we led with in amplifying the long-form episode across social media was an explosion of “thought leadership slop,” where we riffed on a post by Robert Rose of the Content Marketing Institute. He identified idea inflation as a growing problem on multiple levels. Other stories in this 101-minute episode included quantum computing, vibe coding, “Is it OK to use an AI-generated photo in your LinkedIn profile?”, Dan York’s tech report, and more. And we have listener comments on this episode.
Shel Holtz We do, starting with Sherilynne Starkie up in Canada:
“I was just listening to the latest episode and you were commenting about a lack of female participation in podcasting. I thought I’d drop in a plug for my latest show, Stark Raving Social. I started it earlier this year and it delivers bite-sized episodes for marcomms pros. I do ‘how-to,’ ‘why you should,’ and ‘have you noticed’ type shows. I’m a hobbyist, so I publish when I have time and feel inspired, but it’s pretty regular. Last year I had a show where I interviewed 50 women over 50. And although the project’s complete, I still get about a thousand downloads monthly. I’ve been podcasting on and off since about 2007 and was—and still am—greatly inspired by FIR and your excellent work. Thank you.”
Thank you for that, Sherilynne, and hope to see you soon. Sherilynne’s terrific.
We have two comments on this episode from Darlene Wilson. She said:
“Enjoyed all of your content in this episode. Wanted to share that my role shifted from a marketing and comms managerial title to ‘Senior Manager, Corporate Brand and Communications’ a few years ago. It combines communication and brand leadership in one portfolio under which are marketing, sponsorship and events, promo, and change management. It’s a great role for a raging generalist. Moving brand and comms together—or brand under the comms umbrella—does signify part of a shift from end-deliverer of the message to a focus on reputation, trust, judgment, and the ability to oversee and connect what a company says and what it does. Given today’s environment, organizations do seem to want leaders, as Neville said, who bring judgment, sensitivity, and crisis literacy. That’s the comms person bringing broad and strategic thinking. Thank you both for your long-term commitment to this valuable profession.”
She added in another comment:
“The ‘every media revolution has slop’ analogy is directionally useful, but it can underweight what is genuinely discontinuous here: 1. Near zero marginal cost at massive scale, 2. Algorithmic distribution optimizing for engagement, and 3. Slop feeding back into training and ranking systems (i.e., model collapse plus search quality). If you treat it as just another cycle, you may miss that the mechanism is now self-reinforcing in ways Gutenberg-era pamphlets were not. The sources above—Google spam policies plus model collapse plus platform case studies—give you the evidence to make the distinction without turning the argument into moral panic.”
Neville Hobson Great comments. Thank you very much for that.
FIR 490 on the 1st of December: We unpacked some AI studies that claim to show what large language models actually read. But the sources shift month to month, and many citations aren’t reliable at all. We have a comment on this episode.
Shel Holtz From our friend, Niall Cook, who says:
“I don’t think anyone should be surprised that different studies report different results. It’s the same in many other research domains, but especially so here when the prompts, the models, the model parameters, and the methods will always produce differences—in the same way that no two users of the same generative AI system will get exactly the same response for the same question. We shouldn’t conflate visibility and citation reliability, though; two different things.”
Neville Hobson FIR 491 on the 8th of December shone a spotlight on Big Four consulting firm Deloitte, which created costly reports for two governments on opposite sides of the world, each containing fake sources generated by AI. Not only that, but a separate study published by the US Centers for Disease Control also included AI-hallucinated citations and reached the exact opposite conclusion from the real scientists’ research. We have a number of comments on this one.
Shel Holtz We have four, starting with Monique Zitnik:
“I’ve been nearly caught out with a source pointing to a website. After much digging, I discovered the website was AI-generated, and other websites had quoted this website. It was a myriad of AI-invented rubbish that sounded plausible.”
Mike Klein threw some praise your way, Neville. He said:
“It’s also a business model problem, as Neville pointed out in his excellent article for Strategic.”
That’s the magazine that Mike edits and you contributed to. He provided a link which we will add to the show notes; your article was titled Your Value is Not Your Timesheet.
Steve Lubetkin said:
“AI can be a useful tool, but humans need to review and confirm its output. The fact that they don’t or won’t is troubling.”
And Chris Lee wrote:
“You have both done some great episodes this year around AI. Very useful. Thanks. Keep them coming.”
Neville Hobson That was a great comment. Steve actually says it all: you’ve got to check up on all this stuff before you publish anything or rely on something. I see many more people now talking about it. You’ve got to verify everything all the time. You cannot trust it, whether it’s generated by AI or quoted by AI or linked to by the AI; you’ve got to verify all of that.
Shel Holtz Yeah, and I think we mentioned in one episode that I believe—and I think you do too—that there is likely to be a verification role that will be a new job classification. I’ve seen a little bit more about that since we made that assertion. There are actually companies that are hiring people to verify AI.
Neville Hobson That’s interesting, isn’t it?
In FIR 492 on the 15th of December, we looked at how the story of the untimely Omnicom layoffs in the US unfolded with one official investor-focused narrative and another on LinkedIn and Reddit. We observed that when people have platforms, the press release isn’t the whole story. We have one comment on this?
Shel Holtz Yes, from Roberto Capodici. Apologies if I pronounced that wrong. Roberto says:
“I think what’s really interesting here is how the whole situation highlights the tension between curated corporate narratives and the unpredictability of human experience playing out in public forums like LinkedIn.”
Neville Hobson In FIR 493 on the 22nd of December, we discussed how artificial early engagement can manufacture visibility that algorithms and media treat as significant. The tactics aren’t political; they’re portable and already familiar to communicators. It’s alarmingly easy to do.
And finally, we published an FIR interview on the 10th of December where we enjoyed a great discussion with Josh Bernoff about his major survey of writers and AI: the deep divide between users and non-users, productivity gains, AI slop, trust, and the finding that the real story isn’t AI replacing people but re-sorting them. We have a comment or two, I think, Shel?
Shel Holtz We have one. There are more on Josh’s repost of this. This one is from Susan Mangiero, PhD:
“I enjoyed your lively discussion about AI. In fact, I stopped the video and repeated several sections. I don’t think you addressed the use of AI for purposes of author marketing, unless I missed it:
What are your thoughts about using AI to help authors and their collaborating ghostwriters market their books?
Given Shel’s work in the area of employee communications, what are your thoughts about using AI for research? (Note: I do a lot of work with financial clients.) Josh, keep up the great work. I enjoy your blog. And the book survey was fascinating.”
Do you want to tackle these? I’m wrapping up work on a book right now. I have a proposal consultant helping me prepare the proposal, and I am thinking heavily about marketing these days. There’s no question that I will use AI as an aid to this in identifying targets to approach and testing language with different stakeholders. Every opportunity I have to use it to improve the marketing output, I will. I’m not going to outsource this to AI, but if AI can play devil’s advocate for me and help me brainstorm and ideate, I will take advantage of that all day long. What do you think?
Neville Hobson Absolutely, it is a natural tool to use. One of the biggest benefits of AI is its ability to be your right-hand person, an assistant who will work with you: not just responding to things you ask it, but challenging you on things. It’s the same as having a human being by your side, except this one doesn’t need to eat lunch.
It allows you to identify audiences and figure out what messaging is appropriate for which audience, and when and where. It helps you concentrate on the next steps you’re going to take.
Shel Holtz In terms of research for internal communication, I don’t see it as any different from research for external communication. It comes back down to the need to verify everything that you get.
I wrapped up a white paper for my company not too long ago on adaptive reuse of buildings. Since COVID, office occupancy has declined, and some large office buildings have defaulted on leases. The immediate thought is converting them to residences, but it’s harder than you think because of plumbing and natural light issues. The white paper explores other opportunities.
This is way outside my expertise, so I relied heavily on internal experts but also did a lot of research using Google’s Gemini Deep Research. I got a lot of great information, but some sources it found didn’t exist. I would have been humiliated if I had put out a white paper with that kind of information. I spent a lot of time verifying every source and every fact. It took less time than doing the research myself, but it was still time-consuming. As Steve Lubetkin noted, it’s disheartening that there are people who are not doing that.
Circle of Fellows Update
Shel Holtz I want to let everybody know about the most recent Circle of Fellows, which is now available for you to listen to or watch. It was a great conversation about the future of communication in 2026 and beyond. Zora Artis, Bonnie Caver, Adrian Cropley, and Mary Hills were the panelists.
The next Circle of Fellows is coming up on Thursday, January 22, at noon Eastern time. The topic is the impact of mentoring. We have a great panel: Amanda Hamilton-Attwell, Brent Carey, Andrea Greenhous, and Russell Grossman. You can tune in live or watch the replay on the FIR Podcast Network.
1. The Washington Post’s AI Podcast Debacle
Shel Holtz The core currency of a news organization isn’t its reporting; it’s trust. In mid-December, The Washington Post decided to trade that currency for a tech demo when it launched “Your Personal Podcast,” an AI-driven feature that generates audio summaries of the day’s news.
At its core, this doesn’t sound like a bad idea. Nicholas Negroponte suggested this in the 90s with the “Daily Me.” But at the Post, cracks appeared immediately. The AI mispronounced names, invented quotes, and editorialized. In one egregious example, AI announced a discussion on whether people with intellectual disabilities should be executed, stripping away the crucial context regarding a specific legal case.
According to internal documents obtained by Semafor, the product team knew exactly what they were releasing. During testing, between 68% and 84% of the AI-generated scripts failed to meet the newsroom’s own standards. In any other industry, a failure rate approaching 85% would trigger a recall, not a launch.
The Post is chasing a younger demographic that consumes audio, which is a valid goal. But serving them hallucinations doesn’t build a future audience; it alienates them. The Post needs to pull this tool, fix it, and apologize—not just for the errors, but for the decision to treat their subscribers as beta testers for a broken product.
Neville Hobson Extraordinary, truly. I was reading the NPR article you shared. It asks: “Will listeners embrace an AI news podcast?” The podcast is tailored to listeners based on what they’ve read in the Washington Post. That implies the likely listener is someone who spends a lot of time reading the Post, not a casual user.
It’s an intriguing step, but unfortunately, a misstep in terms of how they’ve dealt with it.
Shel Holtz Podcasting has become a staple for newspapers. The New York Times has The Daily and Hard Fork. Nothing is wrong with embracing podcasting. I just have a problem with the decision to launch it the way it was. The Washington Post is a storied institution—Katharine Graham, Ben Bradlee, Watergate, the Pentagon Papers. With this one decision, they have undermined that legacy.
Neville Hobson It symbolizes much of what is not right in the United States at the moment regarding freedom of speech and truth-telling. You mentioned Jeff Bezos owns the Post; where is the independence of journalists?
Shel Holtz We’re rapidly seeing this converted into state media, which is terrifying.
2. Martin Sorrell and the “Death of PR”
Neville Hobson Let’s talk about Martin Sorrell, the founder of WPP. On December 17th, in a debate on BBC Radio 4’s Today program, he declared the death of PR. Appearing with him was Sarah Waddington, the Chief Executive of the PRCA.
Sorrell made the blunt assertion that public relations is effectively dead and that the world has moved on to scale, reach, and speed—flooding the internet with content. Sarah Waddington pushed back firmly, anchoring PR in enduring purpose: helping organizations explain who they are and building trust.
The exchange was combustible, with Sorrell frequently talking over Waddington. Many felt Waddington was defending a way of thinking about communication that resists reduction to metrics alone.
Shel Holtz Every time Martin Sorrell opens his mouth, I roll my eyes. He once said WPP was more critical than human mortality. Advertising and public relations are not interchangeable. Advertising is about selling stuff; PR is about building relationships.
I always come back to the tuna boycott example. When StarKist addressed dolphin safety in their nets, PR agency Burson-Marsteller brought the parties to the table. The boycott organizers came out saying, “StarKist are the good guys.” Advertising could never have achieved that credibility.
Neville Hobson It feels like he was being provocative to generate headlines. But he seems to genuinely believe that scale, reach, and speed are what matter. If Sorrell thinks flooding the internet with detergent ads is the future, I think he’s crazy. I applaud Sarah Waddington for her calmness in the face of his bullying behavior.
Shel Holtz I challenge Sir Martin to find a client that will outsource their next existential crisis to WPP to handle with advertising. Let’s see how that goes.
3. The Future of Local News & Information Stewards
Shel Holtz The death of local news has been a consistent drumbeat. A new report from Northwestern University confirms news deserts have hit a record high. But a piece from the Nieman Journalism Lab argues the news hasn’t died; it just relocated to barbershops, church halls, and Facebook groups.
The Press Forward report suggests we look for “Information Stewards”—librarians, civic leaders, admins of neighborhood groups. If you’re a communicator, you can’t pitch a press release to a group chat, but you can provide clarity. Supply these stewards with fact sheets and FAQs. Trust has migrated from institutions to individuals.
Neville Hobson In the UK, local news is declining, though where I live in Somerset, there are three lively local papers. But generally, the commercial scale for local news is difficult. The idea of “Information Stewards” reminds me of the Epic 2015 flash video from years ago, which predicted a similar future.
Shel Holtz Local news is vital for accountability—school boards, zoning commissions. If no one reports on them, officials can do whatever they want. We need to reach these information stewards.
Dan York’s Tech Report:
Greetings, Shel and Neville, and all our listeners around the world. It’s Dan York coming at you from a snowy Shelburne, Vermont. I want to begin this final episode of 2025 by reflecting on some of the topics I’ve been talking about over these many episodes, and on some upcoming changes. A big one, of course, has been Mastodon and decentralized social media in general, and there have been some big changes happening in the past month.
Right around the time we were recording the November show, there was a change at the head of Mastodon. Mastodon is open-source software that has been around for about ten years. It was created by Eugen Rochko, its founder, and has been based in Germany. Over time the organization evolved: it tried to be a nonprofit, ended up a for-profit entity, and over the course of 2025 has been working to transfer to a full European nonprofit, most likely based in Belgium according to the latest plans; they’re going through that process. In the meantime, in late November 2025, Eugen announced that he will be stepping down as CEO and taking on a role as an advisor.
Now, this is critical, because anybody who has watched startups, whether companies or projects, knows there’s a point when the founder needs to step away and let another management team come in, run the organization, and grow it. I have seen too many projects, including ones I’ve led myself, where the founder (myself included) stayed on too long, and the project died. There are certainly cases where it has not, but there are other times when it needs to move from the founder to others. So huge props to Eugen and all the Mastodon folks for taking this step. There is now a new leadership team: a new executive director, a technical director, a community director, and a team of employees who are continuing to evolve Mastodon as one of the leading projects within the broader ActivityPub-based space we call the Fediverse. So look for more to happen.
There’s a greater evolution going on over the course of 2026, so cool things are happening. I’ll note that this year, many Mastodon servers played on the whole “Wrapped” theme, so you could get a “Wrapstodon” for 2025 that wrapped up your most popular posts.
It covered some of the things you did, your most-used hashtags, your archetype, all these different kinds of things; a little bit of fun in the spirit of all the various “Wrapped” things out there. But the Fediverse will, I think, see a lot of activity, and decentralized activity in general, because you’re seeing that through Mastodon and the other parts of the Fediverse, and you’re seeing it with Bluesky and some tremendous work happening within the AT Protocol. Tim Chambers, whose writing on open social media I’ve come to really enjoy, had a whole series of predictions.
I’ll have the link in the show notes. He included some he considered safe bets, like Bluesky crossing 60 million registered users in 2026. He thinks the overall ActivityPub fediverse outside of Threads will cross 15 million registered users, he had some ideas around Threads passing 500 million monthly active users, and there will be continued federation. Anyway, if you’re looking for quick takes, it’s a good read, with some interesting, fun stuff to think about as you watch where it will go.
Now, another story I’ve been following this whole year has been internet governance. That culminated this month with a meeting at the United Nations: the World Summit on the Information Society twenty-year review, shortened to WSIS+20. The good news coming out of all of that was that the governments of the world continued the path we’re on, where everybody can be involved in some fashion in shaping the future of the internet, what is called in policy circles the multistakeholder process.
Basically, it means everybody has the potential to be involved in some way. It’s how the internet has worked since its origins. But there were some governments that wanted to put a different spin on it, where only governments would be involved, and not businesses (such as many of those listening to this), universities, individual users, or anybody else like that.
So there were some good things that happened here. Something called the Internet Governance Forum, or IGF, has been made permanent rather than being renewed every ten years. The outcome also recognized the global network of national, regional, and youth IGFs happening all around the world. This is a venue, a way in which all of us listening can be involved in internet governance. So it’s a great move, a good step. What am I looking at in 2026? So much of it is going to be about AI in different forms. I’ll be watching that too, specifically agentic AI platforms and agents and the different pieces that are there.
There’s a good article by somebody at the Open Future Foundation on why Wikimedia needs a seat at the Agentic AI Foundation, pointing to the news in December that OpenAI, Anthropic, and Block announced the creation of the Agentic AI Foundation, with Google, Microsoft, AWS, Bloomberg, and Cloudflare joining; a lot of the commercial players are all doing this. The point of the article was that folks like Wikimedia and others need to be involved too. In general, I personally will continue to watch what’s happening at this agent level.
Agent-to-agent, because that’s so much, I think, of what we’re going to be seeing as we increasingly look at AI-driven tools. I’ll continue to talk about decentralized social media, Mastodon, and everything else, and I’ll continue to look at the internet and internet access.
You’ll hear me talk about low-Earth-orbit satellites, I’m sure, because we’re actually getting into a competitive situation where it’s more than just Starlink out there, and also about internet and information resilience. And I want to leave you with a pointer to a long read called “Landslide: A Ghost Story” from Erin Kissane; I’ll have a link in the notes. She writes a very long piece that starts out about earthquakes but gets into our information ecosystem: where we are, what’s out there, how jumbled it is.
It’s worth reading and thinking about, because really, the point is we need to think about how we tell stories, how we work with things, and how we build resilience into the information we receive in its different forms. I encourage you to read it, think about it, and think about what we will do in 2026. And with that, I wish you all a Happy New Year. I look forward to coming back at you in January. That’s all; you can find more of my audio and writing at Dan York. Bye for now, and back to you, Shel and Neville. Happy New Year!
5. AI Emptying Madison Avenue
Neville Hobson In a Wall Street Journal op-ed titled “AI is About to Empty Madison Avenue,” Rajiv Kohli of Columbia Business School argues that AI is quietly dismantling the agency model. Google, Meta, and Amazon are using AI to automate the advertising value chain.
While advertisers see efficiency, agencies see an existential threat. Madison Avenue isn’t being disrupted by better ideas, but by better systems. Kohli warns that unless things change, advertising may become a clear example of AI-driven creative hollowing out.
Shel Holtz I recently joined the advisory board for an AI certificate program at the University of San Francisco. The faculty stated they don’t believe AI will take jobs, which made me want to bang my head on the table. It already is.
Organizations need to strategize: what are the risks of outsourcing everything to AI? You can be efficient, but what do you lose? If you outsource everything, you’re going to see advertising overwhelmed with “slop.”
Neville Hobson The focus on speed and efficiency misses the important part: the people. We need to help educate leaders that AI should augment people, not replace them.
Shel Holtz In a capitalist society, leaders feel compelled to maximize ROI. If they can run a company with no employees and produce larger returns, they will. That’s why strategic analysis is vital to show where humans add value.
6. “Slop” is the Word of the Year
Shel Holtz Merriam-Webster has crowned “slop” as its Word of the Year for 2025. It defines it as digital content of low quality produced by AI. But a Scientific American article reminds us that every media revolution produces rubbish. The printing press produced libelous pamphlets; desktop publishing produced ransom-note newsletters.
The backlash isn’t a rejection of AI, but of low quality. To stand out in a sea of slop, your content needs to be exceptional.
Neville Hobson That Scientific American piece was great—calling Gutenberg the “ChatGPT of the 1450s.” It isn’t anti-AI; it’s about the sheer volume. If you automate production at scale, that’s flooding the internet, and much of it will be slop.
Shel Holtz You have to stay on top of the research. A study found people liked AI-generated ads more than human ones—until they were told it was AI. That shows an anti-AI bias, but also that the “human in the loop” matters for trust.
7. Detecting AI Writing
Neville Hobson There is a growing confidence that we can tell when something is written by AI. But in the Financial Times, Elaine Moore argues that most “AI tells”—like the use of dashes or words like “delve”—are just normal writing habits. Large Language Models sound human because they are trained on us.
However, Wikipedia has a field guide to spotting AI writing, looking for clusters of signals like vague abstractions. The debate is shifting from “Can we detect AI?” to “How much certainty do we really need?”
Shel Holtz If the writing meets our needs and is accurate, do I care if it was written by a human or a machine? Disclosure is going to be important for trust purposes.
Neville Hobson Trust is becoming ever more important. Finding a source you can trust—someone who verifies and doesn’t hoodwink you—is the key.
Shel Holtz I used Google Gemini to help find sources for my book, but I checked every single one. I saved time, but I kept the human in the loop.
Outro
Shel Holtz We hope you enjoy your Twixtmas. Please leave us a comment on LinkedIn, Facebook, Threads, or Blue Sky. You can email us at fircomments [at] gmail [dot] com or leave a voicemail on the FIR Podcast Network website.
Our next long-form episode will drop on Monday, January 26th. We will resume our short midweek episodes starting next week.
The post FIR #494: Is News’s Future Error-Riddled AI-Generated Podcasts, or “Information Stewards”? appeared first on FIR Podcast Network.

Dec 22, 2025 • 22min
FIR #493: How to (Unethically) Manufacture Significance and Influence
For somebody who posts on X or other social media platforms to become recognized by the media and other offline institutions as a significant, influential voice worth quoting, it usually takes patience and hard work to build an audience that respects and identifies with them. There is another way to achieve the same kind of reputation with far less work. According to a research report from the Network Contagion Research Institute, American political influencer Nick Fuentes opted for the second approach, a collection of tactics that made it appear like a huge number of people were amplifying his tweets within half an hour of posting them. While Fuentes wields his influence in the political realm, the tactics he employed are portable and available to people looking for the same quick solution in the business world. In this short midweek episode, we’ll break down the steps involved and the warning signs communicators should be on the alert for.
Links from this episode:
“America Last: How Fuentes’s Coordinated Raids and Foreign Fake Speech Inflate His Influence,” research report from the Network Contagion Research Institute
Eric Schwartzman’s LinkedIn post and analysis of the NCRI’s report
Raw Transcript:
Neville Hobson: Hi everybody and welcome to For Immediate Release. This is episode 493. I’m Neville Hobson.
Shel Holtz: And I’m Shel Holtz, and today I’m going to wade deep into America’s culture and political wars. I swear to you, I’m not doing this because of any political or social agenda on my part. What I’m going to share with you is not a social or political problem, it’s an influence problem. And in communications, influence and influencers have become top of mind.
We’re going to look at the rise of Nick Fuentes’s significance on the social and political stage. For listeners outside the US, you may not know who Fuentes is. He’s a US-based online political influencer and live stream personality who’s built a following around the “America First” ecosystem and has sought influence within right-of-center audiences, including by positioning himself in opposition to mainstream conservative organizations like Turning Point USA and encouraging supporters to disrupt their events. Tucker Carlson has had him on his show as a guest. President Donald Trump has hosted him at the White House for a dinner.
In a recent report that our friend Eric Schwartzman highlighted on LinkedIn—that’s how I found it—the Network Contagion Research Institute (NCRI) asserts that Fuentes is a fringe figure whose public profile rose to a level of significance by manipulating online systems. The NCRI, by the way, is an advocacy group focusing on hate groups, disinformation, misinformation, and speech across social media platforms. It’s been around since, I think, 2008. And they’ve taken their own fair share of criticism for bias, but this report looked pretty well researched, and there will be a link to it in the show notes.
The techniques that Fuentes used to rise to significance are, and this is the key here: If bad actors can inflate the perceived importance of a fringe political figure, the same mechanics can inflate the perceived importance of a product, a brand, a CEO, a labor dispute, or a crisis narrative.
I’ll share the details right after this.
In modern media ecosystems, visibility is often treated as evidence of significance. Of course, when the system can be tricked into manufacturing visibility, it can be tricked into manufacturing significance. Here’s the playbook. The report focuses heavily on what happens immediately after a post is published, specifically the first 30 minutes. That window matters because platforms like X use early engagement as a signal of relevance. If a post seems to be spreading fast, the algorithm acts like a town crier, showing it to more people.
The researchers compared 20 recent posts from several online figures. Their finding was that Fuentes’s posts regularly generated unusually high retweet velocity in the first 30 minutes, enough to outpace accounts with vastly larger follower bases. It outpaced the account of Elon Musk, for example.
The key detail here isn’t just the volume of retweets, it’s the timing. Rapid, concentrated engagement right after posting creates the illusion that the content is taking off, kicking it into recommendation streams. This is the same basic mechanic behind launch day boosting. You’ve seen this for people who have a new book out and they go out to friends and ask them to boost that new book the day it’s released. If you can create the appearance of immediate traction, you can trigger algorithm distribution that you didn’t earn.
In commerce, this shows up as engagement pods, coordinated employee advocacy swarms, and community groups that behave like a click farm. If your measurement system rewards velocity, someone can and will manufacture velocity.
So who’s responsible for those early retweet bursts? Across the 20 posts studied, 61% of Fuentes’s early tweets came from accounts that repeatedly retweeted multiple posts in the same window. In other words, this wasn’t a crowd. It was a repeatable mechanism, the same actors over and over, hitting the algorithm where it’s most sensitive. In business, you don’t need millions of genuine fans to create the signal of traction. You need a reliable, repeatable set of accounts that behave predictably at the right moment. This is why a relatively small number of coordinated actors can distort what public response appears to be, especially early in a narrative when journalists and internal leaders are trying to interpret what’s happening.
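The repeat-actor pattern described here is checkable if you can export engagement data. As a minimal illustrative sketch (the tuple-based data format and the 30-minute window are assumptions for the example; this is not code from the NCRI report), here is one way to measure what share of a figure’s early retweets come from accounts that appear in the early window of more than one post:

```python
from collections import Counter

WINDOW_S = 30 * 60  # the first-30-minutes window the report focuses on

def early_accounts(events, window_s=WINDOW_S):
    """Accounts that retweeted within the early window.
    events: list of (account_id, seconds_after_post) for one post."""
    return [acct for acct, t in events if t <= window_s]

def repeat_actor_share(posts):
    """Share of early retweets coming from accounts that show up in the
    early window of MORE THAN ONE post: a repeatable mechanism, not a crowd.
    posts: list of event lists, one per post."""
    windows = [early_accounts(ev) for ev in posts]
    # Count, per account, how many posts' early windows it appears in.
    appearances = Counter(acct for w in windows for acct in set(w))
    repeaters = {a for a, n in appearances.items() if n > 1}
    total = sum(len(w) for w in windows)
    hits = sum(1 for w in windows for a in w if a in repeaters)
    return hits / total if total else 0.0
```

A figure like the report’s 61% would come out of a pass like this; the interesting signal is a share well above what a genuinely diverse audience would produce.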
The report describes the amplification network as dominated by accounts that aren’t meaningfully identity-bearing. Among the repeat early retweeters, 92% were anonymous. Furthermore, many of these accounts were essentially single-purpose. They existed solely to boost specific messaging. Now, anonymity is a feature, not a bug in manufactured influence. In a corporate context, we see this as sock puppet commenters flooding a CEO’s LinkedIn post with applause or fake grassroots accounts inflating outrage against a policy change. If you’ve ever seen a comment section where the voices feel oddly similar and oddly committed, you’ve seen the symptom.
Perhaps the most operationally important finding involves outsourced capacity. Before a major inflection point in September, about half of the retweets on Fuentes’s most viral posts came from foreign, non-U.S. accounts. The report highlights concentrations in countries like India, Pakistan, Nigeria, Malaysia, and Indonesia. There’s no organic reason for these regions to be driving a U.S.-centric fringe political account. These geographies match known patterns associated with low-cost engagement farms.
If you’ve ever dealt with fake reviews or fake webinar attendees, you understand the market for outsourced attention. It’s snake oil. The same infrastructure used to inflate a political persona can inflate a brand narrative, especially when the goal is to trigger secondary effects like investor interest or the internal belief that everyone’s talking about this.
In the report, Fuentes isn’t presented as a passive beneficiary of an algorithm. The report states that he repeatedly issues direct instructions to followers: “Retweet this. Everybody retweet.” Turning amplification into a synchronized act. If you run employee advocacy programs or franchise networks, you’re already sitting on “raid capability.” The ethical version is mobilizing real stakeholders transparently. The unethical version is instructing coordinated networks to simulate stakeholder response specifically to game recommendation systems.
This is where communicators need to be brutally honest. The distance between campaign mobilization and manufactured consensus can be uncomfortably short.
Fuentes’s final move is the flywheel. Once you’ve manufactured signals that look like relevance, institutions treat those signals as real. The report argues that mainstream media coverage increased sharply after major news shocks, while the persistent manufactured engagement helped keep the subject elevated between those shocks. It also reports a 60% increase in high-status framing of the subject in mainstream articles after that inflection point.
This is classic social proof laundering. Once a narrative appears prominent on-platform, it becomes easier to place it off-platform: press mentions, analyst notes, investor chatter. At that point, people stop asking, “Is this real?” And start asking, “How big is this?”
For business communicators, here are three practical takeaways.
First, treat attention as an attack surface. If a narrative is unusually fast, unusually concentrated, or driven by accounts that don’t look like real stakeholders, assume you’re looking at influence operations.
Second, build signal hygiene into your intelligence process. If your team reports on social activity, incorporate basic credibility checks, like repeat actors, anonymity patterns, and geographic anomalies.
And third, audit your own incentives. If your organization celebrates reach metrics without interrogating provenance, you’re teaching everyone—agencies, vendors, and bad actors—that synthetic engagement is rewarded.
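Folded together, those hygiene checks could look something like the following sketch. The field names (`has_profile`, `country`, `prior_boosts`) and the thresholds are hypothetical placeholders for whatever your listening tool actually exports; they are not values taken from the report:

```python
def credibility_flags(retweeters,
                      anon_threshold=0.8,
                      foreign_threshold=0.5,
                      expected_regions=frozenset({"US"})):
    """Screen a batch of early amplifiers for basic signal hygiene.

    Each retweeter is a dict with illustrative (assumed) fields:
    {"id": ..., "has_profile": bool, "country": str, "prior_boosts": int}.
    """
    n = len(retweeters)
    if n == 0:
        return {"flag": False}
    # Anonymity pattern: accounts with no meaningful identity.
    anon = sum(1 for r in retweeters if not r["has_profile"]) / n
    # Geographic anomaly: engagement from outside the expected audience.
    foreign = sum(1 for r in retweeters
                  if r["country"] not in expected_regions) / n
    # Repeat actors: accounts that have boosted this source before.
    repeaters = sum(1 for r in retweeters if r["prior_boosts"] > 1) / n
    return {
        "anonymous_share": anon,
        "out_of_region_share": foreign,
        "repeat_actor_share": repeaters,
        "flag": anon > anon_threshold or foreign > foreign_threshold,
    }
```

Even a crude screen like this, run before a metric reaches a dashboard, forces the provenance question the takeaways raise: is this engagement from real stakeholders, or from a mechanism?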
This isn’t just a problem that’s “out there.” The PR and marketing industries have plenty of muscle memory around manufacturing perception. The difference is whether we keep that muscle under ethical control or let the algorithm decide what we’re willing to do. Just because you can manufacture influence doesn’t mean you should.
Neville Hobson: That’s quite a story, Shel. I’m wondering how many people in our profession truly understand how this actually works. Your call to action, as it were, was to pay attention to this and pay attention to that. But I think people need to understand why and the deeper picture surrounding it.
So, for instance, the report—some of which you summarized in your narrative—struck me. And indeed, from the summary I asked ChatGPT to create (that saved me reading the whole damn thing), it was very helpful. According to the report, the researchers said that Fuentes consistently generates extraordinary engagement in the first 30 minutes after posting on X. Early retweet velocity outperforms accounts with 10 to 100 times more followers than he’s got. You mentioned Elon Musk; he’s one of them. When normalized by follower count, his engagement is orders of magnitude higher than comparative political influencers.
Why does this matter? This to me is significant to try and get a handle on this. Platform algorithms heavily weight early velocity as a sign of relevance. So once triggered, content is promoted regardless of whether engagement is authentic. Speed, not scale, is a manipulation lever. This is a critical insight for communicators. Algorithms cannot distinguish motivation, only momentum.
So when people talk about—as they do, and I remember using this 10 years ago as a sign that something is working—”Look at how this thing’s taken off!” This is seriously significant: understanding how this works.
Another part of that is, as you mentioned, the foreign-origin engagement—the synthetic catalyst, if you like. Half the retweets on Fuentes’s most viral posts came from non-US accounts, and you ran through a list of countries that are the prime originators of that volume. The report says there is no plausible ideological or cultural reason for these regions to be organically amplifying a US-centric white nationalist figure. Makes sense, doesn’t it? So why does that matter? Well, these geographies closely match known low-cost engagement farm infrastructures. So foreign engagement appears to act as a spark, creating the illusion of virality.
And it uses phrases that most people won’t know about—I’m only just getting familiar with it myself—like classic “signal laundering.” You’ve heard of money laundering, right? But now signal laundering. It highlights this coordinated amplification, which is not spontaneous engagement. It’s not enthusiasm spreading naturally, it’s coordination masquerading as popularity.
So I think all of us, as communicators trying to grasp something like this to understand the significance of it, are going to have to spend a little extra time understanding how it all works.
There’s one element that came out that I thought, “Wow, yes, you see this.” I can think of two people I follow on LinkedIn who do this. Illustrating Fuentes in this example is not a passive beneficiary; he actively runs it. The evidence includes hundreds of documented instances where he issues real-time commands on live streams like “Retweet this,” “Everyone retweet,” “Quote tweet it now.” I see people doing that even on LinkedIn. There’s one individual I’m not going to mention—because it wouldn’t be right to do that in this way—who has got thousands and thousands of followers. I was looking back through some of his recent posts and they are full of stuff like that. His email newsletter is nothing but that, actually. These directives align precisely with the early velocity spikes observed in the data, according to the report.
Interestingly, X’s own policies say that this behavior qualifies as coordinated inauthentic activity, platform manipulation, and spam amplification, yet the activity persists on X. So to me, the question for everyone listening is: surely you cannot trust a platform like X with your brand messaging, right? So why are you still there in that case?
It means loads more we could dissect in that context, but I think it’s necessary for people to truly understand how this works before you can understand what to do about it.
Shel Holtz: Yeah, and you asked how many people in our business might actually understand this. I think if you look at a department like mine where there’s two of us and we’re mostly focused on internal communications, this doesn’t hit our radar. But if you’re a marketing agency and you are tasked with elevating a brand, you got to figure that if a 25-year-old white nationalist fringe character on the social-political scene can figure this out, the people running digital media for a mid-sized agency can easily figure this out.
I suspect there are probably YouTube videos telling you how to do this. You sign up with one of those farms in one of those countries that has the instruction to amplify every time you tweet, and you’re off to the races. And as you mentioned from the report, the algorithm can’t really tell the difference.
Now, this is something that I think is in large part on the platforms—whether it’s X or any of the others—to improve their processes so they can identify and block this sort of thing. The idea that you can start to get media coverage, that people will start including you in their reporting because you appear significant as a result of this blatant manipulation—when you really wield no influence, when the people retweeting you have accounts that have been set up just to retweet you—that’s on them, I think. But they’re clearly not doing anything about it. Musk wouldn’t do anything about it. I wouldn’t expect him to. Zuckerberg’s not going to do anything about it. I wouldn’t expect him to. I wish he would, but knowing what I know about these people, I wouldn’t expect them to spend time and money becoming more ethical. It’s just not in their DNA.
So it’s on us. And where I can see this being used in the business context most blatantly is by advocacy groups when an organization is having a crisis. Because who speaks first is the one who gets the traction. Everything else is reacting and responding to that. And if you could get that kind of momentum, that kind of velocity, that kind of visibility for your point of view in opposition to the perspective of the organization experiencing the crisis, then you’re going to win in that crisis. It’s going to be very difficult for the organization, even employing the best digital crisis communication practices, to overcome that kind of a process.
So this is why I think we need to be aware of this. From my perspective, I have my own personal views about Fuentes and the fact that he’s doing this, but that’s not what this is about. This is about the fact that if Fuentes can do it, your opposition can. It might be, let’s say, a union if you’re a non-union company and they’re trying to get a foot in the door. It could be a competitor trying to make you look bad and elevate their own organization as an investment or as a provider of goods or services. All of them can take advantage of this process because it’s possible.
And frankly, once you dig into it, while it seems complicated, it really isn’t. It’s just subscribing to these services, getting everything set up, and then you just start tweeting or posting on LinkedIn or wherever it is, and everything just follows.
Neville Hobson: Yeah, I mentioned LinkedIn the way I did, but X is the serious negative platform, right? I would imagine, though, that most other platforms used for business purposes are subject to this manipulation. It makes you think you need to know more about the places where you spend time and share information about your business.
The report goes into—or rather the interpretation I’ve made certainly—implications for communicators and organizations, or the key takeaways, I suppose, to summarize it all. I mean, you’re right, the report is long, and it would benefit from a simplified executive summary. Maybe what we’ve prepared might help people get a better handle on what to look at.
But some of the points that summarize it are interesting: “Algorithms amplify speed, not authenticity.” That’s what most people believe—and I’ve been guilty of this too—that speed is the important thing: the velocity of your message getting out there and “going viral,” as people still say, is what it’s all about. Absolutely not. And in this age of artificial intelligence, I’m arguing very strongly that it is not about speed at all. It’s about being in the right place at the right time with the right message, not necessarily being the first or the fastest with that message.
Another point: “Anonymous and foreign networks can manufacture legitimacy.” How do you figure that out? Interestingly, and I agree with this very much so, “Mainstream media mistakes visibility for importance.” Absolutely true in my view. So all these tactics are portable.
And the final point, I suppose—there are probably 20 more, but this is it for now—is that the real issue is not who used the playbook. It’s how easy the playbook is to use. I think that’s absolutely right. And I think many people would succumb to increased pressure to play the game, because that’s what everyone else seems to be doing. But it also throws up a bigger concern: it becomes harder to measure engagement if what you’re measuring is suspect.
So that adds some big questions on how are you going to proceed from this point on. So:
What signals do you treat as evidence of relevance?
How easily could those signals be fabricated?
Are we rewarding momentum over substance? You need to know the difference.
And where does responsibility sit? Platform, media, or practitioner? Or all of the above?
Those are four questions—there’s probably lots more—but that might not be a bad starting point.
Shel Holtz: I don’t think it would. And I think the more practitioners who become aware of this, those that abide by an ethical code, need to raise their voices because I think the more pressure there is on the platforms, the more they will look to change the infrastructure to address this. If nobody complains or if it’s just people on the fringe like us, then nothing’s going to change.
And you’re right, Fuentes started all of this before the AI revolution. And AI is just going to make this worse with the ability to create those posts that get amplified because you have manipulated the system the way Fuentes has. So I’d like to see people kind of raise their voices. Maybe professional associations need to start advocating on behalf of fixing this. You know, AI has led a lot of people to talk about authenticity more than we already were, and we already were a lot. And if authenticity matters, then I really do think we need to raise our voices and demand change from the platforms so that people can’t do this.
Neville Hobson: I agree.
Shel Holtz: And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #493: How to (Unethically) Manufacture Significance and Influence appeared first on FIR Podcast Network.

Dec 19, 2025 • 1h 1min
Circle of Fellows #123: The Future of Communication — 2026 and Beyond
In a dynamic discussion, Adrian Cropley, Bonnie Caver, Mary Hills, and Zora Artis delve into the future of communication. They highlight the transformation driven by AI and the necessity for authenticity in the evolving workplace. The panel explores the vital role of communicators as trusted advisors, addressing complexity and ethical practices. They stress the importance of interdisciplinary skills, active listening, and defining communication value in business terms. Predictions about reputation management and the impact of technology provide exciting insights for aspiring communicators.

Dec 15, 2025 • 19min
FIR #492: The Authenticity Divide in Omnicom Layoff Communication
In this short midweek episode, Shel and Neville dissect the communication fallout from the $13.5 billion Omnicom-IPG merger and the controversial pre-holiday layoff of 4,000 employees. Among the themes they discuss: the stark contrast between the polished corporate narrative aimed at investors and the raw, real-time reality shared by staff on LinkedIn and Reddit, illustrating how organizations have lost control of the narrative. Against the backdrop of a corporate surge in hiring “storytellers,” Neville and Shel discuss the irony of failing to empower the workforce — the brand’s most authentic narrators — and analyze the long-term reputational damage caused by tone-deaf leadership during a crisis.
Links from this episode:
Another NOT SO HOT TAKE: Omnicom is a communications company. They didn’t forget how to communicate. They chose who to communicate to.
Omnicom layoffs—how a communications company created its own crisis
The Omnicom-IPG merger was confirmed this week. 4,000 jobs will be cut by Christmas. The announcement came the week after Thanksgiving. I’ve been here before.
Inside Omnicom’s Town Hall: Adamski confronts criticism, outlines new power structure after IPG acquisition
Companies Are Desperately Seeking ‘Storytellers’
The next monthly, long-form episode of FIR will drop on Monday, December 29.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel Holtz Hi everybody and welcome to episode number 492 of For Immediate Release. I’m Shel Holtz.
Neville Hobson And I’m Neville Hobson. In this episode, we’re going to talk about something that’s been playing out very publicly over the past few weeks in our own industry, i.e. communication. It’s about Omnicom, its merger with IPG, and the layoffs that followed. Following confirmation of the $13.5 billion merger, the company announced that around 4,000 roles would be cut, with many of those job losses happening before Christmas.
On the face of it, this is not unusual. Mergers of this scale inevitably create overlap, and redundancies are part of that reality. What made this different was not simply the decision, but how the story unfolded, and where.
On one level, there was the official corporate narrative. Omnicom’s public messaging focused on growth, integration, and future capability. It was language clearly written with investors, analysts, and the financial press in mind—not to mention clients. Polished, strategic, and familiar to anyone who has worked around holding companies. At the same time, a very different narrative was emerging elsewhere, particularly on LinkedIn and Reddit, driven by people inside the organization—people who had lost their jobs and people watching colleagues lose theirs.
That contrast became the focus of an Ad Age opinion piece by Elizabeth Rosenberg, a communications advisor who had handled large-scale change and layoffs herself. In the piece—which, by the way, Ad Age unlocked so it’s openly available—and later in her own LinkedIn posts, Rosenberg described watching two stories unfold in real time. One told to shareholders and external stakeholders, the other taking shape in comment threads written by the people most directly affected. Her point was not that Omnicom failed to communicate, but that it chose who to communicate to.
That observation resonated widely inside the industry. Rosenberg’s LinkedIn post made clear that she was less interested in being provocative than in naming something that many people were already seeing and feeling. She also noted the response she received privately—messages describing her comments as brave—and questioned what it says about our profession if plain speaking about human impact is now treated as courage.
As that conversation gathered momentum, another LinkedIn post took the discussion in a slightly different direction. Stephanie Brown, a marketing career coach, wrote about the timing of the layoffs. Her post was grounded in personal experience; she describes being laid off herself in December 2013 and what it meant to lose a job during a period associated with family, financial pressure, and emotional strain.
She acknowledged that layoffs are part of corporate life but argued that timing is a choice and that announcing thousands of job losses immediately after Thanksgiving, with cuts landing for Christmas, intensified the impact. That post triggered a large and emotionally charged response—thousands of reactions, hundreds of comments. Some people echoed Brown’s argument that holiday season layoffs carry an additional human cost. Others pushed back, arguing that earlier notice can be preferable to delayed disclosure even if the timing is painful.
What stood out was not consensus, but the depth of feeling and the willingness of people to share lived experience publicly. Across both posts and in the comment threads beneath them, a broader picture began to emerge. Former Omnicom and IPG employees described how they received the news. Industry veterans expressed sadness rather than surprise. Practitioners questioned what this says about internal credibility, culture, and leadership. Others pointed out that holding company economics have long prioritized shareholders and that this moment simply made that reality visible.
What’s notable here is that LinkedIn wasn’t just a reaction channel. It became the place where the story itself evolved. The press release was no longer the primary narrative. The commentary, the responses, and the shared experiences became part of how the situation was understood. So that’s the landscape we’re stepping into today: A major communication holding company announcing significant layoffs via a formal, investor-focused message, and a parallel, highly visible conversation driven by employees, former employees, and industry peers about audience, timing, and impact.
Rather than rushing to judgment, I think this is worth exploring carefully, especially for people whose job is communication, reputation, and trust. So, Shel, what would you say to all of this?
Shel Holtz I would say, first of all, that for an organization that purports to be a communication organization, their failure to recognize that they employ thousands of communicators who know how to use publicly accessible channels is a massive failure in communication planning. It should have been anticipated. But the story is dripping with irony, Neville. In light of an article the Wall Street Journal published last week, the article pointed to an entirely different approach that companies are taking than the one Omnicom defaulted to.
While Omnicom is watching its narrative get dismantled by its own employees on Reddit, the Wall Street Journal just reported that the hottest job in corporate America is—are you ready for this?—“storyteller.” Listings for jobs with storyteller in the title have doubled on LinkedIn in the past year. Executives used the word “storytelling” 469 times on earnings calls through mid-December.
Companies like Microsoft, Vanta, and USAA aren’t just hiring communicators anymore; they’re hunting for directors of storytelling and heads of narrative. Now, on one level, you can see why they’re doing this. The Journal points out that print newspaper circulation has dropped 70% since 2005. The army of journalists we used to rely on to tell our stories has evaporated. If companies want their news covered, they realize they have to become the media themselves. That’s what Tom Foremski said so many years ago: Every company is a media company.
But what this really means is that their traditional gatekeepers are gone. Listening to what’s happening with Omnicom, you have to wonder if these companies actually understand what storytelling means in 2025. We’re seeing a collision of two worlds here. In one world, you have the C-suite still believing they can control the narrative by hiring better writers. They think if they can just recruit a customer storytelling manager—that’s what Google is doing—or a former journalist to run corporate editorial—that’s what Chime is doing—they can fill the void. They think they can craft a sanitized, strategic message for investors and that will be the story of record.
Then you have the real world, Neville; it’s the one you just described. While Omnicom was probably busy polishing its official investor-focused story, the actual story was being written in real time on Reddit and LinkedIn by the people living through the chaos. These employees didn’t need a head of storytelling. They didn’t need a corporate newsroom. They had the truth. They had a platform.
This is exactly the loss of control we’ve been warning about for how many years. The Journal quotes a communication CEO who says leaders are finally realizing that brands that are winning right now are the ones that are most authentic and human. Yeah, he’s absolutely right. But here’s the problem: You can’t hire authenticity. If your new director of storytelling is busy writing a glossy piece about innovation while your employees are on social forums describing a culture of fear and disposal, you’ve lost the plot. The story isn’t what you publish on your corporate blog. The story is what your people say it is.
The Journal notes that a USAA storyteller might work some real experiences into an executive speech. Yeah, that’s fine. It’s also table stakes. If Omnicom or any of these companies rushing to hire storytellers want to tell a better story, they don’t just need to hire better writers. They need to give their employees a better story to tell. That’s the idea behind employee advocacy, after all, isn’t it? Because if the story you pay someone to write conflicts with the story your employees are living, the employees are going to win every single time. And as we’re seeing with Omnicom, they’re going to do it on their own channels and they’re going to do it without anybody’s approval.
Neville Hobson Yeah, one of the ironies that came across in the story, according to both of the women I quoted from the LinkedIn posts, is that Omnicom and IPG have spent decades advising clients on authentic communication, yet failed to apply that themselves. Rosenberg highlights comments from laid-off staff describing abrupt, impersonal Zoom calls, minimal explanation of rationale or future direction, and leadership absence at critical moments. These voices carried more weight than any press release because employees are the brand’s most credible storytellers.
Switch over to the Town Hall in early December, which Omnicom hosted—the first global company-wide Town Hall since the merger, which was actually completed at the end of November. The behavior of the CEO led me to think, just reading this: is he tone-deaf, or does he just not care?
One quote in Storyboard18 says: “Opening the session, Florian Adamski, the CEO of Omnicom Media, reportedly addressed intense industry speculation surrounding the merger and restructuring. He criticized the tone of press and social media commentary, describing detractors as ‘haters’ and stressed that decisions have been taken after considerable deliberation, urging staff to stay patient as transitions rolled out.”
It goes on elsewhere to repeat that call from the leadership of Omnicom to be patient, everyone, it’s all going to be fine. But without any communication explaining how—or worse, even addressing the detail of what people have been saying about this. Is that tone-deaf or what?
Shel Holtz It is seriously tone-deaf. I remember years ago—this was at a Ragan conference in Chicago—a CEO was speaking. I think he was the CEO of Avon. He made the point that he thinks the minute a CEO is installed in that role and sits in the chair, there is a “stupid ray” aimed at them that affects their brains and makes them forget who employees are.
He made a point at least once a month of visiting frontline employees. It could be at a manufacturing facility where they were filling bottles, but he talked to them to remind himself that these are real people, that they have real lives, and that they are smarter than you tend to give them credit for when you don’t interact with them. You’re the CEO, you’re part of the executive team, and you think those are the “little people” down there doing all the work, not smart enough to absorb bad news.
In speaking to them, he found that they were scout masters, they helped their spouses run businesses, they were the president of the local Kiwanis club. They are smart, they can handle bad news, and they can understand things like business plans and corporate strategy. I think in this case, the Omnicom CEO obviously has not moved himself out of the path of that “stupid ray,” because his assessment of employees and the role they could play in this was seriously misguided.
Neville Hobson Yeah, your mention of that phrase “the little people” reminded me of that hotel owner in New York who went to jail for not paying taxes because she said “only the little people pay taxes.”
Shel Holtz That was Leona Helmsley.
Neville Hobson That’s it. So, one thing I also thought when I was thinking about this story: The optics are bad, but this isn’t about the optics. It’s about trust.
To me, here’s what happened: 4,000 people are losing their jobs right before Christmas. It’s going to be extremely painful for many of them. They feel angry. The deeper risk is the long-term erosion of trust in Omnicom. Employees disengage or leave faster, leadership messages lose credibility with those who remain, organizational resilience weakens, and clients notice the inconsistency between the advice given and the behavior shown. This gap is damaging.
The other thing to mention—and it really confirms the point you made earlier—is that in a world where every employee has a public platform like this, organizations do not control the narrative. That will be obvious to you and me, but this illustrates it quite clearly. The story that endures is how people remember being treated when change was unavoidable.
You can’t actually predict what effects that is going to have on Omnicom. It may well be that in this age of polarization and utter cynicism, no one will care about this when they get hired and go work for Omnicom. But this is a firm that I wouldn’t like to work for based on this.
I started my working career in advertising at J. Walter Thompson back in the late 70s. Omnicom has a storied history in its current form, with the legacy brands they keep talking about in the press releases that are all being retired. Doyle Dane Bernbach, BBDO—some of these firms were around when I was at JWT all those years ago. It reminds me that nothing is permanent. The gloss in advertising is often just a veneer. I think they will not gain any credit for this, and the CEO’s reaction, just according to that town hall write-up, was pretty appalling.
Shel Holtz It’s just terrible. As we know, because we report on it every year, employees are still the most trusted source from a company according to the Edelman Trust Barometer. When you have this many employees out talking about what happened to them, telling their stories authentically, that’s what people are going to remember. They’re not going to remember the financial forecast that Omnicom has put forward.
Somebody needs to counsel this guy. I read somewhere that even for the layoff notification he was supposed to participate in, they said he couldn’t because he was having “technical difficulties.” I mean, come on, really? You’re not even going to get that personal message of regret from the leader of the organization?
We’re in a period right now where people are struggling to find jobs in communication. If Omnicom opens some jobs, people will take those jobs because it’s hard to find one right now. But if that pendulum swings and it becomes a seller’s market rather than a buyer’s market again, I can’t imagine a lot of communicators who are going to want to work there. They may find themselves hiring a more mediocre workforce because the best of the best are going to say, “No, I’m really good, the world knows I’m good, I can work anywhere, and I’m not going to go work for those jerks.”
Neville Hobson I think it’s a good point. Another thing to mention is that I was surprised to see the comments on Reddit. There are hundreds, if not thousands, and in a way I wasn’t expecting. I expected a lot of ranting, a lot of ugliness, and maybe trolling. I didn’t see much of that. I saw what I would describe as sheer sadness from many people, and calm acceptance of the awfulness of it all from those who’ve been fired. The two LinkedIn posts I discussed are very much worth looking at, along with the comments.
Layoffs are inevitable, and indeed in the case of this acquisition, they were inevitable. But the communication failure was not inevitable if they had handled it differently. Employees now shape the public narrative in real time. Trust, once lost, quickly becomes an external issue, which is what we’re seeing playing out still. Communication principles apply most when it’s hardest to use them, like this situation, and I think they failed the test totally.
Shel Holtz Yeah, I’ll tell you what, we just recently completed an acquisition here where I work, and in our little two-person communication team in our small billion-and-a-half-dollar company, the communication was far superior to what we see coming out of this behemoth of a communication organization. It’s pathetic.
This is what Zuckerberg always said when he got caught doing something bad: “We’ll have to do better.” He never does, and I doubt that Omnicom will either based on this behavior, but they need to do better. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #492: The Authenticity Divide in Omnicom Layoff Communication appeared first on FIR Podcast Network.

Dec 15, 2025 • 23min
ALP 291: Embracing innovation to survive and thrive in 2026
In this episode, Chip and Gini discuss the importance of strategic planning for 2026. As they near the end of 2025, they emphasize the need for agencies to set themselves apart and adapt to the evolving landscape, particularly through the effective use of AI.
Despite ongoing economic challenges, they highlight the potential for AI to enhance both efficiency and strategic thinking. Chip and Gini also stress the importance of refining the ideal client profile and taking calculated risks. They share their personal experiences with using AI to assist in planning and decision-making processes, pointing out both the benefits and limitations of current AI technology. [read the transcript]
The post ALP 291: Embracing innovation to survive and thrive in 2026 appeared first on FIR Podcast Network.

Dec 10, 2025 • 58min
AI and the Writing Profession with Josh Bernoff
Josh Bernoff has just completed the largest survey yet of writers and AI – nearly 1,500 respondents across journalism, communication, publishing, and fiction.
We interviewed Josh for this podcast in early December 2025. What emerges from both the data and our conversation is not a single, simple story, but a deep divide.
Writers who actively use AI increasingly see it as a powerful productivity tool. They research faster, brainstorm more effectively, build outlines more quickly, and free themselves up to focus on the work only humans can do well – judgement, originality, voice, and storytelling. The most advanced users report not only higher output, but improvements in quality and, in many cases, higher income.
Non-users experience something very different.
For many non-users, AI feels unethical, environmentally harmful, creatively hollow, and a direct threat to their livelihoods. The emotional language used by some respondents in Josh’s survey reflects just how personal and existential these fears have become.
And yet, across both camps, there is striking agreement on key risks. Writers on all sides are concerned about hallucinations and factual errors, copyright and training data, and the growing volume of bland, generic “AI slop” that now floods digital channels.
In our conversation, Josh argues that the real story is not one of wholesale replacement, but of re-sorting. AI is not eliminating writers outright. It is separating those who adapt from those who resist – and in the process reshaping what it now means to be a trusted communicator, editor, and storyteller.
Key Highlights
Why hands-on AI users report higher productivity and quality, while non-users feel an existential threat
How AI is now embedded in research, brainstorming, outlining, and verification – not just text generation
Why PR and communications teams are adopting faster than journalists
What the rise of “AI slop” means for trust, originality, and attention
Why the future of writing is not replacement – but re-sorting
About our Conversation Partner
Josh Bernoff is an expert on business books and how they can propel thinkers to prominence. Books he has written or collaborated on have generated over $20 million for their authors.
More than 50 authors have endorsed Josh’s Build a Better Business Book: How to Plan, Write, and Promote a Book That Matters, a comprehensive guide for business authors. His other books include Writing Without Bullshit: Boost Your Career by Saying What You Mean and the Business Week bestseller Groundswell: Winning in a World Transformed by Social Technologies. He has contributed to 50 nonfiction book projects.
Josh’s mathematical and statistical background includes three years of study in the Ph.D. program in mathematics at MIT. As a Senior Vice President at Forrester Research, he created Technographics, a consumer survey methodology, which is still in use more than 20 years later. Josh has advised, consulted on, and written about more than 20 large-scale consumer surveys.
Josh writes and posts daily at Bernoff.com, a blog that has attracted more than 4 million views. He lives in Portland, Maine, with his wife, an artist.
Follow Josh on LinkedIn: https://www.linkedin.com/in/joshbernoff/
Relevant Links
https://bernoff.com/
https://bernoff.com/blog/ai-writer-survey-results-analyzing-royalties-neuroscientific-sneakers-newsletter-5-november-2025
https://www.publishersweekly.com/pw/by-topic/digital/copyright/article/99019-new-report-examines-writers-attitudes-toward-ai.html
https://gothamghostwriters.com/AI-writer
Audio Transcript
Shel Holtz
Hi everybody, and welcome to a For Immediate Release interview. I’m Shel Holtz.
Neville Hobson
And I’m Neville Hobson.
Shel Holtz
And we are here today with Josh Bernoff. I’ve known Josh since the early SNCR days. Josh is a prolific author, professional writer, mostly of business material. But Josh, I’m gonna ask you to share some background on yourself.
Josh Bernoff
Okay, thanks. What people need to know about me: I spent four years in the startup business and 20 years as an analyst at Forrester Research. Since that time, which was in 2015, I have been focused almost exclusively on the needs of authors, professional business authors. So I work with them as a coach, writer, ghostwriter, and editor, and basically anything they need to do to get business books published.
The other thing that’s sort of relevant in this case is that while I was at Forrester, I originated their survey methodology, which is called Technographics. And I have a statistics background, a math background, so fielding surveys and analysing them and writing reports about them is a very comfortable and familiar place for me to be. So when the opportunity arose to write about a survey of authors and AI, I said, all right, I’m in, let’s do this.
Shel Holtz
And you’ve also published your own books. I’ve read your most recent one, Build a Better Business Book.
Josh Bernoff
Mm-hmm, yes. So this is like, the host has to prod you to promote your own stuff. Yes. So, my two most recent books: I wrote a book called Writing Without Bullshit, which is basically a manifesto for people in corporations to write better, and I wrote Build a Better Business Book, which you talked about, which is a complete manual for everything you need to do to conceive, write, get published, and promote a business book. Yeah, so they’re both available online where your audience can find them.
Shel Holtz
Wherever books are sold. So we’re here today, Josh, to talk about that survey of writers that you conducted, asking them about their use of AI. What motivated you to undertake this survey in the first place?
Josh Bernoff
Well, I’ll just go back a tiny little bit. About two years ago, Dan Gerstein, who is the CEO of Gotham Ghostwriters and a really fantastically interesting guy, reached out to me because he knew my background of doing statistics and said, let’s do a survey of the ROI of business books: get business authors to talk about what they went through to create their business books and whether they made a profit from all the things that followed on from that.
So at the conclusion of that project (people can still access that information at authorroi.com), it was clear that we could do a really good job together. So when he came to me and said, let’s do a survey about authors and AI, it’s a topic I’ve been researching a lot, talking to many authors about how they use it, and I said, all right, yeah, let’s actually get a definitive result here. And we were really pleased that the survey basically went viral.
We got almost 1,500 responses, way more than we did for the business author survey, because there’s a lot more writers than authors in the world. And because we got such a large response, it was possible to slice that so I can answer questions like how do technical writers feel about AI or is this different between men and women or older or younger people. And so that enabled us to do a really robust survey which people can download if they want. It’s at gothamghostwriters.com/AI-writer, available free for anyone who wants to see it.
Shel Holtz
And we’ll have that link in the show notes as well.
Josh Bernoff
Okay, great.
Neville Hobson
It’s a massive piece of work you did, Josh. I kind of went through the PDF quite closely because it’s a topic that interests me quite a bit. And I was really quite intrigued by many of the findings that it surfaced. But I have a fundamental question right at the very beginning, because I’m a writer myself. I encountered this phrase throughout: “professional writer.” I’m not a professional writer, but I’m a writer.
And I know a lot of communicators who would say, yeah, I’m a professional writer. I don’t think it fits the definition you’re working to. So can you actually succinctly say what is a professional writer as opposed to any other kind of writer that communicators might say they are? What’s the difference?
Josh Bernoff
Yeah, there’s less there than meets the eye, and I will describe why.
So, we fielded this survey, and we basically said if you are a writer, you can answer this survey, and we got help from all sorts of people who were willing to share it within their communities. So over 2,000 people responded. But of course, you have to disqualify people if they’re not really a writer, and the way we defined that is, we asked: do you spend at least 10 hours a week on writing and editing? And for somebody who didn’t, I’m like, okay, you’re not really a writer if you don’t spend at least 10 hours a week on it.
And we also looked at how people made their living. So let’s just say you’re a product manager. You’re probably doing a lot of writing, but you wouldn’t describe yourself as a professional writer. So part of what we did was to have people answer questions about what kind of writer they are.
And we had the main categories, and we captured almost everybody in them, you know: marketing writers, nonfiction authors, ghostwriters, PR writers, and so on. And although we had not intended to do so, we got almost 300 responses from fiction authors. And we were like, okay, what are we going to do here? Because these people are very different from the people who are writing in a business context or the nonfiction authors, but I don’t want to invalidate their experience.
So we basically divided up the survey, and we said, most of the responses are from people who are writing things that are intended to be true, and a small group is from people who are intentionally lying, because they’re fiction writers. So then we had an ongoing discussion about what to call the people who write things that are intended to be true. And Dan Gerstein and I eventually agreed to call them professional writers, which is not a dig at the professional fiction authors; it’s just a catchall for people who are making their living as writers and writing nonfiction.
Shel Holtz
Josh, you described in the survey report a deep attitudinal divide where users see productivity and non-users see what you called a sociopathic plagiarism machine.
Josh Bernoff
Thanks. Now, now, wait a minute. I didn’t call it that. One of the people who took the survey called it that. Yes, that was a direct quote. I mean, I just want to comment here that in the survey business, we call responses to open-ended questions verbatims, right? So these are the actual text responses. And because we surveyed writers, these are the best verbatims I’ve ever seen. This is extremely literate.
Shel Holtz
OK, that was, that was a response. Got it. Well, yeah.
Josh Bernoff
A collection of people expressing their opinion, and the sociopathic plagiarism machine came from one of those folks. Yes.
Shel Holtz
I did like that a lot. But for somebody like me, a communications director managing a team, how do you bridge that gap when half the team might be ethically opposed to the tools that the other half is enthusiastically using every day?
Josh Bernoff
You just tell the other people to go to hell. No, I’m kidding! But it’s true: one of the most notable findings of the survey was that people who do not use AI are likely to have negative attitudes about it. So it’s not just like, you know, well, I don’t happen to drink alcohol, but it’s fine with me. No, these people are saying, this is bad for the environment, it’s an evil product. There were a lot of interesting verbatims in the survey from people like that. 61% of the professional writers said that they use AI. So it’s a minority of people who are not using it, and an even smaller group who are opposed to it. But they are fervently opposed to it. The people who do use it are generally getting really useful things done. A majority say that it’s making them more productive. And the people who are most advanced are doing all sorts of things with it.
By the way, this is really important to note. The thing that everyone’s sort of morally up in arms about, which is people generating text that’s intended to be read using AI, is actually quite rare. Only 7% did that, and only 1% did it daily. So most people are doing research, or they’re using it as a thesaurus, or using it to analyse material that they find and are citing as their own background, or something like that. But to come directly at your question, it is important to acknowledge this divide in any writing organisation.
And I think that the people who are using AI need to understand that there are some serious objections and they need to address that. The people who are not using it, I think, need to understand that perhaps they should be trying this out just so that they’re not operating from a position of ignorance about what the thing can do.
And I think most importantly, the big companies that are creating AI tools need to be a lot more serious about compensating the folks who create the writing they’re trained on. Because, putting the sociopathic plagiarism machine aside, it’s pretty bothersome when you find out that the thing has absorbed your book and is giving people advice based on it, and you got no compensation for that.
Shel Holtz
I just want to follow up on this question real quickly. Were you able to quantify among the people who don’t use it and object to it the reasons? I mean, you listed a couple, but I’m wondering if there’s any data around the percentage that are concerned about the environment, the percentage that, I mean, the one that I keep reading in LinkedIn posts is it has no human experience or empathy, which I don’t understand why that’s a requirement for say earnings releases or welcome to our new sales VP, but nevertheless.
Josh Bernoff
Yeah, I was going to say that describes a bunch of human writers too. They don’t seem to have any empathy. So one of the questions that we asked is, how concerned are you about the following? And then we had a list of concerns. And it’s interesting that they divide pretty neatly into things that everyone is concerned about and things that the non-users are far more concerned about. So for example, the top thing that people were concerned about was, and I quote, AI-generated text can include factual errors or hallucinations. So even the people who use it are like, okay, we’ve got to be careful with this thing, because sometimes it comes up with false information.
For example, if you ask it for my bio, it will tell you that I have a bachelor’s degree in classics from Harvard University and an MBA from the Harvard Business School, and I’ve never attended Harvard. So it’s like, no, no, no, no, no, no, that’s not right!
On the other hand, there are some other things where there’s a very strong difference of opinion. So for example, on the question of whether AI-generated text is eroding the perception of value and expertise that experienced writers bring to a project, 92% of the non-users of AI agreed with that, but only 53% of the heaviest users of AI agreed with that. So if you use AI a lot, it’s like, well, actually, this isn’t as big of a problem as people think.
On the environmental question, 85% of non-users were concerned about AI’s use of resources, but only 52% of the heavy users were concerned about that. And I want to point out something which I think is probably the most interesting division here. If you ask writers, should AI-generated text be labelled as such, they mostly agree that it should. But if you ask them, should text generated with the aid of AI be labelled as such, the people who use AI often think, well, you don’t need to know that I used it to do research, because it’s not visible in the output. Whereas the non-users are like, no, you used AI, you have to label it. So that’s a good example of a place where the difference of opinion is going to have to somehow get settled over time.
Neville Hobson
That’s probably one of those things that will take a while to settle, given what you see. You and I talked about this recently in relation to verification. Some people, and I know some who are very, very heavy users of AI, don’t check the stuff that is output with the aid of their AI companion. That’s crazy, frankly, because as Shel noted in our conversation on the latest episode of the FIR podcast, your reputation is the one that’s going to suffer when you get found out that you’ve done this and haven’t disclosed it.
But it also manifests itself in, you know, the great em-dash debate that went on for most of this year. Right. I wrote a post a couple of weeks ago about this, and about ChatGPT’s claim that you can tell it not to use em-dashes.
And my experience is, I’ve done that, and it still goes ahead and does it. It apologizes each time, and it still goes ahead and does it, you know. But you know what? That post produced an incredible reaction from people: 40,000 views in a couple of days. For me, that’s a lot, frankly. And I did an analysis, which I published just a few days ago, that showed the opinions people have about it are widely divided.
Some see it as, I’m not going to give up my whole heritage of writing just because of this stupid argument; others say you’ve got to stop using it because, even if AI got it from us in the first place, it signals that you’re using AI, and therefore your writing is no good. That kind of discussion was going on. So I see this continuing. It’s crazy. But looking at the data highlights, there’s some really fascinating stuff in there, Josh, that caught my eye.
Starting with the headline that writers see AI as both a tool and a threat. And yes, that’s quite clear from what you’ve been saying. But also the hallucinations concern: 91% of writers. And I think that’s true no matter how experienced you are. It concerns me, which is why I’m strongly motivated to check everything, even though sometimes you think, God, just do it, don’t question it.
I reviewed something recently that had 60-plus URLs mentioned in it. So I checked them all, and 15 of them just didn’t exist, returning 404s or server errors. And yet the client had issued it already, without checking that kind of thing. So you’ve got a job to educate them.
So I guess this is all peripheral to the question I wanted to ask you, which is about the correlation that comes across in the data highlights between AI usage and positive attitudes towards it, as opposed to the negative attitudes: the users are very highly positive.
How should we interpret this divide, I guess, is the question. You may have touched on this already, actually. Is it just a skills gap? Is it a cultural gap? Or what is it? Because the attitudes, like much these days, seem to me to be quite polarised: strong opinions, pro and con. How do we interpret this?
Josh Bernoff
All right, so I want to go back to a few of the things that you said here. I have some advice in my book, Build a Better Business Book, and it’s generally good advice about checking the facts that you find; finding false information on the internet has always been a problem for people who are citing sources.
There used to be a guy in the Wall Street Journal, Carl Bialik, called The Numbers Guy, who would actually write a column every month about some made-up statistic that got into print. All that AI has done is make that much more efficient. But people do need to check. And it’s interesting: you learn when you use these tools that it’s subtle. If you click through and confirm, okay, that is a real source, that’s fine.
But often, it will tell you that that source says X or Y and then you go and you read it and you’re like, no, it doesn’t actually say that. So yes, you are now citing a source that when you go look at it says the opposite of what you thought it said. Real professional writers know that that is an important part of their job and it just happens to be easy to behave incompetently and irresponsibly now.
But believe me, I deal with professional publishers all the time and there are all these clauses now in their contracts which basically say you have to disclose when you’re using AI and if there’s false information in here then you’re responsible for it and we might not publish it. I will say this, so let’s just put this in a different context. So think about Photoshop.
Okay, when Photoshop started to become popular, people were like, wait a minute, we can’t believe what we see in pictures. Maybe the person doesn’t have skin that’s all that smooth. Maybe that background is fake. But in contexts where you’re supposed to be doing factual stuff, like a photo that’s in a magazine, there are safeguards against this, and the users have learned what is legit and what isn’t. And I think also that readers have learned that, okay, we have to be a little skeptical about what we see. AI has made it possible to do that with text way more easily, but it’s still the case that as a reader, you need to be skeptical, and as a user, you need to be sophisticated about what you can and can’t do and what is and is not legit.
I do these writing workshops with corporations. I’m doing one next week with a very large media company. And I’m trying to help them to understand, start with clear writing principles and use AI to support them as opposed to use it to substitute for your judgment, generate crap, and then do a disservice to the poor people who are reading it.
Shel Holtz
I am always amused when I see people expressing such angst over AI-generated images taking money from artists. And I didn’t hear the same level of anxiety when CGI became the means of making animated movies. What happened to the people who inked the cels? They’re out of a job. But no, Pixar got nothing but praise.
Josh Bernoff
Yeah, I know. Right, right. Yes, yes, right. And it’s like, no, no, should they have actually drawn 26,000 dinosaurs in that scene? And I’m like, you were entertained, admit it, and you know that they’re not real, and that’s it…
Shel Holtz
Yeah. Josh, your data shows that thought leadership writers and PR and comms professionals are the heaviest users of AI. Thought leadership writers, 84% of them and 73% of PR and comms professionals are using AI in their writing. Journalists are somewhere around half of that at 44%.
Did you glean any insights as to why the people who are pitching the media are using this more than the people being pitched?
Josh Bernoff
I have some theories about that. What I’m about to tell you is not supported by the data, although I could go in and start digging around; there’s infinite insight in here if I do that. So I think journalists are a little paranoid about it. Yes, 44% of the journalists said that they used it, but only 18% said that they used it every day, which puts them at the very bottom of all the professional writers.
So I think they are not only concerned about their livelihood, but also that they don’t wanna make a mistake. They don’t wanna get anything into print that’s false. Whereas if you look at the thought leadership writers and the PR and comms professionals, it’s a simple question of volume. These people are under pressure to produce a very large amount of information.
And I can tell you as a professional writer that there are certain tasks that you really would rather not spend time on if an AI can do them. So if you’re gathering up a bunch of background information, and Perplexity does a better job on contextual searches than Google, which it absolutely does, then you’re probably going to use it.
Now, there is the risk that these people are basically generating large quantities of crap and then sharing it. But I think that that rapidly becomes unproductive. If you’re basically spamming people with AI slop, then they will immediately become sort of immune to that, and then you lose trust and at that point you’ve destroyed your own livelihood.
Neville Hobson
Yeah, absolutely. I want to ask you about one of the other findings you had in here: ChatGPT is the clear leader amongst all writers, with 76% using it weekly. I use ChatGPT more than any other tool. I’m very happy with it. It does what I want. But in light of how fast things move in this industry, how things change, how do you see that shifting? Or does it not actually matter at the end of the day which tool you use, as long as it delivers what you want from it?
Josh Bernoff
Well, what you have here is people spending hundreds of millions of dollars to become the default choice, the sort of dominant company here. And if you look at past battles of this kind to be like, who is the top browser or what’s the top mobile operating system, this is a land grab.
If you sit out and wait and see what happens, you could very easily end up on the sidelines, which is why there’s so much money flooding into this. ChatGPT definitely has an early lead, but there was an article in the Wall Street Journal yesterday, I believe, about the fact that they’re very concerned about Google. And the reason is, on a sort of features-and-capabilities basis, is Google better?
It depends on what day it is; they keep making advances. But Google’s AI does integrate with people’s basic use of Google in other ways, for example, in email. And wait a minute, have we not heard this story before, where a company that has a dominant position in one area attempts to leverage it in another area? Gee, that’s like the whole story of the tech industry for the last 30 years!
The same is true elsewhere. My daughter works at a company that uses Microsoft products, which is very common. And so everybody in that company is using Microsoft Copilot, because they got it for free. If you ask me who is going to have the top market share in 18 months, I have no clue, but I don’t think that ChatGPT is necessarily in a position to say, ours is clearly better than everybody else’s, and so everyone will use what we have.
I will point out, and I’m trying to remember if I have the number on this, that the average person who is using these tools in a sophisticated way is typically using at least three or four different tools. So just like you might use Perplexity for one web search and Google for another, you might decide to use Microsoft Copilot in some situations and Google Gemini in others.
Neville Hobson
That’s interesting, because I started using Copilot recently through a change in how I’m doing something for one particular area of work I’m interested in. And it blew me away, because Copilot is using GPT-5. And I sense the output I get from the input I give it is in a similar style to what ChatGPT would write.
So I’m impressed with that, and I haven’t attached any further significance to it. Maybe it’s coincidental, but I quite like that. So that’s actually getting me more accustomed to Microsoft’s product. These little things, maybe this is how it’s all going to work in the end.
Josh Bernoff
Yeah, yeah. I will point out that the professional writers I talk to are very enamoured of Claude as far as the creation of text goes. And definitely, if you’re doing a web search, Perplexity has some pretty superior features for that. I often find myself telling ChatGPT, don’t show me anything unless you can provide a link, because I’m not going to trust you until you do that. And I’m going to check that link and see what it really says.
So, you know, the development of specialised tools for specialised purposes is absolutely going to continue here.
Shel Holtz
Yeah, I’ve been using Gemini almost exclusively since 3.0 dropped. I find it’s just exponentially better, but I’m sure that when ChatGPT releases its next model, I’ll be back to that. In the meantime, I did see Chris Penn commenting, I think just yesterday, on that Wall Street Journal article, pointing out that Gemini is baked into Google Docs and Google Sheets and all the Google products, whereas OpenAI doesn’t have any products to bake its models into.
And that’s a clear advantage to Google. But Josh, you revealed in the research that 82% of non-users worry that AI is contributing to bland and boring writing. What I found interesting was that 63% of advanced users felt the same way, that it’s creating this AI slop.
So as a counsellor to writers, how would you counsel people? Our audience is organisational communicators, so I’ll say, how would you counsel organisational communicators, when cutting through the noise is vital and you need to reach your audience? I deal mostly with employee communication, and we need employees to pay attention to a message despite the fact that there are so many competing things out there clamouring for their attention. How do you avoid the trap of bland and boring writing when you’re so desperate to cut through that clutter and capture that attention?
Josh Bernoff
Yes, well, large language models create bad writing far more efficiently than any tool we’ve ever had before. So, and of course, I’m talking to both corporate writers and professional authors all the time about this. And so basically, the general advice is that the more you can use this for things behind the scenes, the better off you are and the more you use it to actually generate text that people read, the worse off you are.
I’m gonna give you a very clear example. So I am currently collaborating with a co-writer on a book about startups for a brilliant, brilliant author who really knows everything about startups, has an enormous background on it. And he has insisted that I use AI for all sorts of tasks. In fact, he’s like, you know, why are you wasting your time when you could just send this thing off and tell it to do the research? And we’ve done some spectacular things like I had a list of startups and I told it to go out on the internet and get me a simple statement about who they are, what financing stage they’re in, what category they’re in.
And it goes off and it does that. That would have taken me days. But because this guy is intelligent, there’s a reason he’s hired me and not replaced me with AI: once it’s time to actually create something that’s going to be read by people, we have to rewrite that from beginning to end. As a professional writer, that is how I make a living. And what I write is the complete opposite of bland and boring. And he doesn’t want bland and boring. He wants punchy and surprising and… insightful.
So, you know, you can say both: use AI for all of this other stuff, and don’t you dare publish anything that it creates. And I feel like that is generally the right advice, that everybody is going to end up where I have ended up, which is, even in a corporate environment, it can support you, but you’re not using it to generate text that people are going to actually read.
Neville Hobson
It’s a really good point you’ve made there, I think, because one of the findings in the survey report is that AI-powered writers are sure they’re more productive, and I definitely sit in that category. I’m absolutely convinced I’m probably in that, what is it, 92% or whatever it is of the advanced users who think so. How do I prove it?
Well, it’s not so much the output, it’s the quality. It kind of tunes your mind into some of the reports that you read, or what others are saying elsewhere: use AI tools to support you in doing the stuff that AI is better at than humans. Unstructured or structured data, whatever it is, finding patterns, all that stuff that we can all read about. And you do the intellectual stuff, the stuff humans are really good at.
Josh Bernoff
Absolutely.
Neville Hobson
And the crafting of great phrases and sentences. And I’ve said to lots of people, I don’t see too many people doing that, so they’re obviously not in the advanced stage, let’s say. I find it hard to believe, frankly. Really I do. In conversations I’ve had during this year with those who diss this, who say, like some of your respondents have said, you know, it’s the, what is it, psychotic plagiarism machine or whatever it was, the stuff…
Josh Bernoff
Sociopathic, but yes.
Shel Holtz
Both things can be true.
Neville Hobson
…sorry, sociopathic. But it amazes me, it truly does. And I think we’ve got this situation where clearly there is evidence that if you use this in an effective way, it will help you be productive.
It will augment your own intelligence, to use a favourite phrase of mine. So AI is augmented intelligence, not artificial. And yet that still encounters brick walls and pushback on a scale that’s ridiculous, and it’s worse in an organisation when that’s at a leadership level, I would say.
So how do we kind of make this less of a threat as it’s seen by others, or is this part of the issue that those naysayers just see all this as a massive threat?
Josh Bernoff
Well, boy, that’s a deep question. So first of all, I always start with the data here, because I want to distinguish between my opinions and the data. And the data says that the more you use AI, the more likely you are to say that it is making you more productive. And as you said, 92% of the advanced users said that it made them more productive. And interestingly, 59% of the advanced users said that it actually made the quality of their writing better.
So it’s not just producing more, but producing better stuff. And one more statistic here: we actually asked them how much more productive. The average across all the writers who use it is 37% more productive. But like any tool, you need to get adept at it and learn what it’s good at and what you can use it for. And this technology has advanced way, way ahead of the learning about how to use it.
So there basically has to be a movement in every company and all writing organizations to teach people the best way to take advantage of it, and what not to do. And in fact, one of the things that I recommend and that I tell some of the corporate clients I work with is: find the people who are really good at this, and then have them train the other people.
Because there’s nothing better than somebody saying, okay, here, let me show you what I can do with this.
I’ll just give you an example. So this report itself, obviously people are saying, well, did you use AI to write the report? I started out trying to use AI to analyse the data and I found that it was not dependable. I’m like, okay, I’m gonna have to calculate these statistics the old-fashioned way with spreadsheets and data tools. Every single word of the report was written by a human, me, at least most people still think I’m a human.
But we had, you know, thousands of verbatims to go through. And the person to whom I delegated the task of finding the most interesting verbatims used AI to go in and find verbatims that were interesting: some positive ones, some negative ones, with some diversity in terms of who they were from, so we weren’t quoting all technical writers. And that’s a perfect use, to go into a huge corpus of text and pull some of the interesting things out, because doing that manually would have taken days.
I can’t help mentioning here because in preparation for doing this report, I interviewed some of the most advanced writers that I knew, including Shel. And one of my favourite examples is a very intelligent woman who, Shel, I know you know, is completing her doctoral degree right now. And she told me that the review of existing research is an enormous element of this, and that using AI to help summarise and compare the existing research would save her three years in the completion of her doctoral degree.
You cannot walk away from that level of productivity. And she’s full of enormously creative ideas. So this is not a bad writer. This is an excellent writer. But what she’s doing is, she’s saying, I had this brilliant idea; hey, is there anything in the literature that’s similar to this? Oh, wait a minute, these people came up with the same thing, so I can’t claim the authorship. Or, it went across all the research and nobody else is saying that. Great, this is an original thing I can include. That’s a smart way to use it.
Shel Holtz
Yeah, just this past week I interviewed our new safety director, who just came on board. I used Otter AI to record the interview. I like that because I’m able to focus on the interview subject rather than scribble notes. And what I did was upload the transcript of the interview that I downloaded from Otter into Gemini, because the interview led to a lot of digressions and a lot of personal back-and-forth that interrupted the substance of what we were trying to get to.
So I just said, clean up this transcript, get rid of everything that doesn’t have to do with his coming on board at our company as the new safety director, his background and all of that, and then categorise it. But don’t change any of his words, right? I want the transcript to be exact. And it did exactly what I asked it to do.
For me to take that transcript… well, first of all, for me to take all those notes and then put them in some sort of usable form before I even start writing the article would have taken a considerable amount of time. And yet it didn’t mess at all with what he was telling me in response to my questions. And I was able to use that to produce the article that I wrote.
One of my favourite uses though, as a writer, is when there’s a turn of phrase that I want to use and I can’t quite draw it out. I know what it is. It’s right there. So I’ll share what I’m writing about. And this is what I’m trying to say. And there’s a turn of phrase I’m thinking of. What is it? And it’ll say, well, it might be one of these. And almost always from the list it gives me, that’s the one I was thinking of.
Josh Bernoff
This is a way better thesaurus than anything else I’ve ever used. And at the age we’re at, sometimes you know there’s a word and you can’t bring it to mind. I’m like, yeah, that was the word I was looking for.
Shel Holtz
Yeah. Josh, you found that 40% of freelancers and agencies say that AI has eaten into their income. If you were advising, say, a boutique PR agency today on how to survive in 2026, what’s the one pivot that you would advise them that they need to make based on this data?
Josh Bernoff
I think you need to focus on talent that has two skills. One is, clear and interesting writing skills are even more valuable than they used to be. So, you know, if you say, well, who are the best writers in our organisation, do everything you can to hang on to those people, because you’re going to need that to continue to stand apart from the AI slop.
And then the other side of that is to become as efficient as possible with AI for the rote tasks. So you also want people who are really skilled at using these tools to conduct research. I interviewed a woman at the Gathering of the Ghosts, which is the event where this research was first presented. She matches up ghostwriters to author clients. And she gets, like, a background briefing on every single person that she goes and pitches. And AI is really good at that.
So when she gets on the phone with these people, they’re like, wow, she’s really smart. She did a whole lot of homework here, and this is the kind of person I want to work with. Okay, it has nothing to do with her writing ability. It has to do with her ability to take advantage of these tools. And yeah, I think that we’re going to be able to get more done with fewer people, which is a tale as old as time, really. That’s just the direction that things go with automation.
But I can’t resist pointing out the flip side here. I think a bunch of people, including publishers, are now delegating work to AI and laying people off, and it’s doing a bad job. I ghostwrote a book recently where the copy editing came back and I was like, this is inadequate. This is a terrible job. This was obviously done by a machine, and done badly by a machine.
And my client and I decided that in order to avoid errors, we would hire our own professional copy editor because the publisher had skimped in exactly the wrong place. And the professional copy editor did a fantastic job. It cost a bunch of money, but we were much happier with that.
Neville Hobson
To continue this theme slightly, I think I had a question, part of which Shel has answered, about the page in the report with the headline, nearly half of writers have seen AI kill a friend’s job. I found that interesting, because there’s constant talk in some of the mainstream media, and some of the professional journals too: is AI going to replace jobs? One report comes out, and before you know it, the headline says yes, it is. Another report comes out saying no, it’s not.
But these are intriguing, I found, because they’re actual real-world examples you’ve got from people who answered the questions you asked them in the survey. It says only 10% of corporate workers have had AI-driven layoffs at their organization, but 43% of writing professionals know someone who has lost their job to AI. So is this a trend that’ll continue this way, do you think? Or how would you interpret the overall picture that you’ve shown on this particular page, page 20 in the report?
Josh Bernoff
Okay. Yes. So it was interesting. We expected to hear a lot more direct responses of, yes, they’ve done layoffs at my workplace as a result of this. And the fact that only 10% of the people who worked in corporations, which includes media companies, said that they had seen this was an indication to me that, at least at the time we did this survey in August and September, that was not a huge trend.
The fact that a lot of people know somebody who lost their job, you know, if one person loses their job and they have 12 friends, then we’re gonna get 12 positives on that. But that having been said, I’m not convinced that even if we did this survey now, which is what, like four months later, that we would get the same results.
It’s clear to me that there’s a lot of layoffs happening that a significant amount of it is AI stimulated. A certain amount of that is coders, for example. They need fewer coders to do the same programming now. My daughter got a computer science degree a few years ago because it was like everyone knew that that was how you got a job and you know, it’s not so easy right now.
I think that we’re going to see two things. First of all, we’re going to see this trend of people being laid off because AI increases productivity across the entire employment spectrum. It’s a huge trend that’s likely to happen. But I also think that you’re going to find companies backtracking and saying, oh my God, we thought we could have all this productivity, but it turns out that we need more humans here than we realised and we need to go back and bring them back.
I feel that it is driven to a certain extent by investment mania to cut back expenses, and that in the end, as in so many cases, when you replace people with automation, you end up with a poor-quality result.
Shel Holtz
I want to talk about fiction authors for a minute. And I find it intriguing that they are so universally anti-AI. Neville and I are both friends with JD Lasica. I don’t know if you know JD. He’s got a product out there called Authors AI. It’s a model that he and his partners have trained. It’s not using ChatGPT or Gemini or any of the large frontier models.
But what you do is you feed your novel to it, presumably in a first draft, and it analyses the novel against all of the criteria it has been trained on about what makes a good novel and gives you a report: you need to do a better job of character development here, the story arc is weak here, things like that. So, I mean, there are uses for fiction writers beyond actually writing for you, but you did note that they almost universally detest it. Only 42% use it and they are…
Josh Bernoff
No, no, no, no. Let’s be clear here. It was the non-users among the fiction authors who almost universally detested it.
Shel Holtz
Okay, I misread that. Emphatically angry was the language that jumped out at me. I’m wondering for those of us in business writing, is there a lesson we should take away from fiction writers about the preservation of the soul of a narrative?
Josh Bernoff
No, no, it’s interesting to me. So I’ve been conducting surveys now for probably 20 years. And one of the main things that you learn is that it’s never black and white. There’s never a hundred percent of the people that agree with anything. There’s never 0% of the people that agree with anything. Until this survey, that is, when I found that fiction authors who do not use AI are as close as you can get to unanimous about it being a horrible, evil thing.
So yes, I was like, 100% of the people agreed with this? I’ve never seen that in my entire career of analysing surveys. But to give you a little bit more thoughtful answer than that: no, soulless fiction is boring and nobody wants to read it. And that happens to also be true of soulless nonfiction writing.
So let’s just take this report. If I used AI to generate the text in this report, you wouldn’t be talking to me because I found the most interesting things in the most interesting language to describe it. And the same applies if you’re writing about, you know, should we adopt a new project management methodology?
That’s a story, you know? We have this problem. This solution was suggested to us. We compared this to that. It looks like this is going to save money, but here are the things that I’m really worried about. This is an emotional story. And really, all nonfiction writing needs to have a story element to it. Until AI becomes a little bit less soulless, which may never happen, you still need humans to tell those stories.
Neville Hobson
Yeah, I agree with that. So before we get to that question of what question we should have asked you, I’m looking at page 28, what these findings mean for the writing profession. And it’s really well done, Josh; you succinctly condensed it all. But to avoid me trying to interpret what you said, can you give us a summary of what these findings do mean for the writing profession?
Josh Bernoff
Well, thank you.
You know, it’s interesting, Neville. There was always a section like that at the end of my reports at Forrester Research, because that’s what they were paid for. And in this case, I said, no, I’m just going to do the data. And my partner here, the people at Gotham Ghostwriters, Dan, was like, why don’t you write something about what this means for the industry? I’m like, I can do that. Good idea! Okay.
So I wrote this and I think that in corporate environments, it is important now to understand what this is good for and to take the people who’ve become advanced at it and use them to help train other folks. And it’s especially challenging, I think, in media organisations because on the one hand, they are under enormous pressure, profit pressure.
You know, think about a newspaper or magazine or publisher. It’s very difficult for them to be profitable; it’s a highly competitive environment. If they can cut costs, they’re gonna try and find a way to do it. On the other hand, it is exactly their content that’s getting hoovered up and ripped off.
So they need to have a balance here. I think on a political basis, they need to lobby and basically do everything possible to preserve the value of their content and not have it used for training purposes without any compensation. But I also think they have to be very prudent about what kinds of things they take AI to do and what they don’t. Just like the people at that publisher who used the AI copy editing that did a terrible job. If they economise in the wrong places, it’s gonna be a very bad scene.
I can’t help but drop this in here. I learned recently about a romance bookstore, a bookstore that sells romances, a physical bookstore. And they’re using AI to analyse trends, figure out which books to stock and how to organise them and what to put into their marketing. And I just thought that was fascinating because the content is as human and emotional as you can be, and yet they figured out a way to use AI to be successful.
Shel Holtz
That’s really interesting. So let’s ask you that question now, Josh. I mean, we could spend another hour here, but what question didn’t we ask that you were hoping we would?
Josh Bernoff
I think that the most interesting finding here, and there were so many fascinating findings, so that’s saying something, was in the questions that we asked about what tasks you do with AI. And what really amazed me was the huge variety of tasks. So I wasn’t surprised that research was popular, but I’m looking over to the side here just to make sure I get the information exactly accurate.
I wasn’t surprised that a replacement for web search, and finding words or phrases like a thesaurus, was something that people wanted, but I was surprised by how many people use AI as a brainstorming companion. That they’re actually asking questions like, can I write it this way or that way? What suggestions do you have? And getting great ideas back on that. To summarise articles is very popular, but you know, generate outlines, find flaws and inconsistencies, act as a devil’s advocate, deep research reports. I mean, the people who get good at this, they keep coming up with new ways to use it.
So I think that if you look at what’s happening in the future, all this debate about AI-generated slop getting published is much less interesting to me than the capability that this has to make writers more powerful, smarter, more interesting, come up with more ideas, and to basically be an infinitely patient assistant that can get you to be the best writer you can possibly be.
Shel Holtz
Yeah, that devil’s advocate is one of the very first things I used it for when ChatGPT was first introduced. I would say, I’m planning on communicating this in this way. The goal, the objective, is to get employees to think, believe, do X. What pushback am I going to get from this approach? And nine times out of 10, it would come up with a very valid list of reasons that this isn’t going to work. It would lead me to re-strategise.
Josh Bernoff
Well, Shel, as you know, you can contact me anytime if you need someone to tell you that you’re wrong! But I’m not available at three in the morning, and ChatGPT is, so from that perspective, it’s probably better. Plus my rates are much higher than theirs.
Shel Holtz
Josh, how can our listeners find you?
Josh Bernoff
Well, the most interesting thing is to subscribe to my blog at bernoff.com. I actually write a blog post about books, writing, publishing, and authoring every weekday. People say, why do you do that? The only good answer I have is it’s a mental illness, but you may as well take advantage of it. And we shared the URL for this research report and certainly anyone who’s interested in writing a business book, just do a search on build a better business book and you can get access to that.
And certainly if someone is so desperate that they really want a human to help them, I am available for that.
Shel Holtz Thanks so much, Josh. We really appreciate your time.
Josh Bernoff Okay, it was really great to talk to you.
Neville Hobson Yeah, a pleasure, likewise, thank you.
The post AI and the Writing Profession with Josh Bernoff appeared first on FIR Podcast Network.

Dec 9, 2025 • 14min
FIR #491: Deloitte’s AI Verification Failures
Big Four consulting firm Deloitte submitted two costly reports to two governments on opposite sides of the globe, each containing fake citations generated by AI. Deloitte isn’t alone. A study published on the website of the U.S. Centers for Disease Control (CDC) not only included AI-hallucinated citations but also purported to reach the exact opposite conclusion from the real scientists’ research. In this short midweek episode, Neville and Shel reiterate the importance of a competent human in the loop to verify every fact produced in any output that leverages generative AI.
Links from this episode:
Deloitte was caught using AI in $290,000 report to help the Australian government crack down on welfare after a researcher flagged hallucinations
Deloitte allegedly cited AI-generated research in a million-dollar report for a Canadian provincial government
Deloitte breaks silence on N.L. healthcare report
Deloitte Detected Using Fake AI Citations in $1 Million Report
Deloitte makes ‘AI mistake’ again, this time in report for Canadian government; here’s what went wrong
CDC Report on Vaccines and Autism Caught Citing Hallucinated Study That Does Not Exist
The next monthly, long-form episode of FIR will drop on Monday, December 29.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Neville Hobson: Hi everybody and welcome to For Immediate Release. This is episode 491. I’m Neville Hobson.
Shel Holtz: And I’m Shel Holtz, and I want to return to a theme we addressed some time ago: the need for organizations, and in particular communication functions, to add professional fact verification to their workflows—even if it means hiring somebody specifically to fill that role. We’ve spent the better part of three years extolling the transformative power of generative AI. We know it can streamline workflows, spark creativity, and summarize mountains of data.
But if recent events have taught us anything, it’s that this technology has a dangerous alter ego. For all that AI can do that we value, it is also a very confident liar. When communications professionals, consultants, and government officials hand over the reins to AI without checking its work, the result is embarrassing, sure, but it’s also a direct hit to credibility and, increasingly, the bottom line.
Nowhere is this clearer than in the recent stumbles by one of the world’s most prestigious consulting firms. The Big Four accounting firms are often held up as the gold standard for diligence. Yet just a few days ago, news broke that Deloitte Canada delivered a report to the government of Newfoundland and Labrador that was riddled with errors that are characteristic of generative AI. This report, a massive 526-page document advising on the province’s healthcare system, came with a price tag of nearly $1.6 million. It was meant to guide critical decisions on virtual care and nurse retention during a staffing crisis.
But when an investigation by The Independent, a progressive news outlet in the province, dug into the footnotes, the veneer of expertise crumbled. The report contained false citations pulled from made-up academic papers. It cited real researchers on papers they hadn’t worked on. It even listed fictional papers co-authored by researchers who said they had never actually worked together. One adjunct professor, Gail Tomlin Murphy, found herself cited in a paper that doesn’t exist. Her assessment was blunt: “It sounds like if you’re coming up with things like this, they may be pretty heavily using AI to generate work.” Deloitte’s response was to claim that AI wasn’t used to write the report, but was—and this is a quote—”selectively used to support a small number of research citations.” In other words, they let AI do the fact-checking and the AI failed.
Amazingly, Deloitte was caught doing something just like this earlier in an audit for the Australian government. Only months before the Canadian revelation, Deloitte Australia had to issue a humiliating correction to a report on welfare compliance. That report cited court cases that didn’t exist and contained quotes from a federal court judge that had never been spoken. In that instance, Deloitte admitted to using the Azure OpenAI tool to help draft the report. The firm agreed to refund the Australian government nearly 290,000 Australian dollars.
This isn’t an isolated incident of a junior copywriter using ChatGPT to phone in a blog post. This is a pattern involving a major consultancy submitting government audits in two different hemispheres. The lesson is pretty stark: The logo on your letterhead isn’t going to protect you if the content is fiction. In fact, this could have long-term repercussions for the Deloitte brand.
But it doesn’t stop at consulting firms. Here in the US, we’ve seen similar failures in the public sector. The Make America Healthy Again (MAHA) commission released a report with non-existent study citations, and a presentation on the CDC website—that’s the Centers for Disease Control—cited a fake autism study that contradicted the real scientists’ actual findings.
The common thread here is a fundamental misunderstanding of the tool. For years, the mantra in our industry was a parroting of the old Ronald Reagan line: “Trust but verify.” When it comes to AI though, we just need to drop that “trust” part. It’s just verify. We have to remember that large language models are designed to predict the next plausible word, not to retrieve facts. When Deloitte’s AI invented a research paper or a court case, it wasn’t malfunctioning. It was doing exactly what it was trained to do: tell a convincing story.
And that brings us to the concept of the human in the loop. This phrase gets thrown around a lot in policy documents as a safety net, but these cases prove that having a human involved isn’t enough. You need a competent human in the loop. Deloitte’s Canadian report undoubtedly went through internal reviews. The Australian report surely passed across several desks. The failure here wasn’t just technological, it was a failure of human diligence. If you’re using AI to write content that relies on facts, data, or citations, you can’t simply be an editor. You must be a fact-checker.
Deloitte didn’t just lose money on refunds or potential reputational hits; they lost the presumption of competence. For those of us in PR and corporate communications, we’re the guardians of our organization’s truth. If we allow AI-generated confabulations to slip into our press releases, earnings statements, annual reports, or white papers, we erode the very foundation of our profession. Communicators need to update their AI policies. Make it explicit that no AI-generated fact, quote, or citation can be published without primary source verification. And you need to make sure that you have the human resources to achieve that. The cost of skipping that step, trust me, is a lot higher than a subscription to ChatGPT.
Neville Hobson: It’s quite a story, isn’t it really? I think you kind of get exasperated when we talk about something like this, because we’ve talked about this quite a bit. Most recently, in our interview with Josh Bernoff—which will be coming in the next day or so—where this very topic came up in discussion: fact-checking versus not doing the verification.
I suppose you could cut through all the preamble about the technology and all this stuff, and the issue isn’t that; it’s the humans involved. Now, we don’t know more than what’s in the Fortune article, the one I’ve seen in Entrepreneur magazine, and the link that you shared. Nowhere does any of them disclose detail about exactly what went wrong other than the citations. So we don’t know: was it prompted badly, or what? Either way, someone didn’t check something. I don’t know how much you need to really hammer home the point that if you don’t verify what the AI assistant has produced in response to your input, then you’re just asking for this kind of trouble.
I did something just this morning, funnily enough, when I was doing some research. The question I asked came back with three comments linking to the sources. A bit like Josh—because Josh mentioned this in our interview—every instruction to your AI should include: “Do not come back with anything unless you’ve got a source.” And so I checked the sources, one of which just did not exist. The document concerned, on the website of a reputable media company, wasn’t there. Now, it could be that someone had moved it, or it did exist but in another location. But the trouble is, when these things happen, you tend to fall on the side of, “Look, they didn’t do this properly.”
So I’m not sure what I can add to the story, Shel, frankly. Your remark towards the end that your reputation is the one that’s going to get hit: you look stupid. You really do. And your credibility suffers.
I found that Entrepreneur quoted a Deloitte spokesperson saying, “Deloitte Canada firmly stands behind the recommendations put forward in our report.” Excuse me? Where’s your humility there? Because you’ve been caught out doing something here. And they’re saying, “We’re revising it to make a small number of citation corrections which do not impact the report findings.” What arrogance they are displaying there. Not anything about an apology—or fine, let’s say they don’t need an apology—but a more credible explainer that at least gives the sense that they empathize here, rather than this arrogant “Well, we stand by it.” It’s just a little citation? It’s actually a big deal when you cite something that either doesn’t exist or is a fake document. Exactly. So I don’t know what I can say to add anything more. But if they keep doing this, they’re going to lose business big time, I would say.
Shel Holtz: It didn’t exist. Yeah, I understand their desire to stand by the report. I have no doubt that they had valid information and made valid recommendations, but that’s hardly the point. The inaccuracies call all of the report into question, even if at the end of the day they can demonstrate that they used appropriate protocols and methodologies to develop their recommendations based on accurate information.
You still have this lingering question: “Well, you got this wrong, what else did you get wrong? What else did you turn over to AI that you’re not telling us about because you didn’t get caught?” Even if they didn’t do any of that, those questions are there from the people who are the ones who paid for this report. If I were representing a government that needed this kind of work, first of all, I would be hesitant to reach out to Deloitte. I would be looking at one of their competitors.
If I had a long-standing relationship with Deloitte, and even if I had a high degree of trust in Deloitte, I would still add a rider to the contract that says either you will not use AI in the creation of this report, or if you do, you will verify each citation and you will refund us X dollars of the cost of this report for each inaccurate or invalid citation that you submit. I’d want to cover my ass if I were a client, based on them having done this not once, but twice.
Neville Hobson: Right. I wonder what would have happened if the spokesman at Deloitte Canada had said something like, “You’re absolutely right. We’re sorry. We screwed up big time there. We made a mistake. Here’s what happened. We’ve identified where the fault lay, it’s ours, and we’re sorry. And we’re going to make sure this doesn’t happen again.”
Shel Holtz: “Here’s how we’re going to make sure it doesn’t happen again.” Yeah, I mean, this is like any crisis. You want to tell people what you’re going to do to make sure it doesn’t happen again.
Neville Hobson: Yeah, exactly. So they say—and you mentioned—”AI was not used to write the report, it was selectively used to support a small number of research citations.” What does that mean, for God’s sake? That’s kind of corporate bullshit talk, frankly. So they use the AI to check the research citations? Well, they didn’t, did they? “Selectively used to support a small number of research citations…” I don’t know what that even means.
So I don’t think they’ve done themselves any favors with the way they’ve denied this and the way the reporting has spread out into a variety of other media, all basically saying the same thing: they did this work for this client and it was bad. They didn’t do a good job at all.
Shel Holtz: Yeah. So, I’m, as you know, finishing up work on a book on internal communications. It was originally 28 blog posts and I started this back in, I think, 2015. So a lot of the case studies have gotten old. So I did some research on new case studies and I used AI to find the case studies. And then I said, “Okay, now I need you to give me the links to sources that I can cite in the end notes of each chapter that verify this information.”
In a number of cases, it took me to 404s on legitimate websites—Inc, Fortune, Forbes, and the like. But the story wasn’t there and a search for it didn’t produce it. And I would have to go back and say, “Okay, that link didn’t work. Show me some that are verified.” And sometimes it took two, three, four shots before I got to one where I look and say, “It’s a credible source, it’s a national or global business publication or the Financial Times or what have you, the article is here and the article validates what was in the case study,” and that’s the one I would use. But it takes time, and I think any organization that doesn’t have somebody doing that runs the risk of the credibility hit that Deloitte’s facing.
Neville Hobson: Yeah, I mean, this story is probably not going to be front-page headlines everywhere at all. But it hasn’t kind of died yet. Maybe there’s going to be more in professional journals later on about this. But I wonder what they’re planning next on this because the criticisms aren’t going away, it seems to me.
Shel Holtz: No, and as the report noted, it’s not just the Deloittes of the world. It’s Robert F. Kennedy’s Department of Health and Human Services justifying their advisory board’s decisions to rewrite the rules on vaccinations based on citations that not only don’t exist, but that contradict the actual research that the scientists produced.
Neville Hobson: Well, there is a difference there though. That’s run by crazy people. I mean, Deloitte’s not run by crazy people.
Shel Holtz: Not as far as I know. That’s true. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #491: Deloitte’s AI Verification Failures appeared first on FIR Podcast Network.

Dec 8, 2025 • 21min
ALP 290: Balancing skills and personality when hiring a new team member
In this episode, Chip and Gini discuss the complexities of hiring in growing agencies. They highlight the challenges of finding skilled, reliable employees who align with agency values.
Sharing personal experiences, Gini explains the pitfalls of hasty hiring and the benefits of thorough vetting and cultural fit. They stress the importance of a structured hiring process, including clear job roles, career paths, and appropriate compensation. They also underscore the value of meaningful interviews, proper candidate evaluations, and treating the hiring process as the start of a long-term relationship.
Lastly, Chip and Gini emphasize learning from past mistakes to improve hiring effectiveness and employee retention. [read the transcript]
The post ALP 290: Balancing skills and personality when hiring a new team member appeared first on FIR Podcast Network.


