
The Media Copilot

Latest episodes

Jan 26, 2024 • 15min

Reviving the Dead With AI, With Siggi Arnason

"Just imagine the whole society just crumbling over AI." People in Iceland don't have to imagine it. That quote from Siggi Arnason, CEO of OverTune, describes the fallout from a viral video that his company's AI-powered technology helped create. The video was a comedy sketch featuring a recreation of a popular deceased Icelandic comedian, Hermann Gunnarsson. After it aired, over 90% of the country ended up seeing it, and in response, the country's parliament is fast-tracking legislation around deepfakes and the use of AI. Another effect of the controversy is that OverTune has gone viral. Arnason spoke to John Biggs on The Media Copilot podcast about the skit and the resulting firestorm.

Arnason, a former musician and self-described "lover of cats," is unique in that he never wanted to be an AI influencer. But when his team built Iceland's first deepfaked political comedy sketch, he knocked over a can of cod. Now his country is wrestling with the concepts of ownership, creativity, and the future of AI.

Join us in New York on February 1! We are planning our first meetup in Manhattan and we'd love to meet you! Sign up to our Meetup Group here and we'll send you the details shortly.

The Media Copilot is a podcast and newsletter that explores how generative AI is changing media, journalism, and the news. Subscribe to the newsletter. Follow on X. Subscribe to the podcast on: Apple, Spotify. Music: Favorite by Alexander Nakarada, licensed under the Creative Commons BY Attribution 4.0 License. © AnyWho Media 2024
Jan 12, 2024 • 34min

How Media Can Thrive in the Age of AI, With Louise Story

When The New York Times filed its landmark lawsuit accusing OpenAI of violating copyright by training its large language models (LLMs) on its journalism, some savvy observers had been expecting such a move for months. One of those people is Louise Story. Louise is a former Times staffer, spending several years as an investigative journalist before getting involved in strategy and building new formats for the paper (such as live video). She also led content and product strategy for The Wall Street Journal, including its approach to AI, so few people have a better perspective on how newsrooms regard technology platforms. She now offers that perspective as an independent consultant, helping guide media companies on digital strategy and how they can adapt to an AI-mediated future.

I spoke to Louise for The Media Copilot podcast. We of course talk about the lawsuit and dissect the stakes for the players involved and the media. I was also excited to get her perspective on the infamous Sports Illustrated debacle and how incidents like it have added to the stigma of generative content. And of course, I couldn't let her leave without getting her to share some practical advice on how newsrooms can take their first steps into the world of GenAI.
Jan 5, 2024 • 45min

Sniffing Out AI Writers, With Lee Gaul

When ChatGPT showed how easy it was to write an "original" academic paper that could get a passing grade, the need for some kind of AI detector was suddenly starkly clear. The market quickly responded, and GPTZero, created by 23-year-old Edward Tian, was an overnight sensation last spring. College professors now routinely check papers for AI authorship.

In the media world, the need for such a tool was perhaps less urgent, since editors tend to have a tighter grip on how copy is produced, and few writers would risk their reputations trying to pass off synthetic articles as their own. That is, until the boondoggle with Sports Illustrated, where articles supplied by a third party appeared to have been written by AI (note: the company that supplied the articles claims they were human-written). The incident got widespread attention, and it underscored the need for AI detection in media, especially when you publish content at scale, from multiple sources. Even if your in-house editorial team is strictly human-driven, freelancers and syndication partners may not have gotten the memo.

So do managing editors need to add "copy and paste article into AI detector" to the long list of editors' duties? They can, but another solution may be to build it into existing processes and tools, which is exactly why Copyleaks exists. The company began as a plagiarism detector and now markets itself as an AI detection company. It claims to be able to detect synthetic text across models, in multiple languages, and in detail (i.e., showing which parts of a document are AI-generated, as opposed to a simple yes/no result).

Lee Gaul is the enterprise sales director at Copyleaks, and he's this week's guest on The Media Copilot podcast. Our conversation goes beyond simple AI detection and explores the big-picture issues driving the demand for the service, as well as the increased need for human judgment when machines enter the picture.
Dec 22, 2023 • 43min

Applying ChatGPT to Financial News, With Matt Martel

At a time when most newsrooms across the world are considering, studying, and, in some cases, experimenting with generative AI, at least one publication has enthusiastically embraced the technology, building it into workflows and publishing "synthetic" content on the regular. BusinessDesk in New Zealand uses ChatGPT and other AI models to augment what it's serving up to subscribers, using the tech's generative capabilities to both monitor news events and create content around them almost instantly. After launching AI-powered articles and summaries in the spring, BusinessDesk is going further, using the technology to summarize lengthy reports and assist in news gathering.

Matt Martel, general manager of BusinessDesk parent NZME, spoke to The Media Copilot about why the BusinessDesk newsroom jumped into the realm of generative AI so quickly, how it avoids the pitfalls of the tech without slowing things down, and the ways the company's organizational structure made it so friendly to integrating GenAI into real-world workflows.

You can hear the full version of this PREMIUM episode of The Media Copilot by becoming a paid subscriber.
Dec 8, 2023 • 18min

Running Your Own Newsroom's LLM, with Viktor Shpak

Newsrooms can only get so far with pasting prompts into ChatGPT. Once a media business wants to get more serious with generative AI, it should think seriously about running, fine-tuning, and perhaps even building its own large language model (LLM). There are a number of approaches to this, and it's easy enough to download a commercial or open-source model to run on your private cloud, or even your MacBook. But what are the factors to consider when rolling your own AI operation, and how expensive can it get?

In this week's conversation, John Biggs talks with Viktor Shpak, lead developer for VisibleMagic, about what it takes to run your own LLM in the privacy of your own office. He also explores the future of AI-generated content and code, pointing out that the rising AI tide will, theoretically, lift all boats. We're grateful we had the chance to probe the mind of an extremely plugged-in developer.
Dec 1, 2023 • 40min

AI Journalism Goes Global, With Charlie Beckett

In the year since ChatGPT arrived on the scene, journalism has grappled with the ethics of generative AI. From robot-written articles to the proliferation of "fake" images, the problems the media needs to think through have been bubbling in the background for a long time, but they've been exacerbated by the scale that generative AI makes possible.

One person who's spent a lot of time thinking about all the perils and promise AI brings to journalism is Charlie Beckett. A professor in the Department of Media and Communications at the London School of Economics (LSE), Beckett is the founding director of Polis, the school's international journalism think tank. He's currently leading Polis's Journalism and AI project, which hosts the JournalismAI Festival, starting on December 6. The festival promises to unite dozens of journalists who are innovating and using generative AI in newsrooms all over the world. It'll take on topics like detecting bias in content, the role AI can play in covering elections, and how small and local newsrooms can leverage the tech to punch above their weight.

In talking to Beckett, I was struck by the tone of optimism that emerged in our conversation. Even though we tackled thorny topics like the ethics of generative images in war and the recent generative-content brouhaha involving Sports Illustrated, it's clear his focus is on how this manifestly transformative technology can help the truth that journalists seek shine through. I hope you enjoy the discussion as much as I did.

You can register for free for the JournalismAI Festival here.
Nov 21, 2023 • 41min

Is This the End of OpenAI?

The past few days have turned the entire industry of generative AI upside down. Before the weekend, OpenAI was sitting comfortably in pole position, riding high from a series of recent announcements designed to keep it there. Most of the world saw ChatGPT as the default starting place for anyone taking their first steps into AI, and the company's models as setting the standard, with competitors fighting for scraps of mind share.

Now we're in a completely different world. Ever since its board fired CEO Sam Altman in a surprise move Friday afternoon, the situation at OpenAI, and the marketplace for generative AI tools, has been in flux. There have been so many developments since Friday that it's been difficult to keep up (here's a good summary), but the current state of affairs is a standoff between OpenAI's employees and the board. The staff wants Altman reinstated and the board to resign; otherwise, they're all going to follow Altman to Microsoft (far and away OpenAI's biggest investor), which offered him a job as CEO of a new AI subsidiary. Microsoft has said it would indeed hire the defecting staffers.

On this week's Media Copilot podcast, John Biggs and I are joined by Peter Bittner from The Upgrade to discuss the possible scenarios for OpenAI and what they mean for customers… and competitors. Whatever happens, one thing has been made very clear: the field of generative AI will not be the same after this.
Nov 17, 2023 • 1h 15min

Staying One Step Ahead of ChatGPT, With Brennan Woodruff

In this week's conversation, I talk to Brennan Woodruff of GoCharlie about how AI services based on content generation can contend with a ChatGPT-dominated world. Plus, John Biggs and I break down the week's news: YouTube's guidelines for synthetic content, a new study rating the big models on hallucinations, and an inflection point on the thorny issue of fair use.

John Biggs and I offer a crash course on using AI for marketing and media. Learn more about the 3-hour session here.

When you're running a startup, you're already in a race. When you're running an AI startup, you're essentially in the New York City Marathon: it's already a slog, and you're wall-to-wall with thousands of competitors of all stripes. Whether or not you succeed depends on the kind of race you're running: Do you want to win the whole thing, beat your personal best, or be top in your category?

GoCharlie appears to be aiming for the third option. The AI startup is one of many that specialize in creating marketing copy, images, and other material, but it differentiates itself by applying its own large language model (LLM) trained specifically for that use case. That would seem to give the young company an advantage, but now that OpenAI has made it easy for anyone to create task-specific GPTs with assistants (and is creating a platform to sell them), can GoCharlie get past this "extinction-level event" for AI startups?

I spoke with co-founder Brennan Woodruff about GoCharlie, what it brings to the table for marketers and media people, and how AI entrepreneurs can stay in the race even when running alongside a ChatGPT that's wearing rocket boots.

In this week's AI news that's most relevant to media:

What even is fair use anyway? Ed Newton-Rex, the VP of Audio at Stability AI (the creator of the Stable Diffusion image generator) very publicly resigned from the company, arguing strongly against the perspective, common among tech companies, that training AI models on copyrighted material constitutes fair use.

Let's put "Hail Hydra" at the end of every deepfake: YouTube kinda-sorta took a stand on deepfakes, introducing new requirements for creators to label "synthetic" content made to look realistic, but allowing a parody/satire exception. It's an important step, though it still leaves a lot up to YouTube's human moderators. Also: anyone making bank off of songs made from cloned voices of various artists is on notice now that those artists can force synthetic songs to be taken down. Progress? Probably. But other platforms (a certain single-letter network comes to mind) will likely have different standards.

Wait, people use Notion? This week Notion launched Q&A, an AI-powered feature that can scan all the material you've put on the service to inform answers to specific queries, essentially letting you have a conversation with your work. This is the dream of Google Bard's feature that connects with all your Gmail and Google Docs, but Notion's thingie probably has a better chance of giving useful answers, since it probably won't have every grocery list you've made since 2006 in there.

The Hallucination Olympics: Rankings for which generative AI model hallucinates the most are out, and boy, Google's Gemini upgrade can't come fast enough: Google PaLM, which powers Bard, was dead last. Perhaps not surprisingly, OpenAI's models lead the pack, though some smart folks were able to get Llama 2 into the same category. Hallucinations will never go away entirely, but we're optimistic that the robots will continue to get better at, you know, facts. Now if we could just do the same with bias…
Nov 10, 2023 • 32min

Putting AI Where Reporters Actually Work, With Ryan Restivo

There are thousands of generative AI tools for content, and some actually work well. Generative tools can create SEO headlines, social copy, document analysis, and lots more for reporters and editors, all ready to enhance your productivity. However, there are roadblocks to incorporating these tools in day-to-day work. Beyond the basic concerns about quality and hallucinations, often the workflow itself is the issue: incorporating a new tool typically means another login, another browser window open, and a new app to get familiar with. Then, if you're constantly copying and pasting from the tool to your CMS and back again, the gains in efficiency start to drop. In other words, GenAI has the best chance of being effective when it's integrated into existing workflows.

That's the magic of YESEO, a tool developed by Ryan Restivo in partnership with the Reynolds Journalism Institute (RJI) at the University of Missouri. While there are any number of tools that will serve up SEO headlines for news stories, YESEO was created specifically for Slack, the collaboration platform found in almost every newsroom.

I spoke to Ryan about developing YESEO (which he began before the general release of ChatGPT), how newsrooms can develop a pragmatic approach to generative AI tools, and what a reporter's workflow looks like in a future world where GenAI tools are as common as spellcheckers.
Nov 5, 2023 • 51min

How Media Can Survive in the Generative AI Era, with Brian Morrissey

It's still early days for generative AI, but the change it will inevitably impose on the news media is massive. If that sounds scary to you, you should talk to someone. We'd recommend Brian Morrissey, author of The Rebooting newsletter and host of The Rebooting Show podcast, both of which get into the weeds of the media business.

In this wide-ranging conversation, Brian and Pete Pachal attack the big questions around GenAI and media: What happens when AI becomes the dominant force in search and SEO traffic to news sites dries up? What does the publisher-audience relationship look like in an AI-mediated world? And how can media companies get ahead of the coming GenAI wave, and maybe even ride it?
