CNN's John Sutter talks with Matt and Martin from WellSaid about creating a synthetic Robert Smith. The new approach to voice cloning, they explain, is very different from how it used to be done. It takes millions of training cycles, but in the end you get a computer program that can look at any sentence and make a very good guess at how Robert Smith would say it. And while it reportedly took around 80 hours of recordings to build Siri with the old concatenative approach, this new approach requires far less audio.
In Part 1 of this series, AI proved that it could use real research and real interviews to write an original script for an episode of Planet Money. Our next task was to teach the computer how to sound like us. How to read that script aloud like a Planet Money host.
On today's show, we explore the world of AI-generated voices, which have become so lifelike in recent years that they can credibly imitate specific people. To test the limits of the technology, we attempt to create our own synthetic voice by training a computer on recordings of former Planet Money host Robert Smith. Then we introduce synthetic Robert to his very human namesake.
There are a lot of ethical and economic questions raised by a technology that can duplicate anyone's voice. To help us make sense of it all, we seek the advice of an artist who has embraced AI voice clones: the musician Grimes.
This episode was produced by Emma Peaslee and Willa Rubin, with help from Sam Yellowhorse Kesler. It was edited by Keith Romer and fact-checked by Sierra Juarez. Engineering by James Willetts. Jess Jiang is our acting executive producer.
We built a Planet Money AI chatbot. Help us test it out: Planetmoneybot.com.
Help support Planet Money and get bonus episodes by subscribing to Planet Money+ in Apple Podcasts or at plus.npr.org/planetmoney.