Deep learning long ago solved the problem of making generated music sound like human composition. To a large extent, AI is revealing to us how much of human music-making actually works: the difference between jazz piano and bluegrass piano, for instance, emerges very quickly. Getting an AI to generate everything, both the composition and the sound itself, is what the team behind Google's MusicLM set out to do.
In this episode we explore three different AI and machine-generated music technologies: vocal emulators that let you deepfake a singer's or rapper's voice; AI-generated compositions and text-to-music generators like Google's MusicLM and OpenAI's Jukebox; and musical improvisation technologies. We listen to the variety of music these technologies generate, and two guitarists face off against an AI in improvised guitar solos.
Along the way, we talk to philosophers of music Robin James and Theodore Gracyk about what musical creativity is and whether machines are more or less creative than human musicians, and Barry gives his take on each technology and what it means for the future of musical creativity.