Ten musicians listened to 38 straight days' worth of music, 92 hours apiece, typing out a description of each clip before moving on to the next. In total, they used 370,000 words to describe all of the clips. Google fed the music and the words into a deep learning model to work out which words correlate with which musical sounds. If the project went well, any of us would be able to tell Google what we wanted and it would generate brand-new music from our instructions.
On this show we explore three different AI and machine-generated music technologies: vocal emulators that let you deepfake a singer's or rapper's voice; AI-generated compositions and text-to-music generators like Google's MusicLM and OpenAI's Jukebox; and musical improvisation technologies. We listen to the variety of music these technologies generate, and two guitarists face off against an AI in improvised guitar solos.
Along the way, we talk to philosophers of music Robin James and Theodore Gracyk about what musical creativity is and whether machines are more or less creative than human musicians. Barry also gives his take on each technology and what it means for the future of musical creativity.