Charlie Harding, music journalist and co-host of the podcast Switched on Pop, joins David Pierce, editor-at-large at The Verge, to explore the ubiquitous presence of Auto-Tune in modern music. They trace its origins back to the oil industry, discuss its evolution from a correction tool to an artistic staple, and analyze its impact on authentic musical expression. The duo reflects on the future of AI in music, debating how the technology might redefine creativity and authenticity, while also considering the resurgence of raw sounds in a tech-heavy landscape.
Auto-Tune, originally a corrective tool, transformed into a creative instrument that reshaped vocal production in various music genres.
Cher's 'Believe' popularized a pronounced use of Auto-Tune, establishing the 'Cher effect' that defined a new sound in pop music.
The future of music may see AI influencing vocal styles similarly to Auto-Tune, raising questions about authenticity and artistic expression.
Deep dives
Understanding Auto-Tune's Origins
Auto-Tune, a pivotal audio technology, was developed by Andy Hildebrand, who moved to music software after years of analyzing seismic data for the oil industry. Applying the same signal-processing techniques he had used on seismic waves, he released the tool in 1997 as a pitch-correction aid, designed to subtly nudge out-of-tune notes back into pitch. Musicians quickly found far broader uses for it, and its influence spread across genres well beyond its original intent.
Cher and the Birth of the Cher Effect
The first major breakthrough for Auto-Tune came with Cher's 1998 hit 'Believe', which popularized an extreme use of pitch correction. The song was revolutionary because it applied Auto-Tune not as a subtle tuning fix but as a pronounced digital effect that defined Cher's vocals. Snapping each sung note to its target pitch almost instantly created a robotic sound that became widely known as the 'Cher effect', shifting how vocals were produced in pop music. It marked Auto-Tune's transformation from a corrective tool into a creative instrument, one that has influenced many artists' sound palettes since.
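To make that contrast concrete, here is a minimal Python sketch, assuming a simplified model rather than Antares' actual algorithm: a single retune_speed knob blends a detected pitch toward the nearest equal-tempered semitone. A low setting gives the gentle nudge Auto-Tune was designed for; a setting of 1.0 snaps the note instantly, which is roughly what produces the hard, robotic 'Cher effect'. The function names and parameter are illustrative, not the plugin's real API.

import math

A4 = 440.0  # reference pitch in Hz

def nearest_semitone_hz(freq_hz: float) -> float:
    # Snap a frequency to the nearest note of the 12-tone equal-tempered scale.
    semitones_from_a4 = round(12 * math.log2(freq_hz / A4))
    return A4 * 2 ** (semitones_from_a4 / 12)

def correct_pitch(freq_hz: float, retune_speed: float) -> float:
    # retune_speed = 0.0 leaves the note untouched; 1.0 snaps it fully to pitch.
    target = nearest_semitone_hz(freq_hz)
    return freq_hz + retune_speed * (target - freq_hz)

# A slightly flat A (437 Hz): gentle correction vs. hard snapping.
print(correct_pitch(437.0, 0.3))  # ~437.9 Hz, a barely audible nudge
print(correct_pitch(437.0, 1.0))  # 440.0 Hz, fully quantized

Real pitch correction works frame by frame on audio and applies the shift gradually over time; the sketch only captures the difference between a gentle nudge and an instant snap.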
The T-Pain Revolution
T-Pain furthered Auto-Tune's integration into music with his 2005 hit 'I'm Sprung', which showcased its potential in R&B and hip-hop. Unlike Cher, T-Pain embraced Auto-Tune as a primary artistic tool, building his vocal style around the technology and encouraging other artists to experiment with it. Its rise around 2008 coincided with Kanye West's '808s & Heartbreak', which further normalized its use in rap and let artists sing melodic hooks rather than rely solely on traditional vocal techniques. Though initially met with skepticism, Auto-Tune's broad adoption reshaped genres and set new expectations for vocal performance in popular music.
Criticism and Acceptance of Auto-Tune
Auto-Tune has faced criticism for homogenizing vocal sounds and allowing untrained singers to achieve stardom without traditional technique. Critics argue that the technology strips the 'human' element from music, leaving it with less authenticity and emotional depth. Nonetheless, many contemporary artists harness Auto-Tune creatively to sharpen their own sound rather than mask their limitations. Ultimately, its widespread acceptance reflects evolving definitions of artistry, with listeners increasingly open to diverse vocal expression and stylization.
Looking Ahead: The Future of Auto-Tune and AI
As Auto-Tune remains a dominant force in music production, questions are emerging about its future alongside advances like AI. AI could plausibly mimic, or even generate, distinctive musical styles, paralleling how Auto-Tune reshaped the soundscape of the late 20th and early 21st centuries. Whether AI will develop a recognizable sound of its own or remain a background tool is still an open question. Artists may also push back by producing rawer, less processed sounds, ultimately leading to a more nuanced landscape of musical expression.
Popular music changes all the time, but there’s been one consistent element in practically everything released in the last two decades: Auto-Tune is everywhere. What started as a simple audio processing tool in the 1990s has become the dominant force in music. Artists are training to sing with Auto-Tune; songs sound like Auto-Tune. Like it or hate it, Auto-Tune is everywhere. And to be clear, most people like it.
On this episode of The Vergecast, music journalist and Switched on Pop co-host Charlie Harding tells us the story of Auto-Tune. (Disclosure: Switched on Pop is part of the Vox Media Podcast Network, as is The Vergecast.) It starts, of all places, in the oil and gas industry. It involves artists like Cher and T-Pain, spreads like wildfire throughout the music business, and quickly becomes so utterly ubiquitous that you probably notice when Auto-Tune isn't used more than when it is.
As we barrel toward whatever the “AI era” of music will be, we also look for clues in Auto-Tune’s story that point to what’s coming next. We talk about the distinct sound that comes from tools like Suno and Udio, how artists will use and abuse AI, and whether we should be worried about what it all means. We haven’t yet found the “Believe” of the AI music era, but it’s probably coming.