

Librosa: Audio and Music Processing in Python with Brian McFee - TWiML Talk #263
May 9, 2019
Brian McFee, an assistant professor at NYU and the creator of the Librosa library, shares his journey in music technology and data science. He discusses the core functions of Librosa for audio processing, the challenges of beat tracking in music, and his experience developing a jazz search engine. McFee also walks through workflows for audio analysis in Python, covering essential tools such as the Fast Fourier Transform and spectrogram visualization. His aim is to make audio analysis more accessible to developers.
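As a concrete starting point for that kind of workflow, here is a minimal sketch using librosa's public API (loading audio, an FFT-based spectrogram, beat tracking, and visualization). The file name and plot styling are placeholders rather than details from the episode.

```python
# Minimal audio-analysis sketch with librosa: load a file, compute an
# FFT-based spectrogram, estimate tempo/beats, and plot the result.
# "audio.wav" is a placeholder path, not a file from the episode.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

y, sr = librosa.load("audio.wav")                 # samples and sampling rate

# Short-time Fourier transform: the FFT applied over sliding windows
D = librosa.stft(y)
S_db = librosa.amplitude_to_db(np.abs(D), ref=np.max)

# Beat tracking: estimated tempo (BPM) and beat positions
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)
print("Estimated tempo (BPM):", np.round(tempo, 1))
print("First beat times (s):", np.round(beat_times[:5], 2))

# Visualize the log-frequency spectrogram
fig, ax = plt.subplots()
librosa.display.specshow(S_db, sr=sr, x_axis="time", y_axis="log", ax=ax)
ax.set(title="Log-frequency spectrogram")
plt.show()
```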
AI Snips
From Machine Learning to Music
- Brian McFee's interest in music began during his PhD at UC San Diego.
- He overheard fellow students discussing building a "Google for Music" and was intrigued.
Dissertation and Data Challenges
- McFee's dissertation focused on music recommender systems.
- His goal was to create a system where users provide an example, and the algorithm generates a playlist.
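As a loose illustration of query-by-example playlisting (a deliberately simplified sketch, not the dissertation system itself, which drew on far richer signals), one could summarize each track as a mean MFCC vector with librosa and rank a library by cosine similarity to the seed track; the file paths below are hypothetical.

```python
# Hypothetical query-by-example playlist sketch: embed each track as a mean
# MFCC vector, then rank the library by cosine similarity to the seed track.
import numpy as np
import librosa

def track_embedding(path: str, n_mfcc: int = 20) -> np.ndarray:
    """Return a fixed-length feature vector summarizing one track."""
    y, sr = librosa.load(path, duration=60.0)     # first minute is enough here
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def playlist_from_example(seed: str, library: list[str], k: int = 5) -> list[str]:
    """Rank library tracks by similarity to the seed track's embedding."""
    seed_vec = track_embedding(seed)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    scored = [(cosine(seed_vec, track_embedding(p)), p) for p in library]
    return [p for _, p in sorted(scored, reverse=True)[:k]]

# Example usage with placeholder file names:
# print(playlist_from_example("seed.mp3", ["a.mp3", "b.mp3", "c.mp3"], k=2))
```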
Complexity of Comparing Tags and Lyrics
- Comparing tags and lyrics for music similarity is complex due to differing text statistics.
- Lyrics contain stop words and natural language structure, while tags are more concise.
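A toy sketch of that point about text statistics, using invented snippets and a hypothetical stop-word list: the same bag-of-words treatment yields very different distributions for lyrics and tags, which is part of what makes direct similarity comparisons tricky.

```python
# Toy illustration (made-up text) of why lyric and tag statistics differ:
# lyrics are dominated by stop words and repetition, tags are short and curated.
from collections import Counter

STOP_WORDS = {"the", "a", "and", "i", "you", "to", "of", "in", "it", "my"}

lyrics = "oh the night and the lights and i dance to the beat of my heart".split()
tags = ["electronic", "dance", "synth", "upbeat", "2010s"]

def stop_word_fraction(tokens):
    counts = Counter(t.lower() for t in tokens)
    stop = sum(c for t, c in counts.items() if t in STOP_WORDS)
    return stop / sum(counts.values())

print(f"stop-word mass in lyrics: {stop_word_fraction(lyrics):.0%}")  # large
print(f"stop-word mass in tags:   {stop_word_fraction(tags):.0%}")    # ~0%
```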