Most of what you hear about AI right now is about text, but Mikey Shulman (co-founder and CEO of Suno) would argue that audio is a far more interesting medium to work with. How do you use AI to generate music? What makes audio data uniquely difficult to work with? And how do you build audio models that cater to people's unique, subjective preferences in music?
Suno is building a future where anyone can make great music. In this episode of Barrchives, I sat down with Mikey (who, like his co-founder, is a musician) to talk about how they do what they do, from why they chose a transformer-based architecture to how they test new models when outputs are so subjective.