Music producer & sound engineer John Vitale is creating music to help people optimize brain states. After co-founding Focus@Will, where he designed music and soundscape channels for flow state, he moved on to found Brain Music Labs, where he’s crafting new ways to use entrainment-based music and media for reducing stress, anxiety, and cravings with partners like Total Brain and Felix.

John is a great guy and I’m thrilled to be able to share this conversation about the importance and potential of audio as a technology frontier. Also, John created the intro you hear at the beginning of every Deep Future podcast, and I’m super grateful to him for taking that on.

Pablos: I’ve been playing with this 3D spatial audio lounge online.

John: Which one?

Pablos: High Fidelity.

John: I know Philip. I’m trying to think of the best use case for that and I was trying to get them out. In fact, Philip presented at Metal.

Pablos: I didn’t go to that one. I heard him calling for them. I’ve been playing with High Fidelity a bunch.

John: I’d love to get your take on it. Where do you see a great use case for High Fidelity?

Pablos: Where Philip is coming from is trying to develop these tools to improve virtual reality experiences, with a vision towards something like Second Life, but in VR, where you can just walk around and hang out with people. The audio substrate is a big deal. That’s a big part of it. I’m not telling you anything you don’t know. The neurological cues around audio are a big deal. The way I think about it: for all of human history, until the last century, all of our conversations were zero-latency, face-to-face. One hundred percent spatially positioned. The sound was coming from where the speaker was sitting, every time. That is not true on phones, on Zoom, or on anything that’s mediated online. High Fidelity tries to use that to make you feel like the connection is more real. They’re busy trying to go further with this and develop it for VR.
It’s going to be exciting to see where they get to. With the High Fidelity tool as we know it now, we can wander around on a map and chat with people. It’s super compelling in ways. I’ve made friends in there, which I can’t say I’ve ever done on Zoom. It feels like hanging out. If you go to High Fidelity with headphones on and you close your eyes, and you’re there with a half dozen people, it’s like we’re at a dinner table. They’re all spatially positioned, each in their own spot. When they speak, you can hear them as if they were there. It’s a special experience I much prefer to Zoom.

John: I like your dinner table view. You should have a dinner table there, because they have different maps. What I liked is you could go there and go, “We’re all going to go see the DJ event.” You can go walk 100 feet from the DJ, like you’re at Burning Man, and go, “That’s cool, but let’s go talk in the corner.” You and your friends go over there. That’s the magic of what’s going on.

Pablos: That is one of the experiences that I love about it. In my view, there are a lot of places you could go with it and there’s a bunch of potential. One important thing that came out of playing with it was understanding how much better audio can be online, and how much we’ve given up because the history of audio online maps to telecom. They have a massive network and they’re trying to reduce bandwidth consumption, because they’re trying to get more users on the same amount of spectrum. We don’t have those problems a lot of the time. What we have is a problem where all the compression, bandpass filtering, and latency adds up to make you feel like you’re not fucking real when I’m talking to you on the phone. That’s not cool. My brain thinks you’re fake when I’m talking on Zoom. It’s eroding our relationships, not substantiating them. One of the metrics Philip Rosedale told me they found was that the average Verizon call in America has 350 milliseconds of latency.
Your brain can handle about 180 milliseconds before you start to not feel real. We’re talking over each other and there’s a mismatch. Your brain is cycling on, “Am I getting through?” I’m trying to have a conversation here and your brain is stuck going, “How do I connect with this person?” I’d be better off with tin cans and string. We’ve got fucking cell phone companies that have got us down to six kilobits per second for audio, or something stupid.

John: It takes out a lot of nuance, which is important for us to have that connection and experience, rather than just getting data across, like, “I can give you some information.”

Pablos: “Tell me the information.” It’s like, “I can get a credit card number across, and your expiration date.” But, “How am I supposed to get across the fact that I fucking love you guys? Before you die, I want you to know I love you.” “What’s that again? Hold on. You’re breaking up.” It’s quite sad. High Fidelity is cool because it does give people a chance to hang out again online, especially in COVID. I’ve gotten to have some cool experiences there.

John: I checked it out a few times. I got my little URL code with my little server. I got a bunch of people to show up and we’re all running around. I felt like, “It’s cool, because now you can bombard any event with your friends and know that you can go disappear with them and have that conversation,” like you would end up having. I love your dinner table idea, because we do Zoom dinners and everybody is cued into having to have the visual, but maybe just sitting around and having dinner with your friends, so you can all chat in that audio spectrum, is going to make you feel like you’re there.

Pablos: The cool thing about High Fidelity to me is, of course, the audio substrate they’ve built. The wandering-around-the-map thing, I’m a little less sold on. There are probably other user interfaces that we could hook up to the audio that would be more compelling. Even if it was just a dinner party.
You drop in and it automatically places you in a seat around the table in the audio space. You don’t even need anything on the screen. Just close your eyes. It’s freeing.

John: You can be hanging out like, “I could have dinner, or I could be making dinner and be part of a dinner conversation. I don’t have to worry about how I look on Zoom and all that stuff.”

Pablos: That’s what’s working. If you look at Clubhouse, people are hanging out. It’s freeing to not have to worry about the video aspect of it, because it’s not buying you anything. It’s making things worse. Something Philip was adamant about when I talked to him was that the video detracts from the experience, versus just being audio. I’ve used beta versions of High Fidelity and stuff that have video in it, and I agree. Now I have the video back and it’s making it worse. I’d rather be on audio and have it be good. If you think about it, with your headphones on in High Fidelity with your eyes closed, it’s roughly equivalent to if we were hanging out at the dinner table with the lights off. It’s damn close to that. That’s a real experience that we could have and have had. I don’t know how often you have dinner with the lights off, but I do all the time.

John: There’s an easier connection point and people know it. When you do conference calls versus Zoom calls, everybody will pop in for the conference call, but for a Zoom call, “I’ve got to make sure I’m there. I’m going to have to be present. Who’s going to be watching me?” They’re 2 different things, 2 different flavors, and 2 different purposes.

Pablos: Zoom and video conferencing, as we know it, sucks enough that a lot of the time it’s making things worse. On the subconscious level, I can be here saying, “Cool. Awesome. Let’s do it,” but your brain is telling you, “This guy is a fucking cartoon character made up by the evil Disney corporation.” Your brain is not telling you the same thing the person is. I have a teleprompter.
I have a DSLR aimed at you from right behind your face on the prompter, so when I’m talking to you on Zoom, I’m staring right into your eyes. I’m trying hard to connect.

John: I have a couple of friends that have done that and it’s a big difference, because if I see you coming through your teleprompter and DSLR lens, I am getting a much better 3D representation in everything: the right aperture and depth of field.

Pablos: I’ve got all that going. I look amazing on Zoom. If you don’t have it, your gaze is not into my eyes.

John: It’s usually off-center, because everyone is over here. You’re looking at the people, but the camera is here.

Pablos: Zoom won’t let you move your head around. Ideally, you should be able to move your head under the camera. Zoom won’t let you do that. No tool lets you do that.

John: You’ve got your prompter right here. I’ve got a couple of friends who have done the same thing. You’re looking right into that camera. Do you have it right above?

Pablos: No, I have a lot of screens and shit, so I have a prompter here. When I use it, I’m looking right at it and ignoring all the other stuff.

John: It makes a huge difference.

Pablos: There are ways you could embed these cameras into displays.

John: Those cameras that you clip on top, that points to a hole in the market. A mini prompter that has one of those $160 cameras sitting right on top of it, that’s a goldmine on Amazon. As soon as people see the difference, like, “This is how I come off as a projection when I’m not doing that.” It’s like, “I’m talking to you and I’m looking over here.”

Pablos: Teams does it, but also FaceTime now has a feature which will use deepfake technology to shift your eyes. On a FaceTime call, it knows, because Apple knows the geometry of the phone and everything. If you look in your iPhone settings under FaceTime, there’s a feature called Eye Contact. It establishes natural eye contact while on FaceTime.

John: I never even saw it before.

Pablos: People don’t even know it’s there. It’s on by default.
FaceTime calls, and this is going back to those neurological cues, are better than Zoom without you even realizing it.

John: Apple is hip to connection. That’s why their photo programs and the videos they make for you are all about the human experience and connecting with your friends. They know those nuanced differences help.

Pablos: Let’s get on to some actual topics, because that’s interesting stuff that we could talk about for seven hours. It would be super cool, because I’m unlikely to have a conversation with anyone else who’s had the career you’ve had. How would you describe it? Is it audio engineering or producing music? I know music must be the unifying theme.

John: As a producer and engineer, it started out with the love of music, and then everyone said, “You can’t do that. You’ve got to go to school and get a technical degree.” I went to school and got an Electrical Engineering and Computer Science degree. In it, I was like, “This is technical. I want to go back to music.” I ended up in the film and music lab at school. I got a double major, then I got out and opened a recording studio, because MIDI and everything was happening. My trajectory starts as a recording engineer, then I realized I wasn’t an engineer. Engineers are in the weeds making the audio sound good, but a producer is teaming up with the engineer, the composers, and the other people, and I realized, “I’m probably going to be better in the producer’s spot.” I learned the engineering part to get to producing, because there’s an integration theme in this too.

Pablos: Being an engineer, I always think of it, not necessarily in music but in anything, as understanding the tools that help you to invent and create at the bounds of what’s possible.

John: At that time, MIDI was bubbling.

Pablos: Is that in the ‘80s?

John: Yes. I remember I went to this electronic music expo and saw Hybrid Arts and the first Digidesign product. It was called Sound Tools.
I was looking at some waves on the screen, and they needed somebody to figure out how to put a wave on the screen. Until then, we were cutting 2-inch tape with razor blades. DATs came out and you couldn’t edit a DAT yet. This was the program to edit a DAT, and Sound Tools became Pro Tools in about 1.5 years. My mind was blown. I was like, “Good thing I went to tech school, because I can understand what’s going on under the hood. Now I see where all these tools are going.” I wanted to make records more than ever, because I could see that this was going to be an integration between humans and machines in the mid-‘80s, late ‘80s, early ‘90s.

Pablos: I always thought if I had another concurrent life to live, music would be the coolest thing to work on, for the reasons you’re talking about. Not because I have anything to bring to music, but because this is one case where we got it right with computers early on, partly because of MIDI. Everything can talk to everything. All these devices can talk to each other. Even though the analog gear just had that quarter-inch jack or an XLR, you could plug anything into anything. I had a weird experience that I’ll never forget. It was 1983 with the first CDs. CDs, in those days, were 520 MB. It blew our minds, because my floppy disk could hold 128,000 bytes. The first 3.5-inch floppy disks could hold about 400,000. Having 500 MB on a disc blew our minds. We had Walkmans with tapes in them, but the idea was that the music could be digital, that it lived on a CD, and that it could hold so much. There was no compression. The computers were too slow, hot, and expensive, so you couldn’t have any compression. Uncompressed audio on a CD. You probably remember this: the first ones were glass. That was a selling point for CDs. It was like, “They last forever.” They quickly figured out, “We can make them out of plastic and you have to buy new ones.” At that time, I remember seeing, “The music can be encoded digitally.
A whole album can fit on a CD uncompressed.” We didn’t even have a notion of compression at that time, because compression was too expensive. Computer chips couldn’t handle it. I wasn’t thinking of compression at all, but I was thinking, “Now I’ve got a metric. I know that an uncompressed track on a Duran Duran album is 50 MB.” I had a sense that a computer chip that could hold 128,000 bytes was about 0.5 cubic inches. I started adding them up. I’m like, “Eventually, I’ll be able to put the track on a chip.” I added up that one album. At that time, it was going to take about a cubic foot of chips.

We didn’t have the transistor density yet, so I’m like, “With a cubic foot of chips, I can hold one album.” Moore’s law had been described, but it certainly hadn’t penetrated my mind yet. I had some visceral sense of it by then. I knew that the 128,000-byte chip, the year before, was only 16,000 at the same size. I had one friend in the entire town who knew enough about computers to appreciate the idea, but I said, “Someday, we’ll be able to have a song on a chip, and you’ll be able to plug together all the songs that you want. That’ll be your Walkman with no moving parts.” The thing I envisioned didn’t have compression in it, so it could have been even better. We would go even further now, of course.

John: You could have a whole bandolier.

Pablos: That’s exactly what I was imagining.

John: All your favorite songs. You can queue them up, plug them in, and play them.

Pablos: You just get the songs that you want. You plug them in back to back, because chips are like that anyway. It’s like Lego bricks. You’d have this solid-state music thing. It blew his mind, and nobody else could have hung in there long enough to even comprehend it. I was twelve years old, envisioning the future of music and portable music. All I was trying to say is you’ve got to live out one of my fantasy careers in life, which is getting to create with all these tools and plug all this stuff together.
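For what it’s worth, the twelve-year-old’s math checks out. A quick sketch, where the 128,000-byte chip and half-cubic-inch figures come from the story above and the audio parameters are standard uncompressed CD audio (44.1 kHz, 16-bit, stereo):

```python
# Uncompressed CD audio: 44,100 samples/s, 2 bytes per sample, 2 channels
bytes_per_second = 44_100 * 2 * 2            # 176,400 bytes per second
track_bytes = bytes_per_second * 5 * 60      # a five-minute track
print(track_bytes / 1_000_000)               # ~52.9, about the 50 MB per track cited

# Chips needed to hold one ~520 MB album on 128,000-byte chips
chips = 520_000_000 / 128_000                # ~4,063 chips
volume_cubic_feet = chips * 0.5 / 1728       # 0.5 in^3 per chip, 1,728 in^3 per ft^3
print(round(volume_cubic_feet, 2))           # ~1.18, "about a cubic foot of chips"
```

So “about a cubic foot of chips” per album was right on the money, and Moore’s law took care of the rest.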
It seems like you hit it at the right time. The synthesizer had become a thing. The sampler had become possible in the late ‘80s to early ‘90s. Music was changing from a thing that was done with instruments to a thing that was done with studios. How did you see that play out? Did you play instruments before that?

John: Yeah. I grew up playing guitar. I was lucky my parents saw that I could pick up a guitar and play it. They were wonderful, like, “Let’s get some lessons.” At the lessons, I had a wonderful, virtuosic jazz-guitar-playing teacher, and I was like, “Can you teach me how to play this Eddie Van Halen song? That’s cool, but can you teach me Foxy Lady before I leave today?” Unfortunately, as a teenager, you just want to play what you know. I got enough theory to make it all work. I was not the best music student, and then later, I had to go back and learn a lot more theory.

Pablos: At least you had a guitar. I had a violin and a clarinet, and you couldn’t play anything I wanted on those.

John: I was lucky in the late ‘70s to early ‘80s, playing guitar, so I had fuzz boxes. I went home and had the wah pedals. My poor parents had to deal with it. I would go bus tables at my dad’s restaurants and he’s like, “What are you going to do with the money?” I’m like, “I’m going to buy a new four-track recorder.” He’s like, “What will you do with it?” They were supportive, like, “If that’s what you think you want to do.”

Pablos: That’s how my parents felt about the computer.

John: Sooner or later, I had a mini hacked-up recording studio in my basement. I had a little four-track cassette machine, and they had FSK code, which meant you could print that on track four. You could run the whole MIDI rig, which now became a whole virtual 99 tracks, whatever you wanted to do. That was the first thing I hacked together.
I was like, “I can take the FSK code out of the drum machine, and then the drum machine becomes the slave to the tape machine.” All of a sudden, it was functioning like a little mini big studio. When I went in and tried to intern with the studios, they’re like, “How the heck are you doing these demos on these little cassettes?” I’m like, “I had this big virtual rig.” They’re like, “You have to come in here. We’re going to have you be the intern around here, because you’ve got some wack ideas about how this all fits together.”

Pablos: That was the thing. At that moment in time, anybody who knew how to work a computer was told, “Go work the computers,” and then you got to play with all those toys.

John: Samplers were the mind-blower. All of a sudden, I got an Emax II, which was the bomb, because now you could skip the recording deck altogether with a sequencer. You could make anything happen you want at all times. I remember going in and doing commercial spots for the local radio stations and having everything sampled on a keyboard, so they’d be like, “This is so-and-so.” As you hit each key, you could get everything you need to happen on cue. I had a whole cue system. This was before Pro Tools.

I was hacking a sampler to become the digital recorder that would show up a couple of years later in the game. Necessity is the mother of invention, like, “Why do I want to be rewinding a tape and all that stuff?” I’m like, “I need everything in button pushes, so I can cue things exactly when I want them.” Knowing my outcome, I was grabbing tools and making stuff happen, and then you saw the tools. One of the beautiful things is that musicians are usually tech-savvy.

Since the beginning of time, I have a feeling that science, math, and music are more related than we think. Look at the last three generations of computer software and things that have been developed. Most of them were made by musicians, because they’re pretty much the tools that they always wished they had.
As coders go in, they’re like, “My coding project is going to be this. I never got a chance to have this, so I’m going to go make it.” They’re making all these killer tools. Right now, if you’re a musician, you have the cream-of-the-crop set of tools like never before, because a couple of generations of dudes have been like, “Why didn’t they make this? I’m going to make this. I’m going to code this and make this happen.”

Pablos: That’s what excites me about it. I look at it and see all these tools, and they all work together. I remember when I first saw Reason. It was such a genius UI to have every one of those things you used to have to plug into a rack. You couldn’t even afford them all anyway, and then they’re all there. You could just plug them together with cables on the screen. That was genius. I loved it. It made me want to do what you do at that time. That’s the origin. It starts with doing ads for radio, and then you keep going and get into making music, films, and everything you can do with music, as far as I can tell.

John: I graduated from a 4-track to an 8-track, and then I had a little mini studio. Of course, sometimes you have these band rehearsal places. You’ll get a cube there and you’re paying a few hundred dollars a month for it. There are sixteen bands on the floor. They’re all popping their heads in and going, “What’s going on? You can make demos?” All of a sudden, I was making heavy metal records for a summer, because all the bands up there were doing blast beats.

You do a set of demos there. I was connected to 2 or 3 different studios in Michigan, which is an interesting story, and 4 or 5 different home studios, and we had our own little community pod of producers. There’s Ben Grosse, who later becomes a big producer out here. Mark Bass and Jeff Bass are listening to the radio one night in Michigan and they’re like, “This guy can rap his ass off. We should go record him. We’ll pick him up, John.” It’s Marshall Mathers.
Pablos: It’s Eminem.

John: I did some of Marshall’s first recordings in my project studio. I went to the trailer park and picked him up from his mom’s. Everything you see in 8 Mile is a little bit different version of him. I’ve got a video of him somewhere in high school, rapping fast, with some singer singing these big hooks behind him. He was 14 or 15 years old at the time. My friends had the vision to be like, “This is something special,” and they stayed with it. In the late ‘90s, they worked with him for about seven years. The demo got heard by the right people and all of a sudden, it was all going off for Eminem.

They heard no for years, like, “This isn’t going to happen. He’s a white rapper.” They were diligent about it. You see how the technical chops and everything have to come together, but people still need to see the vision and still need to see where, creatively, things can happen that aren’t happening yet. Marshall would show up at my studio and he had two huge notebooks full of songs. At fifteen, he had 200 or 300 songs in his notebooks. The keyword is prolific. I’m like, “I learned some people are super prolific about what’s going on.”

Pablos: A lot of times, it’s obfuscated. Nobody else knows that. Nobody else saw those notebooks. This is 30 years into his career or whatever. He probably has a lot more of those fat notebooks, because that’s what it takes. It takes 300 attempts to get one good track. This is one thing I’m curious about, because you’re right on the border of what would be considered a creative job. It’s about creating music. The creative element is part of what appeals to you. That’s why you want to produce and not just be doing the engineering work for someone else’s vision. There’s a creative aspect to it that appeals to you, but it’s all super technical. You’re surrounded by almost as much computer gear as I am. Your speakers are bigger than mine.
The point is, for a lot of people in a role like yours, you have to be a businessman, and you’re a contractor. You’ve probably never been an employee. You’ve mostly been doing contracting in your career. You’re unemployable. There is no job you can get by filling out an application. I don’t even have a resume. I know I’m unemployable. There’s no job for me, but it’s the same for you. You have to learn about the business aspect of things. You have to learn all this technical crap. You have to figure out how to finance all this equipment or get access to it. In some sense, you have to market yourself, because you’ve got to get the next gig or job. It’s all that stuff you have to learn, and skills you’ve got to develop as substrate, so at the end of the day you’re able to say, “Let’s go make some music.” You partly take all that for granted, but there’s probably some point in your career at which you felt like you got to focus on the creativity more.

For me, there were times when I definitely got to focus on the creativity more, and then over time, I got more responsibilities, obligations, emails, and other crap. Sometimes, to the point where I’m like, “I’m not doing anything that I need to be doing other than paying bills.” I just get the money, and then I spend the money. There’s no point to my existence. For you, in this industry, are there times when you feel like you get to focus on being creative? What percentage of it is getting to be creative, for a guy like you?

John: There’s a proportionality to scale, too. If you’re in a bigger play, a platform or company play, then you’re going to have X amount of assistants. You’re writing the cookbooks, like, “Here’s how the theory works for focus music, and then here’s what we’re going to do. Here’s how we’re going to make it.” One analogy is that I’m the managing editor of a magazine.
I’m going to make the prototype of the magazine for the music channel, and then I’m going to hire some peeps that understand how that’s made, because I don’t want to granularly make every note.

I want people to have a good reference for what we’re going to be making, and the “why” becomes very important. Why are we making this? How are we making it? If you can instill that in the assistants, then that can scale, but you are doing less of it. Somehow, that’s strangely not fulfilling. When you’re a creative in this biz, like record producers are, you’re pretty auteurist, a little bit control-freaky, and a little OCD. When we see a vision, we want to do it all ourselves. It’s interesting to get into the software game, where now it becomes MVP and you have to let go of all that. It’s like Will and I at Focus@Will. We come from a music background. You get one chance to play the hit record for the guy at the label who’s going to give you a couple of million dollars to develop the band.

You become like, “It’s got to be perfect for anybody who hears it,” but then in software, it’s like, “If you’ve got an idea, put it out, and then you get feedback and you iterate.” It’s hard to go from your perfectionist plan of being a record producer to, “Just get it out. Get it good enough.” I’m like, “That’s not good enough for anybody to hear.” They go, “It’s way good enough to get some feedback from our customers,” and then we’ll keep tuning it up. Those two worlds were very far apart. Now I’m realizing, “That would be a great book for somebody to write: the edge of perfectionism versus first MVP.” Those two concepts. If it’s a lever, how far are you shifting it toward MVP? How far are you shifting it toward perfectionism?

Pablos: This is the same fundamental lesson that I’m always trying to beat CEOs over the head with, which is, “Here’s why you suck at innovation. No one hired you to innovate.
You’re hired to do the exact same thing you did last year, a little bit faster, cheaper, and better.”

John: It’s 10% better.

Pablos: Even 1%, if you’re lucky. They’re about stability and predictable results. When you’re doing new stuff, you don’t know what’s going to work. By definition, you have to discover what’s going to work, and you’ve got to try a lot of things. You’ve got to get a lot of shots on goal and plan on missing most of them. This is even bigger than what you described. This is the fundamental reason why software is eating the world. The reason Silicon Valley has been able to take over many industries is not because we’re any good at any of them or understand them better, or whatever. We’re good at rapid iteration, and we got that from software development. We’ll launch it. All afternoon, people are pissed off and emailing me about why it sucks. I’m like, “No problem. I’m going to make another version and launch it before I go to bed.” For more than fifteen years, we’ve been doing what we call rapid iteration: release cycles that are 4 or 5 or 6 versions a day. That’s your mobile apps and your web apps. It used to be eighteen months when it was on a floppy disk, shrink-wrapped.

John: With disks of Microsoft stuff coming in a box, they couldn’t do that every day.

Pablos: Now, we have what’s called continuous deployment. If you’re at Facebook or Google or any of the major, or even smaller, web apps, you can build a feature, launch it to 1,000 users, and A/B test the shit out of them to see what works and what doesn’t. If it works, then you give it to 10,000 users. If that doesn’t break, you give it to 100,000 users. If that doesn’t break, you give it to them all. I’m making up the numbers, but roughly, that’s the idea. That’s for every feature. You and I have different versions of Facebook on our phones, because we’re both being A/B tested on different features we don’t even know exist. That’s how it works. You don’t remember ever installing a new version of Facebook.
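As an aside, the staged rollout Pablos describes can be sketched in a few lines of Python. This is a hypothetical illustration, not any company’s actual system; the feature name, cohort sizes, `measure` telemetry stub, and `healthy` guardrail thresholds are all made-up placeholders:

```python
def healthy(metrics: dict) -> bool:
    """Placeholder guardrail: halt the rollout if errors or latency regress."""
    return metrics["error_rate"] < 0.01 and metrics["p95_latency_ms"] < 300

def measure(feature: str, cohort: list[str]) -> dict:
    """Stand-in for real telemetry; here we just simulate good numbers."""
    return {"error_rate": 0.001, "p95_latency_ms": 120}

def staged_rollout(feature: str, users: list[str]) -> str:
    """Expose a feature to progressively larger cohorts, stopping on regression."""
    for cohort_size in (1_000, 10_000, 100_000, len(users)):
        cohort = users[:cohort_size]          # enable the feature for this cohort
        metrics = measure(feature, cohort)    # collect A/B metrics vs. control
        if not healthy(metrics):
            return f"{feature}: rolled back at {cohort_size} users"
    return f"{feature}: fully deployed"

print(staged_rollout("new_feed_ranker", [f"user{i}" for i in range(200_000)]))
# prints "new_feed_ranker: fully deployed"
```

The point of the structure is exactly what he says next: the upgrade is invisible, because each cohort just quietly gets the flag flipped server-side.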
Not unless it’s been a couple of years. That’s because it’s constantly upgrading. The whole reason I’m describing this is that the winners in almost every industry are the ones who figure that out and get on board with that process. Rapid iteration, failing fast, works better than any amount of OCD, any amount of wisdom from on high, and any amount of prior success. You can probably see, knowing these guys, the producers with OCD, who’s part of the future and who’s not, because the ones that are hanging on kicking and screaming, trying to do it the old way, we don’t have space for them. It’s hard for them.

John: It’s an interesting paradigm. Being on both sides, one thing that I’m lucky enough to be able to do is integrate the two: learning from software to iterate quicker. I’d like to say that at Brain Music Labs, we’re taking the design thinking approach to even music production, which is a little bit new, because music production is generally an artsy approach, driven by intuition, by guys who have music theory. I’m bringing in scientific frameworks, like, if you want to de-stress people, one of the big keys is the recipe books from guys who’ve done ten years of research on it. You do this with the tempo. You do this with the contour, melody, sound design, attacks of the instruments, and the timbres.

It makes it super mellow, so it doesn’t stress anybody out, but it puts them in a state of, “Cool. There’s the cookbook.” Now as a producer, I know that I can take that framework and design thinking approach to it, like, “What’s the MVP of that going to sound like, and how many people am I going to test it on?” I’m going to get it that much better, so by V3, this is fucking solid music that is going to de-stress somebody. If I was intuitively doing that, I would have said, “It goes like this. There’s my CD.
Check it out on Spotify.” A little bit more of a design-thinking, iterative approach is the real next wave of music-for-purpose and how it works.

Pablos: Let’s back up, because we’ve got three projects that I want to cover here. Focus@Will was the first big one, where you were able to go in recognizing that a man-made playlist on Spotify, or an automatically generated playlist on Spotify, had some point of diminishing returns in its ability to help you. If you’re trying to focus, de-stress, get out of that anxious mindset, or whatever you could use music for, you know that you can do it and it can help. The one-size-fits-all thing that’s happening in Pandora and Spotify and whatever, with these recommendation engines, wasn’t able to get as far as what you were able to do with a more deliberate product design. Can you describe Focus@Will and how it works?

John: My best friend, Will Henshall, is an amazing, number-one-hit songwriter.

Pablos: What’s a cool song that he wrote?

John: He did I’ve Been Thinking About You from the ‘90s, which is the apex iconic song for many people who met and fell in love. He gets love notes from people like, “We fell in love with your song.” What I love about Will, when I met him, is he has this big vision. We’re riffing, “What do we do with our strange skillsets?” The music industry is in a weird place in 2009 and 2010. We come up with a concept: “Can we do music that helps you focus better or work better?”

We did deep-dive research on platforms, anything you can think of, all the way back to Muzak. We’re like, “Why did many people do this before? What did Muzak do? What were their tricks?” Our investment pool brought us to an amazing scientist, Evian Gordon, who’s done all this integrative neuroscience.
Evian had done some interesting deep dives in the biggest brain database in the world about how, energetically, if you can personify people and find out where people are energetically, then you know how much energy their non-conscious mind needs so that they're not looking to distract themselves.

Pablos: What's an example of how you classify someone energetically?

John: Think of a spectrum from left to right. On the left, you have the more low-energy people, and on the right, the more high-energy people. You know who these people are in your sphere. Pablos would be a little on the right side of the spectrum. When you walked in, I had yoga music on and you were like, "I'm not trying to calm down." This is a total classic case. Your resonating frequency is probably a little bit more up-tempo. You could take a few questions borrowed from the Big Five personality test and figure out where you are on the energy spectrum. If we build you a playlist, it's not like, "John is going to make a playlist for you," but scientifically, energetically, we're going to play A, B, C, D, E, F, G. Those are going to have different energy levels for you. What you're doing is keeping your non-conscious mind from getting bored, so we can give you enough energy to keep you engaged but not too much to distract you. Once we know that, we can build a 120-minute list and you're rocking. By the time you come out of that, you're like, "I've been working for two hours." The normal attention span is about eighteen minutes. We've delivered tremendous value for everyone by guiding you energetically through maybe a genre that you'd like, but with a much more energetic play, personalized and customized to you rather than random. That's the Focus@Will concept, and there are 7 or 8 different major genres like classical, dance music, and chill-out music.

Pablos: You can make it work with any genre, with any of those?

John: Yes.
We figure out who you are, and then what kind of music you like, and we'll give you the best version of something that's more scientifically designed.

Pablos: Is there speed metal for yoga or something like that?

John: We have an ADHD channel. It's 180 beats per minute.

Science, math, and music are more related than you might think.

Pablos: I was going to say you should call it Apperall or something. It's like Adderall music. I'm probably not the target customer. I'm blessed with not having any real anxiety. I don't have a problem with focus or being calm or any of those things. I'm not specifically trying to be calm, but I could do it if I wanted. I'm lucky. I have a deep appreciation for that. I dated someone who struggled with anxiety a lot, from a lot of different angles and directions. A lot of things could be tough for her, and she had a lot of mechanisms she'd worked out to manage that. I started to appreciate what it would take to live a life where you have those issues and have to do something about it. This is going to get me in a little trouble, but it seems to me like a lot of people are trying to live this well-balanced lifestyle, and their idea of balance seems to be finding the center and trying to stay there. If you think about balance, that's like trying to balance a pencil on your fingertip. You can do it, but it's precarious. Whereas, if you try to balance a barbell, it's easy and you could do it forever, because the weight is at the extremes. Balance through extremes is my idea of how to live a balanced life. I want to party all night and sleep all day. There could be variants of that.

John: You should have been in the music industry.

Pablos: It's not that I want to party all night, but I want to experience extremes, and that's working for me in a way. We're telling people the wrong thing by saying, "Do yoga, eat organic, try to meditate for sixteen hours a day," and all this stuff when we should have something like the antidote to yoga.
There should be a class where you learn to fidget, hold multiple thoughts in your head at once, have a jackhammer going and some techno music, a baby crying, and you're trying to do the SAT. To me, that's a valuable skill to learn. I'm trying to do a different thing. I'm curious, because I used the Focus@Will app for a bit, but I don't think I did it right. Maybe partly because I'm not looking for the thing it was built for. The same toolkit you're using to help someone focus, calm down, and not be anxious, you ought to be able to flip, because I know when I'm sitting down, I want to put on The Crystal Method. I want to feel the adrenaline in my headphones. That's what I'm after, and you guys could do that too. Is there a market for that?

John: There have been almost two million people through the Focus@Will system. We've got many users who have been around for years and bought lifetime accounts. We have a constant feedback loop where we're always iterating and finding out what works best for people through surveys and interviews. Energetically, there are different ways to serve those things, for sure. I want to point out that focus is definitely a little bit different. They're related, but focus is its own game plan. It's a little bit different than stress reduction, anxiety, or things like that. Those show up in other projects.

Pablos: Focus is about focus.

John: I would say that our big game has been understanding people a little bit more. It's personifying, modifying, customizing, having them get personalized music by taking the onboarding and finding out, on that spectrum, how much energy they need to be fed non-consciously to get them to stay in the game for 60 to 120 minutes. That's the big win. If you can get task closure and task persistence for one hundred twenty minutes, that means you can get the spreadsheet done, stuff you don't like. If you're in creative land and you want to pump it up, that's a little bit of a different nuance; you can do that also.
For you, we should do a fantastic custom channel. We get your references for what works and start building a list around that. That's the work that I want to do with Focus: work with some thought leaders and even get more nuanced into, "Here's someone who runs creative innovation; they're going to be a little bit on the outside."

Pablos: I'm not in the box.

John: There's a spectrum and you're a little off on this side. How do we do that? Those are a little bit more special use cases. I want to do that, and it's doable. Our system has the technology to do it, but we haven't explored those nuances because we've been much more about the bell-curve blanket approach and how we can help the most people.

Pablos: For most people, it's going to be amazing, better than a random Spotify focus playlist by a lot.

John: It's hard to get that messaging across, believe it or not. People are like, "I have my Spotify account." That's going to be okay, but then that one vocal shows up in the playlist, because somebody curated it because they liked it, and it's good, but you're out of the game.

Pablos: You're singing Mariah Carey when you're supposed to be getting your taxes done. Your life is going to be so much better. You're going to do taxes for hours on end without any interruption, or hopefully something better. Moving past that, this was a couple of years ago when you guys built that.

John: It was in 2010 when we started Focus@Will. For the last few years, it's been a live product for the masses.

Pablos: What's happening now? With Evian, you went on to do additional projects.

John: Evian is the Founder and the Chief Medical Officer at Total Brain. Their mission is to help people with mental wellness. They want to be the mental wellness app. You go there, take an assessment, and they can find out all the nuances of your mental health game. Maybe you tend to be a little bit on the depression side, or maybe a little bit anxious.
They have daily routines you can do, made by experts, that will help you work through all those issues and give you support. In that, Evian told me, "We know music is the biggest brain stimulation on the planet, because you can zap it with beeps and blips, but when you put your earbuds in and you're feeding stuff to your brain, there's nothing more powerful." He challenged me: "Could we find some frameworks that we know would reduce anxiety and stress?" I love doing research, especially when you have someone like that guiding you, because otherwise you get lost in all the rabbit holes. He and a couple of his science colleagues threw me about a hundred white papers in the summer of 2020. The beauty of COVID: normally, you'd be like, "I want to do that, guys, but I'm busy."

Music is the biggest brain stimulation on the planet.

Pablos: I'm not busy and I have a Focus app.

John: "I'm going to read all these white papers." Lo and behold, I'm sifting through and I find one that makes you knock your forehead and go, "This seems so simple. Can this be real?" There was a big PR push around this track that Marconi Union did called Weightless back in 2013, and it tested 60% to 70% better than all the other tracks for reducing stress. There were some sound healers who worked with the band to do this, and I read some papers where people picked it apart. I did my own dissection of it. I'm like, "Here's a model with 50 million YouTube views. I'm going to try that one as a first MVP." First iteration. I dissect that song, and it's in the key of D. It has an interesting implied heartbeat, sure enough. It's not a sample of a heartbeat, but it's in there, and it goes from 67, maybe 70 beats per minute, down to 58 or 60.

Pablos: Over the course of the track?

John: Their original was sixteen minutes, but someone did an edit, and that was the one that blew up on YouTube.
What’s really fucking interesting is that it’s the simplest thing but we don’t realize how the power of tempo entrains humans. DNA wise, historically, millions of years beating a drum around the fire. We are plasma antennas and when something wants to give us a pulse, we will lead and lock to it. This is why binaural beats work for EEGs but even more importantly, the simplest thing, what I’ve learned from reading all these papers if you want to drive something, the first thing you want to do is control the tempo. In a world where we have computer grids, pop music is all 180 beats per minute for a pop song. It never moves. It’s on a grid. We don’t even think about altering the tempos anymore because that was old school musicians when they didn’t have click tracks.Pablos: Our software doesn’t lend itself to that.John: No. It does good because, in film scoring, they had to make these elaborate, beautiful ways. It’s hacking some things again. I’m going to take a new timeline in my Logic rig. I’m going to make this thing go from 60 to 50. I’m going to put the implied heartbeat in as they did. I’m going to do a couple of other things. I don’t want to say sad, but theirs was a little bit on the melancholy, introspective side. What happens if I dress that up and make it from film scoring. I can do a series of chords or contour the melodies, which are maybe for having stress, what’s going to make you feel even cozier like you’re lying on that super cozy blanket?Your friends come over and they’re stressed out, you want to make them feel comfortable. How could I do that musically on top of that framework? I did a mini library and I pitched it to the Total Brain team. I was like, “I’ve got the science framework, I’ve got a creative approach, almost if I was going to do a film score and we got something interesting. What if we made a mini library for anti-anxiety that has these parts in it?” They’re like, “Let’s try it.” It took about 1.5 months. 
We composed about twenty tracks from scratch and put them in their system. Using a 3D mic, I went out and recorded nature sounds in 3D scapes, so you have the visceral, real feel of being there.

Pablos: I have questions about that too. First of all, you're composing original tracks that have this mellowing property.

John: Yes. A great way to put it.

Pablos: One of the key things going on is that you're slowly reducing the tempo over time. You've got something in there that kinda maps to a heartbeat, so viscerally your mind and body can latch on to that, and you guys have been able to have people listen to this and prove that it slows their heartbeat.

John: Here's the beautiful part of the big play of working with a mental wellness company like Total Brain: there are about a million people on their systems, so the iteration happens.

Pablos: Total Brain has a million, or a bunch of, people.

John: I might be exaggerating a little bit, but it's a lot. It's way up there. They have some large B2B clients. The beauty is the data comes in fast, and the inside team there will send me metrics from the back end. Our engagement is up 35% to 40%. People are finding much more engagement, they're twice as engaged, so the music, we find, is this beautiful, sticky gateway to get people to build a better habit of using something. Even if you have this great mental wellness play, you have to have that comfortable thing that gets people to come. It's like, "I had an experience. I liked it. I'm going to come back and keep doing this." I'm taking myself out of the mix. Forget me doing it. Anybody doing music with a good, thoughtful framework will get better results than guessing.

Pablos: It works for a lot of other things too.

John: For me, I get to play three different roles. I get to play scientist and learn it. I get to make it as a creative, and I get to play with some people who are consistent and have big platforms to test it on.
Those three things become important, because now I feel I have a much bigger impact with the music.

Pablos: A lot of people don't understand that that's the context they need to get into. They have 1 or 2 of those things, but not all three, and you're probably not going to be as effective. This could be about anything, not only making music. You need to be in a position where you're getting a feedback loop going on whether your stuff is working or not. A lot of times, the feedback loop that people are using, "People said they liked it," isn't as good as the feedback loop of a shit ton of data coming from a bunch of users, where you see what they do voluntarily. First of all, let's describe binaural beats, because there's hype around them. I don't think people understand what's going on a lot of the time. Can you take a stab at explaining that?

John: I'll do two parts. I'll give you the basics of how they work, and a little bit of my intuition on why they're effective. I haven't quite figured that out, but I have a couple of hunches. Binaural beats are a phenomenon. It works like this: if I put a pitch in your left ear and slightly change it in your right ear, your brain is like, "What's going on? These are two funky, different pitches. What's up? I'm going to try to make some sense of this." What happens is, it blends the two, and by doing that, it makes a ghost beat, called the binaural beat, that you hear in the middle of your head. As I vary those pitches, I can make that beat pulse slower or faster. The gentleman who discovered this thought, "This could be the thing that syncs both hemispheres of the brain. This could be the way we figure out how humans put it all together." There's been a lot of hype around it, but there hasn't been a great granular test. A lot is going on now, so we're going to see a lot of white papers on this in 2021 or 2022.
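The mechanics John describes, one pitch in the left ear and a slightly detuned copy in the right, are simple enough to synthesize directly. Here's a minimal sketch with NumPy; the carrier and beat frequencies are illustrative choices, not values from any of the products discussed.

```python
import numpy as np

SR = 44_100  # CD-quality sample rate

def binaural_beat(carrier_hz=200.0, beat_hz=10.0, seconds=5.0, sr=SR):
    """Left ear gets the carrier; right ear gets a copy detuned by beat_hz.
    Neither channel contains a 10 Hz pulse. The brain blends the two and
    perceives the difference as the 'ghost beat' in the middle of the head,
    which is why this only works over headphones."""
    t = np.arange(int(seconds * sr)) / sr
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)  # shape: (samples, 2) stereo

stereo = binaural_beat()
```

Written to a stereo file, the 200 Hz and 210 Hz tones would be heard as a single tone with a 10 Hz pulsing quality, even though neither channel pulses on its own.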
My take on it, and I've had my own experiences: because the brain is trying to put the information together, it is using both parts of the brain, both sides. For me, when it does that, I get less monkey chatter. In other words, if I have the binaural beat on, it is fudging my brain around enough that I don't have the 30th and 35th thought I normally do. Maybe I only have 1 or 2 going on on the side, but it seems to reduce a little bit, because it's making the brain go, "We've got to figure out what's going on with these two pitches." I know that's a crude way to put it, but it's fun to do that for podcasts, to get people interested in thinking, "The left and right, maybe there's something that's going to help me with my attention a little bit more."

Pablos: One of the things that I've seen people misunderstand, and I know people swear by it: some of them aren't using headphones.

John: If you're not using headphones, you're not getting the binaural method, but you can use isochronic tones. Let's call the binaural a Ghost Beat. I like the Ghost Beat concept because you're thinking, "My brain is putting together a left and a right signal and making a ghost signal that's pumping in the middle." The other way to do that is with an isochronic tone, and that's a timed pulse. I can control that pulse, and sympathetically, we're in touch with the tempos of music. The concept was, they looked at, "Pablos is now reading a book and he's active. Let's look at his EEG. His EEG is about 12 hertz. He's in 14 hertz. He's cruising along. He's got a lot of good stuff going on." Let's say Pablos shows up sleepy one day. What if we take that generator over there, put it through his headphones, and put a fourteen-hertz pulse in there? Would that make Pablos's brain start to wake up a little bit more, like a cup of coffee?
Two or three studies show that that's an effective way to move your EEG: start to regulate what's coming in through the ears.

Pablos: It sounds like, if I sit here and make motorcycle sounds, I'll work hard but I haven't gone anywhere. I don't know if I should buy it. Just because it shows up in the output doesn't mean putting it in is going to get the same result. That's what I feel is going on with the way people describe binaural beats.

John: I like that analogy a lot, the motorcycle sounds. No one has put it back together, input to output, where we know for sure that it does that. It has been shown to reduce stress: if we take the EEG reading of someone who's listening to a binaural beat and we start to lower it down through delta to sleep tones, it's effective in stress reduction, but no one has shown a banging, super-explicit result that we can get people into these other states. We think there's something there, but we haven't totally got the backing research to make sure that we can do everything we want. In terms of the main states, there have been enough studies showing 8 and 10 hertz is a good place to get people into a relaxed state, and sleep all the way down at 4-hertz delta. It's effective in getting people there, but it's the upper range where there's a lot of soft science. It's like, "You do this and it makes you see God through your third eye." That's cool, but we haven't quite proven that yet.

Pablos: You're saying we have some cases where binaural beats are effective, maybe on the low end, helping people get to sleep. Is that right?

John: Yes. A little bit more on bringing things down than going up, in the papers that I've read.
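The isochronic tone John contrasts with binaural beats is even simpler: the pulse is baked into the signal itself, a tone switched on and off at the target rate, so it survives speakers and doesn't depend on the headphone trick. A minimal sketch, again with illustrative frequencies of my choosing:

```python
import numpy as np

SR = 44_100

def isochronic_tone(pitch_hz=440.0, pulse_hz=14.0, seconds=5.0, sr=SR):
    """A single tone gated on and off pulse_hz times per second.
    Unlike a binaural beat, the 14 Hz pulse (the 'cup of coffee' rate
    from the conversation) is physically present in the audio, so no
    headphones are required."""
    t = np.arange(int(seconds * sr)) / sr
    tone = np.sin(2 * np.pi * pitch_hz * t)
    # square on/off gate at the pulse rate; real products would soften
    # the edges to avoid clicks
    gate = (np.sin(2 * np.pi * pulse_hz * t) > 0).astype(float)
    return tone * gate

sig = isochronic_tone()
```

The hard square gate is the crudest possible version; a production track would ramp the gate smoothly, or bury the pulse in a coffee-shop or water bed the way the cafe channel described below does.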
That’s by some of the research I’ve done and I’ve got some criteria where I’ve got to get back to the science team and go like, “This is heavy enough to start doing some tests.” They usually say, “Not quite,” but for a couple of those, they’ll say, “There’s enough research that shows that we can get people into a less anxious state, definitely a sleepy state.”Pablos: Certainly harmless as far as we know. It’s way better than taking meds.Pablos: It’s comparatively harmless, at least. Focus@Will isn’t playing with that but the stuff you’ve done with Total Brain does.John: For Focus@Will, I’ve used entrainment, we have our cafe channel so let’s say that you’re stuck at home, but you’re an extrovert and you want to feel those people around you like you’re sitting in a coffee shop. Our coffee shop, our water channels, and some of the things are laced with binaural beats that’ll keep you in that little extra awake state.Pablos: Is there any way for people to get access to the Total Brain project? You can go to TotalBrain.com and get it. They have a little bit more of a B2B play but they do take customers in. You have to go there and put your email in.Pablos: I heard you describe having figured out that you could optimally record a stream from 40 feet away. What’s the idea there?John: On the path of finding out better stress reduction, there are apps and platforms that do nature sounds. The first thing you do is a marketing analysis. What exists in the market? Are they doing well with a good framework? Can we do something even a little bit better and cooler? My angle on the nature sounds was a lot of these are licensed from Hollywood sound libraries and stuff. It’s better than nothing but I live here in Marina del Rey. If I go sit by the ocean, it’s a little bit different than putting on the headphones and hearing what’s there. 
Could I bridge that gap more? I looked into it, and now on the market are some 3D microphones, and there are four different ways you can record, ambisonics and binaural among them. Recording in binaural is different from stereo. We can get into the deep science of it, but I'll summarize it like this: it better simulates how your head hears things, the distance between your ears, the thickness of your skull, the whole thing. They've dished it up into a little circuit in some of these microphones, and it's plug-and-play. You bring it out and it's going to feel much more immersive, like you're there. They need these things because they hook them to 360 cameras for virtual reality games and such. As that market explodes, we get to use the research on the audio side, and now I can bring a microphone like the Zoom H3-VR; it's super cool and easy to use. I bring this to the beach along with a couple of other recording instruments and microphones. I have some expensive mics. Lo and behold, for some reason, this thing makes you feel like you're sitting there on the ocean. The game is, if you're going to de-stress, or maybe for focus, you might want the exhilaration of being right in the waves, 8 feet from the shoreline. For de-stress, I found that as I walked back ten paces at a time, placed the microphone, and recorded for 15 to 20 minutes in all different places on the beach, there's a sweet spot about 40 feet back, which is normally where people sit when having a conversation with friends, a little bit away from the waves. When you put that on in the background, you feel like you're in this spot, which feels real and comfortable and a little bit different from a regular recording. I started to do bird sounds in the morning and anything else I could find nature-wise.
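Part of what changes between the 8-foot and 40-foot recording spots is simple level falloff. As a rough idealization (treating the surf as a point source, which real, spread-out surf is not, so the true falloff is gentler), each doubling of distance costs about 6 dB:

```python
import math

def level_drop_db(near_ft, far_ft):
    """Free-field inverse-square falloff between two listening distances.
    20 * log10(ratio) gives the drop in sound pressure level; a point-source
    idealization only, since a whole shoreline behaves more like a line source."""
    return 20 * math.log10(far_ft / near_ft)

drop = level_drop_db(8, 40)  # shoreline spot vs. the ~40-foot sweet spot
# roughly 14 dB quieter at 40 feet, before any of the proximity cues
# John attributes to the 'safety bubble'
```

The point of the comparison is that John's effect isn't just this gain difference (which you could fake with a volume knob, as Pablos suggests next); the spatial cues change too.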
I’m going to build a whole library of 3D relaxation scapes.Pablos: When I think about it from a physics perspective, if I’m on the beach and I get closer to the shore, I’ve got sound coming from closer to 180-degree beam spread or whatever you call that. Whereas if I back off 40 feet, I’m getting more of a 45-degree angle on the ocean. The beach can be wide so it might be 120 degrees spread or something but it’s still a guess at 40 feet away. There’s enough of it, that it makes a difference in the audio. Is it that? because if I had a stereo microphone, I’m up on the beach 40 feet away recording, and I turn it up a little bit, I’ve got roughly the same thing.John: Close, but neurologically our ears are real 360 radars. When you think of our DNA in thousands of years, “Can the tiger eat me? Can something jump 20 feet and get me?” We’re extra careful and super aware of anything within our 20 to 30-foot radius. As you get close to that shore, the bubbles and even the effervescence of the waves, you’re starting to look to your left and you’re like, “Do I need to react? Is something going on?”As you go a little further away, and you get into I call it a safety bubble. There’s about a 30 feet proximity and what I’m finding is stress even some focus stuff when as soon as you move around the area of the bubble, you get past the, “I don’t have to react.” You come down a little bit. Your vigilance comes down a little bit and you can go, “That’s a nice background sound but I don’t think I have to react to anything.” The other way it’s like, “Is the wave going to hit me, and do I start swimming? Am I going to get wet? What’s going on here?”Our ears are real 360° radars. 
They drive so much of our behavior, but in a way that works so well that we take it for granted.

Pablos: What I was wondering, which you've answered: I was imagining, if you're trying to make a cool 3D experience, why don't you put one mic 40 feet from the river on one side, let's say, and another mic 40 feet out on the other side of the river, and make it feel like I'm in the fucking river? Why wouldn't you do that?

John: I love that. Let's try it.

Pablos: Maybe it's not calming. Probably better for my channel.

John: Maybe they would be good for focus. At Focus@Will, I'll take those three placements; that was for stress. If I was going to do a focus channel, I would put one in the effervescence, and that would be the high-energy version. The lower-energy version might be 30 to 40 feet away, and the lowest-energy version might be 60 or 70 feet, which is the background. Neurologically, think about how proximity is a much bigger play in our audio experience than we realize. That's why you're going to learn more from 3D shooter games. We all have that. That's part of the reason why they're addictive and experiential: you're feeling that. How can we take that and translate it to apps? Most apps now are still flat stereo imaging, but you're going to see everything start to develop that. At Brain Music Labs, we want to be one of those companies that help people with apps and platforms get those nuances into what they're doing and the results they're trying to get.

Pablos: This is what I don't get. If all I've got is an ear on each side of my head, can my stereo headphones do surround sound as well as my Dolby Atmos home theater?

John: Close. That's why, with ambisonics and binaural, there are different ways you can mix in post-production so that standard headphones get a lot more of the nuance of ambisonics and 3D spatial audio.

Pablos: What does ambisonic mean?

John: Ambisonic is a format, a way that the channels are mixed and set up.
As a post producer and mixer, you know you're mixing in ambisonic style, so there's a certain way you're going to use the panning devices and how that all filters down into stereo. Binaural is another way. You can mix for binaural, meaning you're going to take any sound source and mix it into something that's going to feel a lot more like how a standard head hears. Stereo has been the standard for years, but in the last few years these other conventions have come in, where I have options about how I want people to perceive and experience the sound. Ambisonics gives you the concept of a point in 3D space, or, if I'm in a game, it's tracking, it's head tracking.

Pablos: I remember being at Dolby and they showed me these tools for audio engineers, where there was a map of a theater on screen. You could go and put sounds wherever you wanted. You could put helicopters flying overhead, so the sound goes here. It's like an animation pathway for the sounds, and computers would turn that into whatever it needed to be. They have a theater where they can show all this stuff off. It's incredible. I don't understand it. I'm like, "I only have two ears. How do you make sounds come from on top of me and below me?"

John: You're asking a question, but you probably do understand. Using phase cancellation, you're looking at the real physics model of how the human head and ears hear something. You can take something, using the Doppler effect and noise cancellation, that one sound, and although it's coming through two earbuds, all the math that's going on makes it seem like there's a fly 2 feet in front of you, and all of a sudden it goes over your head and behind you. Nothing's changed, but they can make it happen, and it is like an animation path, so you'll see a joystick rather than a panning knob in those post-production tools.
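The "real physics model of how the human head hears" that John mentions comes down to cues like the interaural time difference (the far ear hears a source slightly later) and the interaural level difference (the head shadows the far ear). The sketch below is my own crude illustration of just those two cues, not what a real HRTF renderer or any Dolby tool does; real renderers also filter each frequency differently per direction, which is what makes above/below possible.

```python
import numpy as np

SR = 44_100
HEAD_RADIUS_M = 0.09    # rough half-width of a human head
SPEED_OF_SOUND = 343.0  # m/s

def pan_by_itd_ild(mono, azimuth_deg, sr=SR):
    """Crude binaural panner: delay and attenuate the far ear only.
    Positive azimuth puts the source to the listener's right."""
    az = np.radians(abs(azimuth_deg))
    # Woodworth-style approximation of the extra path around the head
    itd_s = (HEAD_RADIUS_M / SPEED_OF_SOUND) * (az + np.sin(az))
    delay = int(round(itd_s * sr))
    near = mono
    # far ear: same signal, delayed and head-shadowed (flat -3 dB here;
    # a real model shadows high frequencies much more than low ones)
    far = np.concatenate([np.zeros(delay), mono[: len(mono) - delay]]) * 0.7
    return (far, near) if azimuth_deg >= 0 else (near, far)

tone = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)
left, right = pan_by_itd_ild(tone, 60)  # source 60 degrees to the right
```

Even these two cues alone will pull a headphone image convincingly left or right; elevation and front/back need the per-frequency ear-shape filtering that full HRTF sets capture.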
The concept is, "Can we hack some of the cool Hollywood audio post-production tricks and game audio post-production tricks, and turn those into better experiences for people, in music for wellness or whatever we're trying to do?" I think the answer is yes. We're getting good results already.

Pablos: I don't know if you've talked about this much, but one of the big problems in virtual reality, and by extension augmented reality, the pivotal thing that's holding back the medium in some sense, is that we don't know how to tell stories in those environments yet. There are a lot of different ideas, and we're at the beginning, in some sense, of figuring out how to use it as a narrative storytelling environment. Think about a book: if I'm the author, I'm in total control of the narrative. I know exactly what I've revealed so far. I know what you know, what you've seen, what characters you've been introduced to, the whole history, everything. I've got control of that. For movies and TV series, it's the same thing. All along the way in a movie, the director gets to control exactly what you see. He's controlling the camera. In VR, the viewer controls the camera, not the director. The problem with that is, if I'm trying to tell a story in that environment, I put you in it, you're controlling the camera, and as the director, I can't be sure you're paying attention to the shit that's important, or to what's going on over here, outside your field of view. This is also true of video games to some extent. This is partly why video games have so far not been the best narrative storytelling environments, even though they're incredible environments: you can control everything, you can put in characters and sets, and it's low-cost to do it. The notion of an interactive story hasn't fully taken off, because one of the problems is how you solve that and make sure people don't miss something important.
It’s a computer so it can hang out, wait until you happen to notice that there’s a dead body on the sidewalk and go to the next part of the story. There are tricks like that. I remember hearing about some game developers getting fucking conniving about this. I don’t even know what the example is, maybe somebody can tell us because I don’t play these games, but I remember one example. You’re on the street, in a video game, and you’re looking down the street, but you’re supposed to be looking up at the UFO that’s about to land on you. They have a lot of tricks for getting you to look up and a lot of it is in audio, I don’t know what those are but one of the visual ones is subtly the street lamps would bend in and all point up. Your eye will be drawn to the vertex of where they’re pointing where that UFO would be at, so you look up even though you have no idea why you’re doing it. This is unfounded, but my suspicion is I’m equal to video to the visual input. The audio input is the frontier for where we’re going to find all those tricks. How do we subtly steer the attention of a human to where it needs to go? That’s where the answers are going to be, some of them anyway, for how we can take this new medium and turn it into a functional storytelling medium. Have you thought about that stuff at all? We’re going to need you to figure it out, John.John: There are people even way more brilliant than me that have got it going on. Traditionally, if you look at non-diegetic sounds in films, it’s coming from somewhere else often, it is preceding the visual that you’re going to see.Pablos: What does that mean? John: It means it’s not on camera. There are some slick directors from the ‘70s. If the scene is going to fade to black, right before it fades to black, you get an audio cue about what’s going to come up next. Chris Milk is a genius. Do you know Chris Milk?Pablos: No. John: I met him through Singularity University. You guys may have brushed. He has a fantastic company. 
In fact, I have his new video game here. Check it out. Supernatural is killer. Chris did a TED Talk where he talked about VR as the ultimate empathy machine: how do you not only tell a story but have people feel the story? When I met him, I googled him and checked him out. He puts it in a powerful way in his TED Talk. It’s awesome. I highly recommend everybody listening to the show to check it out. Chris uses that concept in his VR work; he did a couple of music videos for YouTube and some other people. In them, he guides you using our reaction to natural sounds. It might be a female voice that whispers off camera, or a baby sound. Even if you try not to, you would react and you would go there. It’s something in our DNA that we’re programmed to react to. He uses tricks to drive his narrative so you never miss the golden opportunity or crucial moment. Directors are already looking at ways to do that, and it’s good that you brought it up, because sound is one of the best ways to do it. It’s not visually obstructive and it’s a great way to lead people. Our radar, our ears, are driving so much of our behavior, but it works so well that we take it for granted.Pablos: Do you want to tell us a little bit about what Brain Music Labs is about? John: It’s the culmination of everything we’ve been speaking about. If you go to the website, you try and write these great sentences that tell your mission and everything else. The concept of Brain Music Labs is understanding, from all our experience working on all these different projects over the years, how the human experience is driven by the audio experience. We are tuned into our radar and our ears. How do we take all that experience we have and turn it into something that works for any application? You’re going to do an app and it’s going to be for a workout, how do we do that better? The underlying tenet is to find the framework.
We have such experience in Hollywood making all this stuff that we can make it a unique, sticky experience, and we can put it on a testing framework so all three work together. The design thinking process of going, “This is the result we’re looking for. Here’s how to make the coolest version of it that people are going to enjoy, and here’s how we’re going to test it over a lot of people and iterate a couple of times.” Maybe by V1, V2, and by V3, you’ve got something that’s unique, different, and better than you would have gotten going any other way.Brain Music Labs is tied into platforms, apps, and anybody who’s trying to make something that’s going to make a difference. Because we’re part of the Singularity ecosystem, we work with people who are helping to do something big, impactful, and cool for humanity. We want to get in there and muck it up with people who want to make something work. We’re sure we’ve got enough experience with platforms, apps, and games that we can bring something unique to the situation.Pablos: What’s a good example? Who would be a good person or project for you to collaborate with?John: I’m a ten-year meditator, I have studied in many different ways, and meditation apps are evolving in such beautiful ways. In VR, the beautiful thing is you put on goggles and you can control your outside experience. Think about people who are easily distracted: you get goggles on them and all of a sudden, they become Jedi yogis. I’m interested in working with VR and AR companies. VR, because there’s a big game to be won there in meditation. Maybe for kids who are having problems learning, is there a way we can feed them everything they need to learn through the goggles? Would that help kids with autism not be overloaded, and give them the bubble that they need to play in? I’m interested in that.
Wellness plays, because I have such a solid framework and a head start on mental health music for de-stressing. Anybody in your ecosystem who wants to play at that level can get hold of us. We’d love to help you out.Pablos: I’ve been fascinated by the potential. It seems that VR gives us the ability to get the brain engaged in a way that our other virtual mediums haven’t. I’ve seen situations where people are using VR to treat PTSD and things like that, where you can start to access the subconscious in a way that we don’t seem to be able to do with a screen as we know it. Probably nobody is working on the audio aspect of that yet. I don’t know. That would be cool. John: Could you make it so that maybe, even if you weren’t wearing the goggles, you could still get a lot of it, going back to High Fidelity? The beauty of that is maybe there’s training that happens full-blown, all senses, in VR. Then there are other ways where maybe you have your phone in your pocket, and you’re still able to have that beautiful audio experience. I would love to do that.Pablos: One of the things that is fascinating to me is that we have a lot of cases where people don’t seem to be trying hard to take these technologies and use them to solve good problems. We have this incredible toolkit around audio, all this research that these guys you’ve met have been doing. I know Dolby is doing a lot of research on audio and what they can do to make cooler experiences for people, for sure. I know that these tools are capable of helping us with a lot of the issues that people struggle with around focus, anxiety, feeling calm, and feeling like they’ve got self-control. We’ve been doing this since we’ve had Walkmans, where you put on some Metallica or Red Hot Chili Peppers and go skate. For me, I put on The Crystal Method and go snowboarding. I’m ready to hit it. I know that I can use music to amplify that adrenaline hit, but you’re taking it in the other direction.
I know I can also put on Enya and chill out. Maybe nobody can put on Enya now that it’s been in every mall. It was in every mall for 20 years and now we can’t do Enya anymore. I have PTSD from Enya, so you have the antidote. The point is, we know we can use music that way; people are doing it in a haphazard way. If I had to summarize, you’re taking the research, the science we’ve been able to do about what’s working, and piecing that together. You’re also building tools and apps that make it accessible for people to dive into the groove they need to be in, hold them there for that 120-minute window for focus, and maybe something else for other contexts. You’re making a fine-tuned experience that helps people through the issue that they have. That’s a beautiful use of your background and experience, but also a cool example of what these technologies that are under our nose could do if we applied them in a different, positive way. You could still be using your skills for evil.John: Like ghetto rap records.Pablos: There are some awesome ghetto rap records now, I don’t know if you’ve been paying attention. With your help, we could make awesome ghetto rap records that also make me chill out. The next generation of reggae could be created here in this room. There’s a great illustrative example of that in the work that you’re doing. I’m thankful for that, because we’re looking for ways for people to get excited about using technology to solve problems for people, and that’s exactly what you’re doing. John: This is classic SU incubator stuff. I saw a couple of your presentations on stage and it got me thinking deeper about the innovation integration, even taking a much more design thinking approach to it. Until you get into software, you’re coming a little bit more from the intuitive art game, which is great, because artists’ intuition is often very breakthrough. There’s something in there.
I like the idea of pulling in a deep framework that you know has had results to build on. Why build a house from scratch if you know you can get a framework that’s working and build on it to make something cool? I think everyone benefits. Thanks for acknowledging that, and it does come from people like you talking who are like, “I’ve seen the future, and here’s what you guys need to be thinking about.” When you’re in the audience and someone is tuning you into those kinds of concepts, you’re like, “I could do what I’m doing but in that kind of capacity.” It’s important. Things like the podcast and the stuff you’re doing at SU, there are hundreds of people like me in those audiences, and everyone is having their moment like, “I can do that with what I’m doing.” This is a great circle back to seeing the same presentation and going, “I can do this way cooler. Let me try what Pablos is doing here.”Pablos: That’s worth a lot. I try to do those talks selfishly because it’s a way to meet interesting people. I’m not convinced that I’m having any effect on the audience, but it’s great to hear that. Ideally, and even with the show, I’ve been lucky because I’ve gotten to build a different sense of perspective by working on a lot of weird projects over the years. If other people who didn’t get to do that can learn from it, that would be optimal. I want other people to learn from my mistakes. Most people can’t even learn from their own mistakes, but if there are a few enlightened people in the audience who can learn from mine, that would be amazing. That’s the goal.John: I tell people, “With all the mistakes I made, I can shorten your timeline. If I can shorten your timeline a little bit, you’re going to be so much happier. You might think I’m talking too much, but just listen to a couple of concepts.
It might take a year or two off.” It doesn’t have to be a linear thing.Pablos: It’s like, “I already spent millions and millions of somebody else’s money to learn the thing that you need to know right now. Please, just let me tell you.” That’s how we become old curmudgeons. John: We’re at the age where all the sensors and everything are getting that much more granular. We could have gotten heartbeat, we could have gotten these things years ago. We were talking about some of these concepts when we started Focus@Will, but none of the sensors were there yet. It was like, “When the iPhone 12 comes out.” This was when the iPhone 4 was out, and we were like, “That’s cool, but that’s going to be a ways off.” Now we’re there, and there are micro-granular things like your blood oxygen level. Apple and Samsung have put in a lot of money.Apple, in one of their keynotes, said that they want Apple to be known as the people who are helping you realize your best physical self. That’s almost like a medical device. Without getting too granular, think about what that means. The Apple Watch and other devices are going to become medical grade in the next year or two. Now the nuances of driving behavior through music can be measured that much more. We’re both focusing. We’re trying like, “That’s close. It’s kind of good. What if there was that biometric? Is it HRV? Is it something else? Is it a combination of a few different things?” It’s very likely that we’re going to have it nuanced like, “I think that’s turning Pablos off a little bit now. We know when he’s excited, and he went dim a little.” We have the granular biofeedback expertise rolling for your focus playlist, your stress playlist, or your creativity. That’s something that’s way out there. Can we measure things in your heart rate variability to tell us, “He’s in super fucking creative mode”?Pablos: You can at least get heart rate data from an Apple Watch now. Does that feed into Focus@Will yet?
John: Not yet.Pablos: Seems like it would be a place to start. John: Absolutely.Pablos: It’s not worth as much as an EKG.John: Heart rate is good. It’s triangulating a couple of things.Pablos: What did you mean when you said HRV?John: Heart Rate Variability. It’s not just how fast your heart is beating; they measure the patterns in the intervals between beats. It’s counterintuitive. You would think a steady heartbeat means you’re relaxed, but a steady, low-variability heart rate actually means you’re stressed, so you want a variable heart rate. Increasing heart rate variability means there’s good stuff going on. It means you’re in a relaxed flow state.Pablos: This is what I think is cool about the thing that you’ve made with Total Brain, which is that the user gets two knobs: volume for the music and volume for the binaural beats, separately. You could do this in a variety of ways, but the best way would be, if I’m listening to music on Spotify, what I should get is not just a recording. I should get all the tracks and all the MIDI. Spotify ought to be mixing it for me on the fly, at the point of consumption, so that while I’m listening, it’s figuring out that the environment around me is noisy. I need a little more thump. I’m not going to be able to hear the subtle stuff anyway, so drop that out. At some point, it’s like, “Pablos just started a conversation.” Don’t shut it off, but go quiet and get rid of the lyrics. I don’t need any other words in my brain right now. Once he’s off the phone, get the lyrics back in. I could have a soundtrack to my life. My life is basically: I turn on the music on all the speakers on Sonos, and my whole house is filled with music. After that, I’m like, “I sit down at my computer like I’m going to watch a YouTube video real quick.” It’s not very long. I’m going to watch a few seconds of it, but now I need the speakers on my computer to play the computer instead of Sonos.
While the rest of the house is doing Sonos, my computer is doing its speakers for ten seconds of YouTube, and it stops. Now my computer is out of the Sonos loop. Next, I’m like, “A phone call.” Stop all the speakers in my office. Now there’s music in the bathroom and downstairs in the living room, but my office has no music. After the phone call, the speakers don’t start up again. The music is gradually deteriorating from my experience until I just shut it all off, because somebody else came over and I’m trying to have a conversation. It’s another two days before the music comes on again. It’s that thing, so what I need is the computer to understand my life and give me just the right amount of the right music at any given moment, and then figure out that, “Pablos got a call from his buddy in high school, play the Top Gun soundtrack.” I need that, and I feel like all the pieces are there, but nobody is trying to put it together.John: It’s a classic user experience design problem. There are guys who are having this conversation like, “No one’s been able to get close.”Pablos: The video games probably have to do all that. Video games have to dynamically mix all the music for whatever is happening. John: They do. Will and I at Focus@Will talked about redoing our engine and putting it in as 5G. One of our original concepts was a MIDI player that composed music for you. If you like dance music or Crystal Method, we have a whole bunch of stuff that’s MIDI sitting in there. It’s a MIDI engine based on that music. It pumps out stuff like that all day long and just goes. You don’t have to worry about lyrics or anything. Spotify is the crudest first version of doing that for the masses, but they’re serving up a flat file from a server. If you can get to the mix, it’s dynamically mixing because it knows what’s going on in your life. “He’s on the phone. No more vocals. Let’s take some of the melodies out. Let’s give him the bass and drums now. Keep going.
The bass is there but it won’t get in his way.”Pablos: It gives me something to look forward to.
Recorded October 25, 2020 in Santa Monica, California
Important Links:
High Fidelity
MIDI
Pro Tools
Focus@Will
Brain Music Labs
Total Brain
Chris Milk
Singularity University
TED Talk – Chris Milk: How Virtual Reality Can Create the Ultimate Empathy Machine
About John Vitale
John Vitale is the Founder of BrainMusiclabs.com and BrainMediaLabs.com, where he is researching and applying how synchronized music and media enhance the brain’s cognitive functions. They have built entrainment-based audio technology tools that relax the nervous system in the moment and with lasting benefits. These tools and libraries are available to license or white-label for wellness providers of all levels. John’s Massively Transformative Purpose (MTP) is to make health-span habits easier to adopt by creating science-backed apps and courses that gamify wellness and make it immersive and fun. His moonshot is to become the trusted source for entrainment-based experiences for health span and to license wellness solutions to major health care providers, positively impacting 10 million lives by 2027. He is always excited to engage media and audio professionals to create music and visual content for their system that reduces stress, supports better-quality sleep, helps people be more productive, and expands the creative potential of our planet. He is currently involved with two clinical studies featuring his entrainment-based sound therapy and visuals to reduce stress and addictive cravings. John is a serial entrepreneur, founding and operating tech, media, sound, gaming, and music production companies. He has 35 years of experience in music, sound, and film production, in the roles of founder, supervisor, producer, engineer, remixer, composer, and digital audio freak. John has worked with major labels Warner Bros, Sony, BMG, and Universal Music, and well-known artists like The B-52’s, Red Hot Chili Peppers, Filter, Eminem, k.d. lang, George Clinton, and The Romantics.
He has directed music for major corporate product launches, live press events, and broadcast campaigns for GM, Toyota, Xbox, Fisker, Mercedes, and Lexus. John cut his developer chops in the trenches of digital audio and samplers, creating OEM soundware and ROM blocks for Kurzweil, E-mu, Zoom, Ensoniq, and Akai.