
Idea Machines

Latest episodes

Mar 5, 2019 • 51min

New Things in Big Healthcare [Idea Machines #11]

In this episode I talk to Torben Nielsen about creating new products and systems in health insurance. We touch on the tension between insurers' well-founded risk aversion and trying new things, the process of insurance companies working with startups, and how to even know if things are working. Torben runs programs at Premera Blue Cross with both internal teams and external startups to build new products and systems. Premera is one of the largest health insurers in Alaska and the northwest US, so even small changes can impact many people. Torben spent many years working in healthcare and built his tech chops at Xerox and Lego. Much to my chagrin, we spent zero time talking about the latter because of time constraints. His official title is "VP of Innovation," which I do poke at a bit in the podcast.

My major takeaways: I'm starting to sound like a broken record on this, but in health insurance, like in so many places, the process of creating new products and systems ultimately hinges on the opinion of a few decision makers. Startups trying to work with health insurance providers are often frustrated by the providers' speed. This conversation helped unpack why the providers move slowly and what they're trying to do to change that - I hope it works!

Resources
https://www.linkedin.com/in/torbenstubkjaernielsen/
https://twitter.com/TorbenSNielsen
https://en.wikipedia.org/wiki/Premera_Blue_Cross
https://www.premera.com/Premera-Voices/All-Posts/Healthcare-must-innovate/

Questions
What does a VP of Innovation in a large org do? What are your incentives? What are the incentives in the system?
Who are the players in the process of innovating within healthcare?
Why is healthcare slow to change? I assume there must be good reasons.
How would you deal with a situation where an innovation challenges the core of the company? Conflicts? The Premera test kitchen.
How do you assess/quantify risks? What are expected ROI timelines?
How should startups engage in partnerships in the healthcare ecosystem?
Hard question: are there moral limits on cost per treatment, or on monopolies on drug therapies?
What have innovations in health insurance looked like in the past?
Let's talk about the elephant in the room: from the startup world, working with insurance companies is notoriously dangerous because of getting stuck in pilots.
Insurance companies are inherently a hedge against risk. Innovation has built-in risk. How do you manage this conflict?
Where do you see the biggest areas for innovation?
Feb 12, 2019 • 1h 10min

Medical (d)Evolution with Dr. Robert McNutt [Idea Machines #10]

In this episode I talk to Dr. Robert McNutt about medical innovation, medical research and publishing, and patient choice. Robert has been practicing medicine for decades and has published many dozens of medical research papers. He is a former editor of JAMA - the Journal of the American Medical Association. He's created pain care simulation programs, run hospitals, sat on the National Board of Medical Examiners, taught at the University of North Carolina and Wisconsin schools of medicine, and published dozens of articles and several books. On top of all of that, he is a practicing oncologist. We draw on this massive experience with different sides of medicine to dig into how medical innovations happen - and also into less-than-positive changes. It's always fascinating to crack open the box of a different world, so I hope you enjoy this conversation with Dr. Robert McNutt.

Major takeaways: The practice of medicine has changed significantly over the past several decades - there has been an explosion of research and specialization. This proliferation has led to many innovations, but has also decreased the signal-to-noise ratio in medical advice, both for doctors and for patients. For another perspective on the explosion of research, listen to my conversation with Brian Nosek. While it would be amazing to have a process based purely on a very strict scientific method, health is so complicated that the ideal is impossible. That means, as in so many imperfect systems, that ultimately so much comes down to human judgement.

Notes
Robert's Blog
Robert's Book
Tamoxifen Case Study
Observational Trials
Dictaphones
Jan 13, 2019 • 52min

Hacking Politics with Craig Montouri [Idea Machines #8]

In this episode I talk to Craig Montouri about nonprofits and politics - specifically their constraints, and their possibilities for enabling innovation. Craig is the executive director at Global EIR - a nonprofit focused on connecting non-U.S. founders with universities so that they can get the visas they need to build their companies in America. Craig's perspective is fascinating because, contrary to the common wisdom that innovation happens by doing an end run around politics, he focuses on enabling innovations through the political system. It's an eye-opening conversation about two worlds I knew little about, so I hope you enjoy this conversation with Craig Montouri.

Key Takeaways: There is a lot of valuable human capital and knowledge left on the table, both by the US immigration system and by the university tech transfer system. Nonprofits need to find product-market fit just as much as for-profit companies making products. And just like in the world of products, there's often a big difference between what people say their problems are and what their problems actually are. Political innovation is different from other domains for several reasons - it has both shorter and longer timelines than other domains, and in contrast to the world of startups, politics needs to focus on downside mitigation instead of maximizing upside.

Resources
Global EIR
Craig on Twitter (@craig_montouri)
NPR piece on Global EIR
Dec 30, 2018 • 59min

Bypassing Systems with Gary Bradski [Idea Machines #9]

In this episode I talk to Gary Bradski about the creation of OpenCV, Willow Garage, and how to get around institutional roadblocks. Gary is perhaps best known as the creator of OpenCV - an open-source tool that has touched almost every application involving computer vision, from cat-identifying AI, to strawberry-picking robots, to augmented reality. Gary has been part of Intel Research, Stanford (where he worked on Stanley, the self-driving car that won the first DARPA Grand Challenge), and Magic Leap, and has started his own startups. On top of that, Gary was early at Willow Garage - a private research lab that produced two huge innovations in robotics: the open-source Robot Operating System (ROS) and the PR2 robot. Gary has a track record of seeing potential in technologies long before they appear on the hype radar - everything from neural networks to computer vision to self-driving cars.

Key Takeaways: Aligning incentives inside of organizations is both essential and hard for innovation. Organizations are incentivized to focus on current product lines instead of Schumpeterian long shots; Gary basically had to do incentive gymnastics to get OpenCV to exist. In research organizations there's an inherent tension between pressure to produce and exploration. I love Gary's idea of a slowly decreasing salary. Ambitious projects are still totally dependent on a champion - at the end of the day, that means every ambitious project has a single point of failure. I wonder if there's a way to change that.

Notes
Gary on Twitter
The Embedded Vision Alliance
Video of Stanley winning the DARPA Grand Challenge
A short history of Willow Garage
Dec 25, 2018 • 59min

Accelerating Biotech with Jun Axup [Idea Machines #5]

Link to this episode in Overcast. In this episode I talk to Jun Axup about accelerating biotechnology, how to transition people and technology from academia to startups, the intersection of Silicon Valley and biology, and biology research in general. Jun is a partner at IndieBio - a startup accelerator specializing in quickly taking biotechnology from academic research to products. She has both started companies and done a PhD focused on using antibodies to fight cancer. This experience gives her a deep understanding of the constraints in both the world of academia and the world of equity-funded startups, and of what it takes to jump the gap between the two.

Key takeaways: Biology is reaching a cusp where we can truly start to use it to do things outside the realm of traditional medicine and therapeutics. These new products fit more cleanly into the Silicon Valley startup ecosystem. The gap between research and products in people's hands is not just a technical gap, but a people gap as well - IndieBio is built to address both, guiding both the research and the researcher out of the lab. While the capital overhead has come down, biology-based innovations still require different support systems than your standard computer-based innovations.

Links
Jun's Homepage
IndieBio
Flight from Science
Langer Lab Case Study (Paywalled)

No transcript this week - trying a different production flow. If you feel strongly, please let us know at info@ideamachinespodcast.com.
Dec 18, 2018 • 51min

Rethinking R&D with Adam Wiggins [Idea Machines #4]

My guest this week is Adam Wiggins, the cofounder of Ink & Switch - an independent industrial research lab working on digital tools for creativity and productivity. The topics of the conversation are the future of product-focused R&D, the Hollywood model of work in tech, Ink & Switch's unique organizational structure, and whether it can be extended to other areas of research.

Links
Adam Wiggins' Home Page
Adam on Twitter
Ink & Switch's Home Page
A presentation on Ink & Switch's Structure
Sloan Review Article on Applying the Hollywood Model to R&D (Paywalled)

Transcript

How the idea came about

Ben: How did you come up with this idea? What originated it? I'm just really interested in the thought process behind it.

Adam: Sure. My partners and I come out of the startup school of thought on innovation. There are a lot of ways to think about it - there's the more academic, research-minded approach to innovation, and there's the approach you get at bigger companies. We come out of the agile, lean-startup, Y Combinator mix of elements, which is really about building a thing quickly, getting it in front of customers, minimum viable product, iterate. But my thinking is that the startup model has been so successful in the last decade - particularly with the kind of mass production of startups that you get through groups like Y Combinator - that the space of problems that can be solved by a group of 25-year-old founders spending three months building a thing is, let's say, saturated to some degree. Maybe the more interesting problems are bigger or longer in scope. So then we thought: okay, what's a model that makes it more possible to go after bigger things?

And that's when I fell down the rabbit hole of researching these industrial research labs - which I know you've spent a lot of time on as well - the big famous examples like Bell Labs and Xerox PARC and ARPA and so forth, and of course many other examples. We thought: okay, we're not in a position to set up a multimillion-dollar research arm of a government or commercial institution, but what can we do on a smaller scale, with a small grant and a kind of scrappy band of people? That's what led us to the Ink & Switch approach.

The Thought Process Behind the Model

Ben: Can you go one step further? You had the constraint that you couldn't do a straight-up corporate research lab, but I think there are a lot of unique ideas in the model - you have your principles, you pull in people temporarily, you build this network - that seem to come out of the blue. What was the thought process behind that?

Adam: Well, maybe it came out of the constraint of doing it with very little money. Part of it is that we're trying to work on a big problem - hopefully, and I can talk about that if you want - but in terms of the model, we came at it from "do it with very little money," and that in turn leads to: your big costs are usually office space and people. If we can do really short-term projects - we call it the Hollywood model, and I can explain that if you want - basically four-, six-, or eight-week projects, then you can bring in experts on a freelance basis and you don't need to commit to paying salaries over the longer term. And you couple that with no office: we have an all-distributed team.

We're not asking people to pick up and move somewhere, even temporarily, to work on a project. So what we can offer them is a lot of flexibility. There are benefits for the people who join these projects, but from the lab's point of view, again, we were embracing this constraint of doing it really, really cheap. That basically boiled down to very short projects, people on a freelance basis only, and no office. That's what led us there. I think there are actually a lot of benefits to doing things that way - there are some big downsides as well, but real benefits too. So the constraint led us to the model, you might say: the desire to work on a big problem with a longer time horizon, like you would at a classic R&D lab, but with a lot less money, led us to this short-term project model.

The Hollywood Model in Tech

Ben: There are three things I want to dig into from that: how the Hollywood model works, and the difference between the Hollywood model in tech versus in Hollywood; the pros and cons; and then it feels like there's a tension between working on really big, long-term projects via very short-term sprints and demos. So let's start with the Hollywood model. After I learned about you doing this, I dug into it, and it seems like the Hollywood model works partially because all of Hollywood is set up so that even the best people work on a temporary basis, whereas in tech it feels like you have to find people who are in very special life situations in order to get the best people. How do you juggle that?

Adam: Yeah, those are really good points. Well, to briefly explain the Hollywood model: I actually lived in Los Angeles for a time, and I have a lot of friends who were trying to break into that industry, so I got a little exposure to it - I don't pretend to be an expert, and you can read about this online as well. Most movies are made by forming an entity, usually an LLC, for the duration of the movie project - it might be a year or two, whatever the shooting time is - and everyone from the director to the camera people to the whole cast and the entire crew is hired as, essentially, a short-term contractor for whatever duration their services are needed. Even someone like the director, who's there throughout, essentially has a one- or two-year gig, and at the end everyone disbands. It's also an interesting accounting model, because of how the earnings from the movie connect back to the studio - the way studios invest is almost more like the way venture capital invests in startups, to some degree. That's my understanding of it, anyway.

So we borrowed this idea. Part of what we like about it is that any given person - a camera operator, a crew member, a member of the cast - isn't guaranteed long-term employment. They don't sign on for an indefinite thing; they sign up for the duration of the project, and at the end everyone leaves. But what you see is that the same directors tend to hire the same crews. You've probably noticed this most dramatically with directors who bring the same actors onto their future films - if working together worked before, why wouldn't you bring them back? It inverts the model: instead of "we're going to keep working together by default," every time a project ends, everyone disperses, but the combinations that worked tend to come back together again. Inverting the model in that subtle way, I think, produces better teams over the long term. You get this loose network of people who work and collaborate together, with more of an independent-contractor, gig mindset. That's what inspired us, and, like you said, the question was: can we bring that to technology innovation?

How do you incentivize the Hollywood model?

Ben: Most people in tech don't work that way. So how do you get the best people to come along for that model?

Adam: That was definitely a big unknown going into it, and it certainly could have been a showstopper. I was surprised to discover how many great people we were able to get on board - maybe because we have an interesting mission, maybe because me and some of the other core people on the team have good networks and good career capital. But actually, more people are in in-between spaces than you might guess, so quite a lot want to work with us on projects. Certainly there are people who have made freelancing or some kind of independent contracting their business, so those folks are easy to work with. There are also a lot of folks who do open source - we work with a lot of people from the Dat community, for example - and a lot of them actually do make a livelihood through some degree of freelancing in this space. So that's an easy one. But more common, I think, is the full-time salaried software engineer or product designer. Maybe they take a new job every few years, but they're expecting a full employment salary, HR benefits, the lunch on campus, the massages, the yoga classes. So I was worried that trying to compete for talent like that, when all we have to offer is very short-term projects, would be difficult. But as it turned out, a lot of people are in some kind of in-between space, and a really interesting project with an interesting team turned out to be a good in-between thing - a palate cleanser, in a lot of cases.

We got a lot of people who were basically looking for their next full-time gig, but then they saw what we had to offer and thought, "that's actually quite interesting" - and they could keep looking for the next job while working with us, treating it as a stepping stone on the way to their next employment. We also had situations like one person we were able to get on the team who was on parental leave from their startup: they wanted the mental stimulation of a project but couldn't go into the office because they needed to take care of an infant. By working with us, they got some nice part-time work, some mental stimulation, and a chance to build some skills in the short term, in a way that was compatible with needing to be home for childcare. There were a lot of cases like that. So granted, for people who are looking for full-time gigs, we can't give them the best offer in the world, but there's a surprising number of people willing to take a weird, interesting, cool, learning-oriented project in between their more conventional jobs.

Building from scratch with the Hollywood Model?

Ben: Yeah. One of the things I'm constantly thinking about when I ask these questions is: how do we get more things in the world using this model? Because I think it's a really cool model that not many people are using. Could there be a world where people just go from one lab like this to another? That would be an interesting shift in the industry - a little more gig-oriented, or independent-contractor-oriented, versus the full-time job expectation that folks have now. And another difference between Hollywood and tech is that in Hollywood you're always reinventing things from scratch, whereas in tech there is code, and things that get passed on and built on top of. Do you run into any problems with that, or, because every experiment is its own thing, do you not have that problem?

Adam: Yeah, building on what came before is obviously really important for a lot of our projects. We were pretty all over the place in terms of platforms, and that was on purpose: we built a bunch of projects on the iOS platform, we built on the Microsoft Surface platform, and we've worked in various web technologies, including Electron and classic web apps. So in many cases, even if we had written a library to do the thing we needed elsewhere, we couldn't bring it over, and that build-it-from-scratch-each-time, blank-slate quality is, I think, part of what makes it creative: you're forced to rethink things and not just rely on previous assumptions. That said, for certain tracks of research - a big one for us is this world of CRDTs - the goal is essentially getting a lot of the capabilities you expect from cloud solutions, like Google Docs-style real-time collaboration, in a more peer-to-peer, less centralized environment. In an earlier project we built a library called Automerge, in JavaScript, and it was plugged into our Electron app. In subsequent projects we wanted to build on top of that, and we have done a number of them, but obviously they needed to use the JavaScript runtime in some way.
So if we were doing another electron project, yes, you can do that but that and then another case, you know, we wanted to go with tablet thing. All right. Well that limits us because we can't use that library in other places. And in one case is we chose to build for example in the Chrome OS platform because we can get a tablet there and partially because we already had this investment in kind of. Script ecosystem through these libraries. But yeah again that comes with comes with trade-offs to some degree. So so we're always trying to balance build on what we made before. But also we're really willing to kind of start over or do the blank canvas because we really feel like at this. Level of early Innovation. What matters is the learning and what lessons you learn from past projects and you could often rebuild things in a fraction of the time in some cases we have actually done that is rebuilt an entire project sort of like feature complete from what on a completely different platform. But if you can skip past all the false turns and you know Discovery process and to build what you where you ended up it's often something that can be done in just a tiny fraction of the time or cost Knowledge Transfer in the Ink&Switch Model Ben: Got it. And  do you have a way of transferring learning between  different groups of temporary people that things like would be one tricky piece. Adam: Absolutely. Well an important thing here is we do have core lab members both. We have some principal investigators who are people that are around long-term and are the people that drive our projects and their, you know, carry a lot of those learnings both the Practical ones, but also like culture. Cultural elements and then a lot of the folks we work with they'll come back to work for a future project. But yeah, absolutely every given project is a new combination of people some existing people in the lab. 
They carry forward some of those learnings and then some people who are new and so we've had to do we tried a variety of approaches to kind of. Do a mental download or crash course and you know, none of it's perfect. Right because so much knowledge. Is that even though we take a lot of time to do a big retrospective at the end of our projects try to write out both raw notes, but also like a summarize here's what we learn from this project even with that and sharing that information with new people so much of what you learn is like tacit knowledge. It's somehow, you know more in your gut than in your head. And so to some degree we do count on the people that are more standing numbers that go project project in some cases. We do have to relearn small lessons each time. And again that that somewhat is a you know, if you start over from scratch and you kind of start from the same premises then you often discover some of the same same learnings. I think that's okay as long as we get a little faster each. Each time and then yeah combine that with learning documents and I don't know for example, we're actually the point now we have enough projects under our belt. We actually have a deck that is like here's all our past projects and kind of a really quick crash course summary, at least here's what they're called and least when people reference. Oh, yeah. That's the way we did things on Project number five right was called this and you can be like at least have some context for that. And so short answer is we haven't solved the problem but here's some things that at least have helped with that. Yeah, and how many projects have you done in total? Yeah. Well depends on exactly how you count. But when it comes to what we consider the sort of the full list called formal projects, which is we spend some time kind of wandering around in a in a period of time to call pre-infusion named after the the espresso machine for the sort of record time. 
You put in the water to kind of warm up the grounds. So the version of that and once we have basically a process where once principal investigator finds a project with egg, I think there's a really promising area and we should fund this. Okay. Now we're going to go actually hire experts that are specific to this area. We're going to commit to doing this for again six weeks or eight weeks something on that order. There's a project brief we present basically present that to our board to basically give like a thumbs up thumbs down. I'm so if you count stuff that has been through that whole process we've now done 10 projects cool. That's over the course of about three years. Ink&Switch Speed vs. Startup Speed Ben: Yeah, that's that's really good compared to. Like I start up where you do one project and takes three years. Adam:  I need to maybe feels sometimes it feels slow to me. But honestly, we spend as much time trying to figure out what it is that we want to do as actually doing it and then suspend a really good bit of time again trying to retrospect pull out the learnings actually figure out. What did we learn? You know, we usually come out with strong feelings and strong Instinct for kind of this work. This didn't work. We'd like to continue this. There's more to research here. This is really promising. This was a dead end but actually takes quite a bit of time to really digest that and turn it into something and then kind of the context shift of okay. Now, let me reorient and switch gears to a new project is really a whole skill, too. To be doing such a rapid turnover, I think and I think we've gotten decent at it over the last few years, but I think you get a lot better if you wanted to keep at it. Ink&Switch's Mission and Reconciling Long Term Thinking with Short Term Projects Ben: Yeah. And I've actually like to step back real fast to the bookmark in terms of  a the big picture long-term thinking like what is in your mind the real Mission here and B. 
How do you square these? Like, how do you. Generate  a long-term result from a whole bunch of short term projects. Adam:  right. Yeah, really cool problem. Absolutely. Yeah. Yeah and one again, I don't pretend to have answers to we're still in the middle of this experiment will see if it actually actually works. Yeah, let me start by just briefly summarizing our our mission or a theme. I like to think of it a little bit right like typically these and these great examples of successful Industrial Research Labs, you know for Bell Labs or theme was this Universal connectivity that has Bell had this growing Communications Network and they wanted to like solve all the problems that had to do with trying to tie together an entire nation with Communications technology or Xerox Parc. Of course, they had this office of the future idea. It's. How many papers and copier what is it going to be? I think you need a theme that is pretty broad. But still you're not just doing a bunch of random stuff that people there, you know think it's cool or interesting new technologies. It's tied together in some way. So for us our theme or a research area is Computing for productivity and creativity. Sort of what the digital tools that let us do things like write or paint or do science or make art are going to look like in future and we were particularly drawn to this and. And our investors were drawn to this because so much of the brain power and money and general Innovation horsepower in Silicon Valley certainly the tech industry broadly and even to some degree in Academia computer interaction research and so on it really pointed what I would call consumer technology. Right, it does social media It's Entertainment. It's games. It's shopping. 
Yeah, and and that's really a phenomenon just the last five or ten years, right the successful smartphones the fact that sort of computing has become so ubiquitous and mass-market its health and fitness trackers yet wearable, and you know, that's all great, but. I think that the more inspiring uses the more interesting uses of computers for me personally. I things that are about creativity there about self-improvement there about productivity and when you look at what the state of I'm going to look like a spreadsheet, right if you look at Excel in 1995 and you compare that to Google Sheets in 2018 the kind of looks the same. Yep, you know, it's at a Google Sheets as real-time collaboration, which is great. Don't get me wrong. But basically the same kind of program, right? Yeah. And I think you can say that same thing for many different categories Photoshop or presentation software note-taking software that sort of thing. There's some Innovation to give me to go get me wrong, but it just feels very out of balance how much again of that Innovation horsepower of our industry broad. They could go into Super Side. So for us the theme is around all right. We look forward five or ten years to what we're using to be productive or created with computers. What does it look like and you know, the reality is desktop operating systems or more and more kind of advanced mode because that's not where apple or Microsoft revenue is anywhere. But at the same time I don't think it's you know, touch platform particularly, you know are built around phones and consumer Technologies and sort of the pro uses of them tend to be kind of attack on afterthought. And so it sort of feels like we're in a weird dead end which is like what are we going to be doing 10 years from now to yeah do a science paper or write a book or make a master thesis or write a film script? It's hard to picture and but actually picturing it is that's that's sort of our the job of our research here. 
Ben: And that is a really long-term project, because you sort of need to go back down the mountain a little bit to figure out what the other mountain is. Adam: Absolutely. Yeah, it's a local maximum of some kind, and so maybe you need to get a little out of the box and go away from it — basically make things worse before they get better.

Aside on AI-Enabled Creativity Tools

Ben: Yeah, just an aside on that: have you been paying attention to any of the AI-enabled creativity tools? This has just been on my mind because NeurIPS is coming up, and some people have been doing some pretty cool stuff in terms of enhanced creativity tools — like maybe you start typing and it starts completing the sentence for you, or you draw a green blob and it fills in a mountain, and then you just adjust it. Have you been paying any attention to those tools at all? Adam: Yeah, absolutely. I follow some folks on Twitter who post really interesting things in that vein. It hasn't been an area of research for us, partially because maybe we're a little contrarian and we like to look where others aren't looking, and I feel like AI and that realm of things is very well covered — or I should say, a lot of people are interested in it. That said, I think one of the most interesting cases there is what's usually talked about as generative design. I saw a great talk — at Strange Loop last year, I believe — by an architect who basically uses various kinds of solvers where you plug in the criteria you have for, say, a building facade: the window has to be under this size because of the material dimensions and the legal requirements and whatever — here are the constraints on it, and here's what we want out of the design. You plug that in, and the computer will give you every possible permutation.
And so it's a pretty natural step to go from there to having some kind of algorithm — whether it be a heuristic or something more learning-oriented — which then tries to figure out, from that superset of every possible design that satisfies the constraints, which of them are actually the best in some sense, or fit what we said we liked before, or what the client or the market or whatever it is you're looking for wants. So I think there's a lot of potential there as an assistive device. I get a little skeptical when it gets into the "let's get the computers to do our thinking for us" realm of things — I think you see a bit of that in the autocomplete version of this. But I love that artisanal, craftsman, unique vibe that humans bring to the table, and so tools assisting us, helping us, working in tandem with us — I think there's probably a lot of potential for AI in that. That said, it's not an area we're researching. Ben: Yeah, I just wanted to make sure that was on your radar, because that's something I pay a lot of attention to and am very excited about.

More Reconciling Long Term and Short Term

Ben: Yeah, and so for the long-term vision — the thing I always worry about in the modern world is that we are so focused on what you can do in a couple of months, these little sprints, that if there's a long-term thing, you just wouldn't be able to get there with a bunch of little projects. So I'm really interested in how you resolve that conflict. Adam: Yeah. Well, you could say it's one of the biggest innovations in innovation — which I know is the area you study — to get into this iterative mindset, this, whatever you call it, agile or
yeah, iterative — the idea of breaking things down into small, discrete steps rather than thinking in terms of, I don't know, "we're going to go to the moon, and let's spend the decade doing that." You can even see that difference in something like the space program: the way that modern space exploration works is much more in terms of these little ratcheting steps, where one thing gets you to the next, rather than one big mega-project that's going to take a really long time and is super high-risk and super high-beta. So in general I think that's a really good shift that's happened. But yes, it does come at the expense of the fact that sometimes there are jumps you can or need to make that don't break down into smaller steps. And I certainly don't propose to have the answer to that, but at least for what we're doing, the way I think of it is: start with a pretty grand vision, a big vision, or a long time horizon if nothing else, and try to force yourself first and foremost into the bigger thinking. But then go from there to: okay, if that's where we want to go, what is the first step in that direction? What is the thing that can give us learning that will help us get there? One of the metaphors I always love to use for research in general, or any kind of discovery-oriented process, is the Lewis and Clark expedition. This was commissioned by Thomas Jefferson, who was president at the time, and to me it was really crazy to read about how little they knew. They hadn't explored the interior of the continent; they believed there might still be woolly mammoths running around — and actually that was one of the things Thomas Jefferson wanted from the expedition; he's like, I'd really love one, you know, get me one while you're out there. They just had no idea. They knew that the Pacific Ocean was on the other side — they'd had ships go around there.
But other than that, it was this dark interior to the continent. The expedition set out with the goal of reaching the Pacific Ocean and finding out what's on the way — and they did: they took their best guess of what they might encounter, and put together provisions and a team to try to get there. But then there were the individual — you might call them iterative — decisions they needed to make along the way: Do we go up this mountain range? Do we divert this way? Do we cut across this river? Do we follow it for a while? Do we try to befriend these tribespeople or run away? Etcetera. Those are the iterative steps, where the important thing is keeping in mind the long-term strategic goal, and defining that goal in such a way that it isn't "go west" — it's not a set of directions for getting there, because you can't know that. You have to start with: here's our vision — let's connect the two coasts of this country — and then take whatever iterative steps seem most promising in that direction, realizing that sometimes the most promising iterative step leads us sideways, or even away from our goal. So that's what we're trying to do at Ink & Switch: picking individual projects that we hope carve off a piece of the bigger thing — projects that will increase our learning, or build our network, or just somehow illuminate some part of this problem we want to understand better. Again: what is the future of productive and creative computing? Then, hopefully, over time those add up. The trick, for me, is not to get too lost in the detail of any one project, and that's where the Hollywood model is
so important: because you've got to end the project and step away to truly have perspective on it, and to truly return to looking at the bigger thing. That's what you don't get, in my experience, working in a startup that has operations and customers and revenue — goals you need to hit according to those things, which are absolutely the right way to run a business — but keeping that bigger-picture view and that longer-term mindset is very difficult, if not impossible, in that setting. So that's our approach, anyway; we'll see how it works over the longer term.

Loops around Loops: The Explicitly Temporary Nature of the Whole Lab

Ben: And in terms of your approach and ending things — is it true that at the end of a certain amount of time you're actually going to step back and reevaluate the whole thing? You're sort of doing loops around loops? Adam: Indeed, yes. So individual projects have this "end it, step back, and evaluate" rhythm. And then the whole thing — we have a fixed grant, and when that's exhausted, it's up to us to deliver to the investors the learning — you might call it the intellectual property, though we're not patenting things — the things that offer commercial potential and could potentially be funded as startups. Basically, yeah, that's what we'll do, and actually that will happen next year. Ben: Wow. Adam: And when that does happen, we'll hopefully do the same process — like you said, the bigger loop version of what we've done on the smaller loops — which is to retrospect at the end, write down everything we've learned, and then go ahead and let the team go. Part of that may be painful when you've put all this hard work into getting a team together, but my experience is that if there's really some great opportunity there, you'll re-coalesce it in some new form.
What Comes out of the Lab?

Ben: I can see it going multiple ways: when you end it, you could either say there's another five-year research thing in this, or there's some number of more traditional startups that come out of it to try to capture that value. Are those the two options? What do you see as the possibilities that come out of this? Adam: Yeah, those are both pretty key outcomes, and they're not mutually exclusive. So it could be that we say: all right, great, we generated, say, five interesting startup options; an investor decided to pick one of them up — maybe taking a team based on some of the people in the lab who worked on it — and those folks go and essentially work on commercializing it, making a go-to-market around it. But then some other set of people who were involved want to come back to these promising tracks of research, and we take another grant with another time duration. Obviously money is ultimately the limiting factor in any organization, but I like the time box as well. We use it for our short-term projects, and to some degree we used it for the lab overall. It's like that Star Trek thing — what is it, our three-year mission, our five-year mission, whatever it is — there's something about the time box that creates clarity. And you might decide to do another time box, another chunk of time, another chapter. Actually, investors do this as well: if you look at the way venture funds are structured, they often have multiple entities — fund one, fund two, fund three — and those different funds can have different buy-ins by different partners.
They have different companies in their portfolios, even though there's something continuous — I don't know if you want to call it a brand, or a culture, or whatever — that ties them all together. And I think that approach of having these natural chapter breaks — time-based or money-based — in any work is a really useful and valuable thing for productivity and, I don't know, making the most of the time.

Human Timescales - 4-5 Years

Ben: I completely buy that. I have this theory that human lives are kind of divided up into these roughly five-year chunks — that's about the amount of time you can sustain doing the exact same thing, and you can reevaluate every five years. Look at school: it's maybe five years, plus or minus two, but beyond that it's really hard to sustain intense attention on the same thing. So that makes a lot of sense. Adam: I agree with that. I would actually throw out four years as the number, which I think matches the school thing. It also matches vesting schedules — the original vesting schedule at most startups is a four-year window. And if I'm not mistaken, I think that's around the median length of a marriage; I think there's something there. You know, there's renewal in our work lives, which is what we're talking about here, but there's also renewal in our personal lives, right? If you're an employee at a company, maybe something around four years feels like the right tour of duty. That's not to say you can't take on another tour of duty, maybe with a new role or different responsibilities, but there's something about that that seems natural — like you said, sustained attention. And I think there's something there as well about inventing or reinventing yourself, your own personal identity — and maybe that connects to: you marry
someone, four years go by, and you're both new people — maybe those two people aren't compatible anymore. I don't know; maybe that's reaching a little far.

Investors, Grants, and Structuring Lab Financing to Align Incentives

Ben: That makes a lot of sense. You've mentioned investors a couple of times, but also that it's a grant. Something I'm always interested in is how to set things up so the incentives are aligned between the people putting in the money, the people doing the work, and the people setting the direction. So how did you structure that? How did you think about coming up with that structure? Adam: Yeah, I've maybe used "investors" and "grant" a little loosely there; again, the model we have is a little different. When I went to pitch the private investors on what we were going to do, I basically said: look, my partners and I have been successful in the past producing commercial innovations. We want to look now at something a little bigger, a little longer-term, that wouldn't necessarily fit as cleanly into some of the existing funding models — including the way academic research is funded, and certainly venture funding. So take a little gamble on us: give us a fixed amount of money — a very small amount by some perspectives — to deliver not profits, but rather this concept of learning: intellectual property in the loose sense, not the legal sense — intellectual capital might be another way to put it — and, more explicitly, spin-out potential. But with no commitment to make any of these things; it's just: we've evaluated all of these opportunities, and here's what we think the most promising ones are. That includes both what you might call the validated findings — we think there's a promising opportunity here, a technology
that's ready to, you know, serve some market of users well — but also some things we got negative findings on, where we'd say: look, we think there's a really interesting market of users to serve right here, but the technology that would be needed isn't ready yet and is still five years out; or maybe the market is actually tough — not very good for early-adopter-type products. In some ways that's valuable to investors as well: to have this information on why it actually is not wise to invest in a particular market or a particular product opportunity. So that's what we asked for and promised to deliver. Obviously we're still in the middle of this experiment, so I can't speak to whether they're happy with the results, but at least that's the deal we set up.

Tension between Open Knowledge Sharing and Value Capture

Ben: I just love the idea of investment not necessarily with a monetary return — I wish more people would think that way. And in terms of incentives, there's also always the question of value capture. You do a really good job of putting out into the world all the things you're working on — the great articles, the code. Do you hold anything back specifically for investors? It would make sense, right? You need to capture value at some point, so there's got to be some advantage. How do you think about that? Adam: Yeah, I don't have a great answer for you on that. Certainly there are conventional ideas there — trade secrets, patents, that sort of thing — but personally I'm a little more of a believer in — maybe it comes back to that tacit knowledge we talked about earlier, which is that, in a way,
I feel like it's almost misleading to think that if the entire project is open source, you somehow have everything there is to know. The code is more of an artifact, an output, of what was learned — and the team of people who made it, the knowledge they have in their minds, and to some degree their hearts and souls, is actually what you would need to make that thing successful. And I think a lot of people who work on open source for a living rely on that to some degree: you can make a project that is useful and works well on its own, but the person who made it, who has all the knowledge about it, has a lot of the resources that are really valuable to the project — and so it's worth your while to, for example, go hire them. That's the way we think about it, and the way we pitched it to investors. If I were to do this again, I might try to look for something a little more concrete, a little more tangible, than that. The other part of it that I think is pretty key is the network. So you could say: okay, there's the knowledge in the heads of the people who worked on it, and maybe that ties together with the knowledge we transfer directly — like, here's a document that tells you everything we learned about this area and where we think the opportunities are. But then also, we had a bunch of people work on this, and in some cases, where we were pushing the envelope on a particular niche-y sub-technology, we ended up with people on the team who are among the world's experts — or are in touch with the few experts in the world — on a particular topic, and we have that network access.
And so if someone wants to go and make a company, they have a very easy way to get in touch with those people. Now, it's not impossible for someone else to take that bundle of information, or take a codebase on GitHub, pick through the contributors list, figure out who worked on it, and go contact them — I think that's possible, right? — but it's quite different. You'd be at a pretty substantial disadvantage versus someone who actually had the warm network and the existing working collaboration.

Extending the Ink&Switch Model to Different Domains

Ben: Yes, I like that. And in terms of using the model in different places — have you thought about how well this applies to other really big themes? The things you're working on are nice because they're primarily software: the capital costs are pretty low; you don't need a lab or equipment. Do you think there's a way to get it to work in, say, biology, or other places where there's higher friction? Adam: Yeah, I think the fact that we're essentially purely in the realm of the virtual is part of what makes the low-cost, all-remote team — not asking people to relocate — possible. We do have some costs: we've certainly purchased quite a bit of computing hardware over the course of the lab and shipped it to whoever needs it. But that said, I think this model would best apply to something more in the realm of knowledge development, not where you have to get your hands physically on something — whether that's a DNA sequencer, or hardware development, or something of that nature. On the other hand, as cameras get better and high-speed internet connections get better — and certainly we've learned a lot of little tricks over time; I think we were talking at the start of the call about
our use of document cameras: basically, screen sharing for tablets doesn't work great because you can't see what the user's hands are doing, so we learned pretty quickly that you've got to invest in document cameras or something like that to be able to effectively demo to your teammates. As a sidebar related to that: one of the learnings we had in making the distributed-team thing work is that you do have to get together in person periodically, so we do quarterly team summits.

Making Watercooler Talk and Serendipity Work with a Distributed Team

Ben: I was actually literally just thinking about that, because one of the things I always hear about great research places — whether it's Bell Labs or DARPA — is the watercooler talk, the fact that you can just casually walk down the hall and hop into someone's office. That's the problem with distributed teams that I haven't seen anybody solve well. So you just do that by bringing everybody together every once in a while? Do you think that generates enough? Adam: Yeah. I mean, you're right: that problem is very big for us. There are a number of benefits we get from the distributed team, but there are also a number of problems we haven't solved, and I'm not sure how this would balance against spending the same amount of money on a much shorter-term thing where people could be more in person. Because that watercooler talk — you get some of it with Slack or whatever, but it's just not the same as being co-located. So one of the mitigating things we have that I think works pretty well is that about quarterly we get everyone together. And it's actually kind of fun, because we don't have to go any place in particular — there's no central office.
We try to pick a different city each time, someplace creative and inspiring — we tend to like an interesting bohemian vibe, in some cases urban city centers, in some cases more historic places or more in nature, ideally someplace close to an international airport that's easy to fly into. And for a fraction — I mean, offices are so expensive — for a fraction of the price of maintaining an office, we can actually fly everyone to some pretty interesting place once a quarter. So for a week we have a really intense period where we're all together in the same physical space, working together, and we're also building the human bonds, that casual conversation. We tend to use that time for a lot of design sketching and informal hackathons, and also some bigger-picture discussions — let's talk about some of the longer-term things, lift our gaze a little bit — and that helps a lot. Again, it is demonstrably not as good as being co-located all the time, but it gets you, I don't know, 30 to 40 percent of the way there for a fraction of the cost. Over the longer term, I don't know how that would stack up against a co-located team, but it's been getting pretty good reviews so far.

Where to find out more

Ben: I see that we're coming up on time, and I want to be very respectful of your time. I'm going to make sure people know about the website and your Twitter. Is there anywhere else online people should go to learn more about Ink & Switch, about you, and what you're working on? Adam: You know, the website and the Twitter are basically what we've got right now.
We've been really quiet in the beginning here — not because we don't want to share; I'm a big believer in that science approach of open access, of sharing what you've learned so that humanity can build on each other's learnings. That said, it's a lot of work to package up your ideas, especially when they're weird and fringy like ours are, in a way that's consumable by the outside world. So we're trying to do a lot more of that right now, and I think you're starting to see a little bit of it on our Twitter account, where we're publishing some of our back catalogue of internal memos and sketches — again, very niche-y things; you've got to be really into whatever the particular topic is to find interest in our internal memo on it — as well as taking more time to put together demo videos and longer articles that try to capture some of the things we've learned, some of the philosophies we have, some of the technologies we're excited about. So yeah, those are the spots.

Thinking About Extending The Model

Ben: So freaking cool. The thing that I'm doing is putting together these ideas and trying to make a more generic description of what you're doing, so I can say: what would this look like if it goes into biology? What would this look like for nanotech? Could you do the distributed team using university resources — could you partner with a whole bunch of universities, have people in different places, and they just go in and use the lab when they need to? I don't know — that's one action item based on learning about this: yeah, I think it could work. Adam: That sounds great. Well, if you figure something out, I'd love to hear about it. Ben: I will absolutely keep you in the loop. Awesome. Cool. Well, I really appreciate this.
I'm just super excited about these new models, and I think you're really onto something. So I really appreciate you bringing me in and going into the nitty-gritty. Adam: Well, thanks very much. Like I said, it's still an experiment — we'll see. But I feel like there are more innovation models out there than just the startup, the corporate R&D lab, and academia. And if you believe, like I do, that technology has the potential to be an enhancement for humanity, then finding new ways to innovate — on new types of problems, new shapes of problems — potentially has a pretty high-leverage impact on the world.
Dec 8, 2018 • 58min

Changing How We Do Science with Brian Nosek [Idea Machines #3]

My guest this week is Brian Nosek, co-founder and Executive Director of the Center for Open Science. Brian is also a professor in the Department of Psychology at the University of Virginia, doing research on the gap between values and practices — such as when behavior is influenced by factors other than one's intentions and goals. The topic of this conversation is how incentives in academia lead to problems with how we do science, how we can fix those problems, the Center for Open Science, and how to bring about systemic change in general. Show Notes Brian’s Website Brian on Twitter (@BrianNosek) Center for Open Science The Replication Crisis Preregistration Article in Nature about preregistration results The Scientific Method If you want more, check out Brian on EconTalk Transcript Intro [00:00:00] In this podcast I talk to Brian Nosek about innovating on the very beginning of the innovation pipeline: research. I met Brian at the Dartmouth 60th anniversary conference and loved his enthusiasm for changing the way we do science. Here's his official biography: Brian Nosek is a co-founder and the executive director of the Center for Open Science. COS is a nonprofit dedicated to enabling open and reproducible research practices worldwide. Brian is also a professor in the department of psychology at the University of Virginia. He received his PhD from Yale University in 2002. In 2015 he was on Nature's 10 list and the Chronicle of Higher Education Influence list. Some quick context about Brian's work and the Center for Open Science: there's a general consensus in academic circles that there are glaring problems in how we do research today. The way research works is generally like this: researchers, usually based at a university, do experiments; then, when they have a [00:01:00] result, they write it up in a paper; that paper goes through the peer-review process, and then a journal publishes it.
The number of journal papers you've published and their popularity make or break your career: they're the primary consideration for getting a position, receiving tenure, getting grants, and prestige in general. That system evolved in the 19th century, when many fewer people did research and grants didn't even exist; we get into how things have changed in the podcast. You may also have heard of what's known as the replication crisis. This is the fairly alarming name for a recent phenomenon in which people have tried and failed to replicate many well-known studies. For example, you may have heard that power posing will make you act bolder, or that self-control is a limited resource — both of the studies that originated those ideas failed to replicate. Since replicating findings is a core part of the scientific method, unreplicated results becoming part of canon is a big deal. Brian has been heavily involved in the [00:02:00] crisis, and several of the Center for Open Science's initiatives target replication. So with that, I invite you to join my conversation with Brian Nosek.

How does open science accelerate innovation and what got you excited about it?

Ben: So the theme that I'm really interested in is: how do we accelerate innovations? Just to start off, I'd love to ask you a really broad question: in your mind, how does having a more open science framework help us accelerate innovations? And, parallel to that, what got you excited about it in the first place? Brian: Yeah, so this is really the core of why we started the Center for Open Science: to figure out how we can maximize the progress of science, given that we see a number of different barriers to — or friction points in — the pace and progress of [00:03:00] science. And so there are a few things, I think, in how
openness accelerates innovation, and I guess you can think of it as multiple stages. At the opening stage, openness in terms of planning — preregistering what your study is about, why you're doing the study, that the study exists in the first place — is a mechanism for improving innovation by increasing the credibility of the outputs. Particularly in making a clear distinction between the things that we planned in advance — where we're testing hypotheses, ideas that we have, and acquiring data in order to test those ideas — and the exploratory results, the things that we learn once we've observed the data. We get insights from those, but they are necessarily more uncertain. Having a clear distinction between those two practices is a mechanism for [00:04:00] knowing the credibility of the results, and then more confidently applying results that one observes in the literature when doing next steps. And the reason that's really important, I think, is that we have so many incentives in the research pipeline to dress up exploratory findings — which are exciting and sexy and interesting, but uncertain — as if they were hypothesis-driven, right? We apply p-values to them; we apply a story up front to them; we present them as results that are highly credible from a confirmatory framework. And that has made it really hard for innovation to happen. So I'll pause there, because there's lots more. Ben: Yeah, let's touch on that.

What has changed to make the problem worse?

Ben: There's a lot right there. You mentioned the incentives to basically make things that aren't really following the scientific method look like [00:05:00] they're following the scientific method, and one of the things I'm always really interested in is what has changed in the incentives — because I think there's definitely this notion that this problem has gotten worse over time.
And so that means something has changed. In your mind, what changed to pull science away from that idealized picture — you have your hypothesis, you test that hypothesis, and then you create a new hypothesis — toward this system you're pushing back against? Brian: You know, it's a good question. Let me start by making the case for why we could say nothing has changed, and then what might lead to thinking something has changed. [00:06:00] The potential reason to think nothing has changed is that the kinds of results that are most rewarded have always been the kinds of results that are most rewarded. If I find a novel finding, rather than repeating something someone else has done, I'm more likely to be rewarded with publication, et cetera. If I find a positive result, I'm more likely to gain recognition than for a negative result — "nothing's there" versus "this treatment is effective": which one's more interesting? We know which one's more interesting. And then a clean and tidy story — it all fits together, it works, and now I have a new explanation for a new phenomenon that everyone can take seriously. So novel, positive, clean-and-tidy stories are what get rewarded in science, and that's because they break new ground, offer a new idea, a new way of thinking about the world. And that's great — we want those; we've always wanted those things. So the reason to think this is a perennial challenge is: [00:07:00] who doesn't want that, and who hasn't wanted it? "It turns out my whole career is a bunch of nulls, nothing fits together, it's just a big mess" is not a way to pitch a successful career. So that challenge is always there, and what pre-registration — committing in advance — does is give us the constraints to be honest about which parts are actual credible confirmations of pre-existing hypotheses, and which are exploring and unpacking whatever we can find. So that part of the incentive landscape, I don't think, has changed. What has changed? Well, there are a couple of things we can point to as potential reasons to think the problem has gotten worse. One is that data acquisition in many fields is a lot easier than it ever was, [00:08:00] and with access to more data and more efficient ways to analyze it — we have computers instead of slide rules — we can do a lot more adventuring in data. We have more opportunity to explore, to exploit the noise and transform it into seeming signal. The second is that the competitive landscape is stronger: the ratio of people who want jobs to jobs available is getting larger and larger, and the same goes for competition for grants. That competition can very easily amplify these challenges — people who are more willing to exploit researcher degrees of freedom will more easily get the kinds of results that are rewarded in the system, and that would amplify their presence among the people who survive the competitive filter. [00:09:00] So I think it's a reasonable hypothesis that it's gotten worse. I don't think there's definitive evidence, but those are the theoretical points I would point to. Ben: That makes a lot of sense. Jumping back — you had a couple of points, and we've just touched on the first one.   Point Number Two about Accelerating Innovation   So I want to give you the chance to go back and keep going through them. Brian: Right, yeah. So accelerating innovation is the idea, right?
So that's pre-registration: it accelerates innovation by clarifying the credibility of claims as they are produced. If we do that better, I think we'll be much more efficient — we'll have a better understanding of the evidence base as it comes out. The second phase is the openness of the data and materials for the purpose of verifying those [00:10:00] initial claims. I do a study, I pre-registered it, it's all great, and I share it with you. You read it and say: well, that sounds great, but did you actually get that? And what would have happened if you'd made different decisions here, here, and there? Because I don't quite agree with the decisions you made in your analysis pipeline, and I see some gaps. Being able to access the materials I produced and the data that came from the study means that you can, first, simply verify that you can reproduce the findings I reported — that I didn't just screw up the analysis script or something. That's a useful minimum standard. But beyond that, you can test the robustness in ways that I didn't. I came to the question with one approach; you might look at it and say, "well, I would do it differently," and the ability to reassess the data for the same question is very useful for probing robustness, particularly in areas with [00:11:00] complex analytic pipelines where there are many choices to make. So that's the second part. The third part is reuse. Not only should we be able to verify and test the robustness of claims as they happen, but data can be used for lots of different purposes — sometimes purposes not at all anticipated by the data originator. We can accelerate innovation by making it much easier to aggregate evidence for claims across multiple studies, by having the data be more accessible, and also by making that data usable for studying things that no one ever anticipated investigating. The efficiency gain from making the most of data that already exists, rather than redundantly redoing the data collection, is a massive [00:12:00] opportunity, because there is a lot of data and a lot of work that goes into it — why not make the most use of it?   What is enabled by open science?   Ben: Yeah, that makes a lot of sense. Do you have any good keystone examples of these things in action — places where, because people could replicate the study, go back to the pipeline, or reuse the data, something was enabled that wouldn't have been possible otherwise? Brian: Yeah, let's see. I'll give a couple of local and personal examples, just to illustrate some of the points. We had a super fun project that illustrates the second part of the pipeline — the robustness phase — where [00:13:00] people may make different choices, and those choices may have implications for the reliability of the results. In this project we acquired a very rich dataset of lots of players and referees and outcomes in soccer, and then we recruited different analysis teams — 29 in the end — with varied expertise in statistics and data analysis, and had them all investigate the same research question: are players with darker skin tone more likely to get a red card than players with lighter skin tone? That's a question of real interest that people have studied, and we provided this dataset: here's a dataset you can use to analyze it. The teams worked on their own and developed analysis strategies for how they were going to test that hypothesis. They submitted their analyses and their results to us; we removed the results and [00:14:00] then shared the analysis strategies among the teams for peer review — different people looking at the different choices that had been made. They reviewed each other, then went back; they didn't know what the others had found, but they took those peer reviews, updated their analyses if they wanted to, and submitted their final analyses. What we observed was huge variation in analysis choices, and variation in the results. As a simple criterion for illustrating the variation in results: two-thirds of the teams found a significant effect — p less than 0.05, the standard for deciding whether you see something in the data — and a third of the teams found a null. Of course they debated among themselves which analysis strategy was the right one, but in the end it was very clear to the teams that there were lots of reasonable choices that could be made, [00:15:00] and those reasonable choices had implications for the results observed from the same data. In the standard process, we don't see this — it's not easy to observe how analytic choices influence the results. We see a paper, it has an outcome, and we say those are the outcomes the data revealed. What's actually the case is that those are the outcomes the data revealed contingent on all the choices the researcher made. So it was a nice illustration: it helps reveal the robustness of that particular finding given the many reasonable choices one could make, where if we had seen just one of them, we would have had a totally different interpretation — either it's there or it's not there.   How do you encode context for experiments esp. with People?   
Ben: Yeah. In terms of the data and [00:16:00] really exposing the study more — something I've seen, especially in these fields, is that context really matters, and people very often say there's a lot of context going on in addition to the procedure that's reported. Do you have any thoughts on better ways of encoding and recording that context, especially for experiments that involve people? Brian: Yeah. This is a big challenge, because we presume, particularly in the social and life sciences, that there are many interactions between the different variables — the climate, the temperature, the time of day, the circadian rhythms, the personalities, whatever the different elements of the subjects of the study are, whether they be plants or people or otherwise. [00:17:00] There are a couple of different challenges here to unpack. One is that in our papers we state claims at the maximal level of generality we possibly can, and that's just a normal pattern of human communication and reasoning. I do my study in my lab at the University of Virginia, on University of Virginia undergraduates. I don't conclude "in University of Virginia undergraduates, on this particular date, in this particular time period, in this particular class." That's what people do — with the recognition that it might be wrong, that there might be boundary conditions, but not often with any articulation of where we think, theoretically, those boundary conditions could be. So one step: some colleagues in psychology wrote this great paper about "constraints on [00:18:00] generality." They suggest that what we need in the discussion sections of all papers is a section that says: when won't this hold? Just state what you know about where this is not going to hold. Giving people an occasion to think about that for a second — "oh, okay, actually we do think this is limited to people who live in Virginia, for these reasons," or "no, maybe we don't really think this applies to everybody, but now we have to say so" — that alone, I think, would make a huge difference, just because it would provide the occasion for us, as the originators of the findings, to state the constraints ourselves. A second factor, of course, is sharing as much of the materials as possible. But often that doesn't convey all the context, particularly for more complex experimental studies or where there are particular procedural factors — in a lot of the biomedical sciences there's [00:19:00] a lot of nuance in how a particular reagent needs to be handled, how the intervention needs to be administered, et cetera. So I like the move toward video of procedures. There's a journal — the Journal of Visualized Experiments, JoVE — that gives people the opportunity to show the actual experimental protocol as it is administered, and a lot of people using the OSF put videos up of the experiment as they administered it, to maximize your ability to see how it was actually done. Those steps can really help maximize the transparency of the things that are hard to put into words or aren't digitally encoded well. And those are real gaps.   What is the ultimate version of open science?   Ben: Got it. And so, in your mind, what is the endgame of all this? What [00:20:00] would be the ideal, best-case scenario for science? How would it be conducted?
Say you get to control the world and you get to tell everybody practicing science exactly what to do — what would that look like? Brian: Well, if I really had control, we would all just work on Wikipedia. We would be revising one big paper, with new evidence incorporated continuously, and we'd get all of our credit by logging how many of the words we changed survived after others made their revisions, and whether the words we changed are on pages more important to the overall scientific record or on the less important spandrels. We would output one paper that is the summary of knowledge, which is what Wikipedia does. All right — maybe that's going a bit beyond [00:21:00] what we can consider within the realm of the conceptually possible. If we imagine something nearer term, what I would love to see is the ability to trace the history of any research project, and that seems more achievable. In fact, my laboratory is getting close to this: every study we do is registered on the OSF; once we finish the studies we post the materials and the data — or as we go, if we're managing the materials and data there — and then we attach a paper, if we write one at the end, as a preprint or the final report, so that people can discover it. And all of those things are linked together. It would be really cool if I had [00:22:00] those data in a standardized framework for how they're coded, so they could be automatically and easily integrated with other similar kinds of data — so that someone going onto the system could say, "show me all the studies that ever investigated the association between this variable and that variable, and tell me what the aggregate result is." Real-time meta-analysis of the entire database of all the data that has ever been collected. That kind of flexibility would, I think, very rapidly not just spur innovations and new things, but help point out where there are gaps — particular kinds of relationships, particular effects of interventions where we know a ton, and then we have this big assumption in our theoretical framework about how we get from X to Y, and as we look for variables that help us identify whether X gets us to Y, we find there just isn't anything there: the literature has not filled that gap. So I think there are huge benefits to that [00:23:00] kind of aggregability. But mostly, instead of saying you have to do research in any particular way, the only requirement I'd impose is: you have to show us how you did your research, in your particular way, so that the marketplace of ideas can operate as efficiently as possible. And that really is the key thing. It's not about preventing bad ideas from getting into the system; it's not about making sure only the best things get through immediately; it's not about gatekeepers. It's about efficiency in how we comb through the literature to figure out which things are credible and which are not — because it's really useful to get ideas into the system, as long as they can be self-corrected efficiently as well. And that's where I think we are not doing well in the current system. We're doing great on generation — [00:24:00] we generate all kinds of innovative ideas — but where we're failing is parsing through those ideas as efficiently as we could, to decide which ones are worth actually investing more resources in.   Talmud for Science   Ben: That makes a lot of sense. I've definitely come across many papers on the internet — you go to Google Scholar, you search, you find a paper — and in fact it has been refuted by another paper, and there's no way to know that.
Does the Open Science Framework address that in any way? Brian: No, it doesn't yet, and this is a critical issue: the connectivity between findings and the updating of knowledge. Like I said, it does in an indirect way, but not in the systematic way that would actually solve this problem. The [00:25:00] main challenge is that we treat papers as static entities, when what they're summarizing is happening very dynamically. It may be that a year after the paper comes out, one realizes: we should have analyzed that data totally differently — we actually analyzed it wrong; the way we analyzed it is indefensible. There are very few mechanisms for efficiently updating that paper in a way that would actually update the knowledge, even when we all agree it was analyzed the wrong way. What are my options? I could retract the paper, so it's no longer in existence at all — supposedly, although even retracted papers still get cited, which is nuts. So that's a baseline problem. Or I could write a correction, which is another paper commenting on the original — one that may not itself even be discoverable alongside the original paper it corrects — and that takes months or years. [00:26:00] So really, what I think is fundamental for addressing this challenge is integrating version control with scholarly publishing, so that papers are seen as dynamic objects, not static objects. Here's another milestone, if I could control everything: a researcher could have a very productive career working on only a single paper for his or her whole life. They have a really interesting idea, and they just continue to investigate it, build the evidence, challenge it, continue to unpack it, and revise that one paper over time: "this is what we understand now; this is where it stands now; this is what we've learned; here are some exceptions." They just keep fine-tuning it, and you get to see the versions of that paper over its [00:27:00] 50-year history as the phenomenon got unpacked. That, plus integration with the rest of the literature, would make things much more efficient for exactly the problem you raised: with papers, we don't know what the current knowledge base is. We have no good way to find out, except these attempts to summarize the existing literature with yet another paper — which doesn't supersede the old papers; it's just another paper. It's a very inefficient system.   Can Social Sciences 'advance' in the same way as the physical sciences?   Ben: Ya know, that totally makes sense. Actually, I have a sort of meta question that I've argued with several people about, which is: do you feel like we can make advances in our understanding of [00:28:00] human-centered science the same way we can in chemistry or physics? We very clearly have building blocks of physics, and the field builds on itself. I've had debates with people about whether you can do that in the humanities and the social sciences. What are your thoughts? Brian: Yeah, it's an interesting question, and what seems to be the biggest barrier is not anything about methodology in particular, but complexity. The problem is that many different inputs can cause similar kinds of outcomes; single inputs can influence multivariate outcomes; and all of those inputs, as causal elements, may have interactive effects on the [00:29:00] outcomes. So how can we possibly develop rich enough theories to effectively predict, and then ultimately explain, the actions of humans in complex environments?
It doesn't seem we will get to the kind of beautiful equations that underlie a lot of physics and chemistry and account for a substantial amount of the evidence. The thing I don't feel I have any good handle on is whether that's a theoretical limit or a practical one. Is it just not possible, because it's so complex and that predictability isn't there? Or is it just really damn hard — if we had big enough computers, enough data, and rich enough models, we would be able to predict it? Like Asimov's psychohistorians, right? [00:30:00] In the Foundation series they could account for 99.9 percent of the variance in what people would do next — and of course, even there it went wrong, and that was sort of the basis of the whole storyline. I just don't know. I don't yet have a framework for thinking about how I could answer whether it's a practical or a theoretical limit. What do you think? Ben: What do I think? I usually come down on the side that it's a practical limit — though how much it would take to get there might make it effectively a theoretical limit. There's nothing actually preventing us: if you could, theoretically, measure everything, why not? I [00:31:00] think it's really a measurement problem, and we do get better at measuring things. That's where I come down, but that's purely a hunch — I have no good argument.   How do you shift incentives in science?   Going back to the incentives: I'm completely convinced that these changes would accelerate the number of innovations we have, and it seems like a lot of these changes require shifting scientists' incentives. That's a notoriously hard thing. So: how are you going about shifting those incentives right now, and how might they be shifted in the future? [00:32:00] Brian: Yeah, that's a great question, and it's what we spend a lot of our time worrying about. In my experience there is very little disagreement on the problems, on the opportunities for improving the pace of discovery and innovation, or even on the solutions — it really is about implementation. How do you change those cultural incentives so that we align the values we have for science with the practices researchers carry out on a daily basis? That's a social problem. There are technical supports, but ultimately it's a social problem. So our near-term approach is to recognize the systems of rewards as they are, and ask how we could refine them to align with these improved practices. We're not pitching "let's all work on [00:33:00] Wikipedia," because that is so distant from what the systems reward — from scientists actually surviving and thriving in science — that we wouldn't get any pragmatic traction. So I'll give one example that integrates with current incentives but changes them in a fundamental way, and that is the publishing model of registered reports. In the standard process, I do my research, I write up my studies, and then I submit them for peer review at the most prestigious journal I can, hoping the reviewers won't see all the flaws and will accept it. If they don't, I move down to the next journal, and the next, and eventually it gets accepted somewhere. The registered report model makes one change to that process, and that is to move
the critical point of peer review [00:34:00] from after the results are known — when I've written up the report and I'm all done with the research — to after I've figured out what question I want to investigate and what methodology I'm going to use. I haven't observed the outcomes yet; all I've done is frame the question, articulate why it's important, and specify the methodology I'm going to use to test it. That's what the peer reviewers evaluate. And the key part is that it fits into the existing system perfectly: the currency of advancement is publication — I need as many publications as I can get, in the most prestigious outlets I can, to advance my career. We don't try to change that. Instead, we change the basis for making a decision about publication. Moving the primary stage of peer review to before the results are known fundamentally changes what I'm being rewarded for as the author. [00:35:00] What I'm rewarded for as the author in the current system is sexy results — the best, most interesting, most innovative results I can get. The irony is that the results are the one thing I'm not supposed to be able to control in a study. What I am supposed to be able to control is asking interesting questions and developing good methodologies to test them. Of course that's oversimplifying a bit — the presumption behind emphasizing results is that my brilliant insights at the outset of the project are the reason I was able to get those great results, but that depends on the credibility of the entire pipeline, so put that aside. Moving peer review to the design stage means that my incentive as an author is to ask the most important questions I can, and to develop the most compelling, effective, and valid methodologies I can to test them.
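(The results-versus-design distinction has a measurable consequence. A toy simulation — the true effect size, standard error, and study count here are invented numbers, not from any study Brian mentions — shows what happens when publication is conditioned on significant results versus decided at the design stage:)

```python
import random
from statistics import mean

random.seed(1)
TRUE_EFFECT = 0.2   # assumed true effect size (hypothetical)
SE = 0.15           # per-study standard error (hypothetical)

# 50,000 studies, each estimating the same true effect with sampling noise
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(50_000)]

# Results-based journal: accept only studies whose estimate is "significant"
published_by_results = [e for e in estimates if e / SE > 1.96]
# Registered-report journal: acceptance decided before results exist,
# so every accepted study appears regardless of outcome
published_by_design = estimates

print(f"mean published effect, filtered on results: {mean(published_by_results):.2f}")
print(f"mean published effect, filtered on design:  {mean(published_by_design):.2f}")
```

Filtering on significance inflates the average published estimate well above the true 0.2, while the design-stage decision leaves it unbiased — which is the statistical core of the registered-report argument.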
[00:36:00] Yeah, and so that changes things toward what it is, presumably, we're supposed to be rewarded for in science. There are a couple of other elements of the incentive landscape it has an impact on that are important for the whole process. For reviewers: when I'm asked to review a paper in my area of research once all the results are there, I have skin in the game as a reviewer. I'm an expert in that area; I may have made claims about things in that area. If the paper challenges my claims, I'm sure to find all kinds of problems with the methodology — "I can't believe they did this, this is ridiculous, they didn't cite my paper, that's the biggest problem" — challenge my results, and, well, forget about it. Whereas if it's aligned with [00:37:00] my findings and cites me gratuitously, then I'll find lots of reasons to like the paper. So I have these twisted incentives to reinforce findings and behave ideologically as a reviewer in the existing system. Moving peer review to the design stage fundamentally changes my incentives there too. Say I'm in a very contentious area of research and there are opponents on a particular claim. When we're dealing with results, you can predict the outcome: people behave ideologically even when they're not trying to. When you don't know the results, both people have the same interests. If I truly believe in the phenomenon I'm studying, and the opponents of my point of view also believe in their perspective, then both of us want to review that study, that design, that methodology so as to maximize its quality — to reveal the truth, which each of us thinks we [00:38:00] have. And so that alignment actually makes adversaries.
To some extent, allies: it makes the reviewer and the author more collaborative. The feedback I give on that paper can actually help the methodology get better, whereas in the standard process, when I say "here are all the things you did wrong," all the author can say is, "well, geez, you're a jerk — I can't do anything about that. I've already done the research, so I can't fix it." Shifting it earlier is much more collaborative, and helps with that. Then the other question is the incentives for the journal. Journal [00:39:00] editors have strong incentives of their own: they want readership, they want impact, and they don't want to be the one that destroyed their journal. The incentives in the existing model are to publish sexy results, because more people will read those results, cite those results, and bring attention to the journal. Shifting the decision to the quality of the design shifts their priorities to publishing the most rigorous, most robust research, and to being valued on that basis. So I'll pause there — there are lots of other things to say, but those, I think, are some critical changes to the incentive landscape that still fit into the existing way research is done and communicated.   Don't people want to read sexy results?   Ben: Yeah. I have a bunch of questions. Just to poke at that last point a little: wouldn't people still read the journals that publish the sexiest results, regardless of what stage they do their peer review at? Brian: Yeah. This is a key concern of editors thinking about adopting registered reports.
[00:40:00] So we have about a hundred and twenty-five journals offering this now, and we continue to pitch it to other groups. But one of the big concerns editors have is: "If I do this, I'll end up publishing a bunch of null results, no one will read my journal, no one will cite it, and I'll be the one that ruined my own journal." It's a reasonable concern, given the way the system works now. There are a couple of answers to it. The first is empirical: is it actually the case that these are less read or less cited than regular articles published in those journals? We have a grant from the McDonald Foundation to study registered reports, and the first study we finished is a comparison of articles that were done as registered reports against [00:41:00] articles published in the same journals that were done the regular way, looking at altmetrics attention — coverage in media, news, and social media — and at citation impact, at least early-stage citation impact, because the model is new enough that it has only been producing publications since 2014. What we found, at least in this initial dataset, is that there's no difference in citation rates, and if anything the registered report articles have gotten more altmetric impact — social media, news media. So at least the initial data suggest the worry isn't borne out, though who knows whether that will sustain or generalize. The conceptual argument I would make is that if studies have been vetted [00:42:00] without knowing the results, these are important results to know. That's what the editors and the reviewers have to decide: do we need to know the outcome of this study? If the answer is yes — this is an important enough question that we need to know what happened — then any result is informative.
That's the whole idea: we're doing the study to find out what the world says about that particular hypothesis, that particular question. So it becomes citable either way. Whereas when we're only evaluating based on the results - well, things that surprise people: that's crazy, but it happened, okay, that's exciting. But if you have a paper where it's that's-crazy and nothing happened, then people say, well, that was a crazy paper. And that paper would be less likely to get through the registered report kind of model.

Ben: That makes a lot of sense. You could even see a world where, because they're being pre-registered, especially for the press, people can know to pay attention to it. [00:43:00] So you can actually almost generate a little bit more hype: we're going to do this thing, isn't that exciting?

Brian: Yeah, exactly. So we have a reproducibility project in cancer biology that we're wrapping up now, where we sample a set of studies and then try to replicate findings from those papers, to see where we can reproduce findings and where there are barriers to reproducing existing research. All of these went through the journal eLife as registered reports, so that we got peer review from experts in advance to maximize the quality of the designs, and instead of just registering them on OSF - which they are - they also published the registered reports as articles of their own. Those did generate lots of interest: what's going to happen with this? And that, I think, is a very effective way to engage the community on the process of actual discovery. We don't know the answer to these [00:44:00] things. Can we build in a community-based process that isn't just about "let me tell you about the great thing that I just found," and more about "let me bring you into our process"?
How are we actually investigating this problem? Getting more of that community engagement, feedback, understanding, and insight all along the life cycle of the research, rather than just at the end point, which I think is much less efficient than it could be.

Open Science in Competitive Fields and Scooping

Ben: Yeah. And on the note of pre-registering, have you seen how it plays out in extremely competitive fields? One of the worlds that I'm closest to is deep learning and machine learning research, and I have friends who keep what they're doing very, very secret, because they're always worried about getting scooped - worried about someone basically doing the thing first. And I could see people being hesitant to write down, to [00:45:00] publicize, what they're going to do, because then someone else could do it. So how do you see that playing out, if at all?

Brian: Yeah, scooping is a real concern in the sense that people have it, and I think it is also a highly inflated concern relative to the reality of what happens in practice. But nevertheless, because people have the concern, systems have to be built to address it. So, one simple answer on addressing the concern, and then reasons to be skeptical of it. On addressing the concern: with the OSF you can pre-register and embargo your pre-registrations for up to four years. What that does is it still gets all the benefits of registering - committing, putting that into an external repository, so you have independent verification of time and date and what you said you were going to do - but then gives you as the researcher the flexibility to [00:46:00] say, I need this to remain private for some period of time, for whatever reason: I don't want the research participants that I have engaged in this project to discover what the design is, or I don't want competitors to discover what the design is. So that is a pragmatic solution that sort of addresses it.
Okay, you've got that concern; let's meet that concern with technology to help manage the current landscape. Then there are a couple of reasons to be skeptical that the concern is actually much of a real concern in practice. One example comes from preprints. A preprint is sharing the paper you have on some area of research prior to going through peer review and being published in a journal, and in some domains, like physics, it is standard practice: arXiv, which is housed at Cornell, is the standard way for [00:47:00] anybody in physics to share their research prior to publication. In other fields it's very new, or unknown but emerging. And the exact same concern about scooping comes up regularly, where they say: there are so many people in our field; if I share a preprint, someone else with a productive lab is going to see my paper, they're going to run the studies really fast, they're going to submit it to a journal that will publish it quickly, and then I'll lose my publication because it'll come out in this other one. That's a commonly articulated concern. I think there are very good reasons to be skeptical of it in practice, and the experience of arXiv is a good example. It's been operating since 1991; physicists early in its life articulated similar kinds of concerns, and none of them have that concern now. Why is it that they don't have that concern now? Well, the norms have shifted: the way you establish priority [00:48:00] is not when it's published in the journal, it's when you get it onto arXiv. Right? So a new practice becomes standard. It's when the community knows about what it is you did - that's how you get that first-finder accolade, and that still carries through to things like publication. A second reason is that we all have a very inflated sense of self-importance: our ideas are great, right?
There's an old saw in venture capital: take your best idea and try to give it to your competitor, and most of the time you can't. We think our own ideas are really amazing, and everyone else doesn't. So the idea that there are people licking their chops, waiting for your paper or your registration to show up so they can steal your [00:49:00] idea and then use it and claim it as their own - it shows high self-esteem, and that's great, I am all for high self-esteem. And then the last part is that it is a norm violation to that strong a degree - to do the stealing and not credit someone else for their work - but it's actually very addressable in the daily practice of how science operates. If you can show that you put that registration or that paper up on an independent service, and that it appeared prior to the other person doing it, and then that other group did try to steal it and claim it as their own - well, that's misconduct. If they don't credit you as the originator, that's a norm violation in how science operates, and I'm actually pretty confident in the process of dealing with norm [00:50:00] violations in the scientific community. I've had my own experience with this. I think it very rarely happens, but I have had an experience with it. I've posted papers on my website since before there were preprint services in the behavioral sciences, since I've been a faculty member. And I got a Google Scholar alert one day - I have these alerts set up for things that are related to my work - and a paper showed up, and I was like, oh, that sounds related to some things I've been working on. So I clicked on the link to the paper and went to the website. I'm reading the paper, from these authors I didn't recognize, and then I realized: wait, that's my paper.
Wait a second - I'm an author, and I didn't submit it to that journal. And it was my paper. They had taken a paper off of my website and changed the abstract: they'd run it through Google Translate, so it looked like it was all gobbledygook, but it was an abstract. The rest of it was [00:51:00] essentially a carbon copy of our paper, and they published it. So what did I do? I contacted the editor - there's actually a story on Retraction Watch about someone stealing my paper - and it got retracted. And as far as we heard, the person that had done it lost their job. I don't know if that's true; I never followed up. But the basic point is that there are systems in place to deal with the egregious forms of this. So I am sanguine about those not being real issues, but I also recognize they are real concerns, and so we have to have our technology solutions address the concerns as they exist today. I think those concerns will just disappear as people gain experience.

Top down v Bottom up for driving change

Ben: Got it. I like that distinction between issues and concerns - that they may not be the same thing. So, I've been paying attention to sort of the tactics that you're [00:52:00] taking to drive this adoption. There are some bottom-up things, in terms of changing the culture and getting one journal at a time to change just by convincing them, and there have also been some top-down approaches that you've been using. I was wondering if you could go through those, and what you feel is the most effective - or what combination of things is the most effective - for really driving this change?

Brian: Yeah, it's a good question, because culture change is hard, especially with a decentralized system like science, where there is no boss and the different incentive drivers are highly distributed.
Each researcher has a unique set of societies that are relevant to establishing their norms, funders that fund their work, a unique set of journals that they publish in, and their own institution. And so every researcher [00:53:00] has a unique combination of those, which all play a role in shaping the incentives for his or her behavior. So fundamental change - if we're talking just at the level of incentives, not even at the level of values and goals - requires a massive shift across all of those different sectors: not massive in terms of the amount of things they need to shift, but in the number of groups that need to make decisions. And so we need both top-down and bottom-up efforts to address that. The top-down ones that we work on, at least, are largely focused on the major stakeholders: funders, institutions, and societies, particularly ones that are publishing - so journals, whether through publishers or societies. Can we get them, as with the TOP Guidelines - a framework that has been established to promote transparency standards - to [00:54:00] ask: what could we require of authors, or grantees, or employees of our organizations? Those, as a common framework, provide a mechanism to try to convince these different stakeholders to adopt new standards and new policies, which everybody associated with them then has to follow, or is incentivized to follow. At the same time, those kinds of interventions don't necessarily win hearts and minds, and a lot of the real work in culture change is getting people to internalize that what it means to do good science is rigorous work. That requires a very bottom-up, community-based approach to how norms get established within what are effectively very siloed, very small-world scientific communities that are part of the larger research community.
And so with that we do a lot [00:55:00] of outreach to groups, starting with the idealists: people who already want to do these practices, who are already practicing rigorous research. How can we give them resources and support to work on shifting the norms in their small-world communities? So, through things like the preprint services that we host, or other services that allow groups to form, they can organize around a technology - there's a preprint service that a community runs - and then drive the change from the basis of that particular technology solution in a bottom-up way. And the great part is that, to the extent that both of these are effective, they become self-reinforcing. A lot of the stakeholder leaders - the editor of a journal, say - will say that they are reluctant: they agree with all the things that we're trying to pitch to them as ways to improve rigor in [00:56:00] research practices, but they don't have the support of their community yet. They need to have people on board with this. Well, the bottom-up work provides that backing for the leader to make a change. And likewise, leaders that are more assertive, that are willing to take some chances, can help to drive attention and awareness in a way that helps the fledgling bottom-up communities gain better standing and more impact. So we really think that the combination of the two is essential to get at true culture change, rather than bureaucratic adoption of a process that someone told me I now have to do - which could be totally counterproductive to scientific efficiency and innovation, as you described.

Ben: Yeah, that seems like a really great place to end. I know you have to get running. [00:57:00] This has been amazing - thank you so much.

Brian: Yeah, my pleasure.
Dec 7, 2018 • 1h 20min

Venture Capital Meets Fusion Power with Malcolm Handley [Idea Machines #2]

My guest this week is Malcolm Handley, General Partner and Founder of Strong Atomics. The topic of this conversation is fusion power - how it's funded now, why we don't have it yet, and how he's working on making it a reality. We touch on funding long-term bets in general, incentives inside of venture capital, and more. Show Notes Strong Atomics Malcolm on Twitter (@malcolmredheron) Fusion Never Plot Fusion Z-Pinch Experiment. ARPA-E ALPHA Program ITER - International Thermonuclear Experimental Reactor. NIF - National Ignition Facility. ARPA-E Office of Fusion Energy Science. Sustainable Energy without the Hot Air. Transcript [00:00:00] In this podcast I talk to Malcolm Handley about fusion, funding long-term bets, incentives inside of venture capital, and more. Malcolm is the managing partner of Strong Atomics. Strong Atomics is a venture capital firm that exists solely to invest in a portfolio of fusion projects that have been selected based on their potential to create net-positive energy and lead to plausible reactors. Before starting Strong Atomics, Malcolm was the first employee at the software company Asana. I love talking to Malcolm because he's somewhat of a fanatic about making fusion energy a reality, but at the same time he remains an intense pragmatist - in some ways he's even more pragmatic than I am. He thinks deeply about everything he does, so we go very deep on some topics. I hope you enjoy the conversation as much as I did.

Intro

Ben: Malcolm, would you introduce yourself?

Malcolm: Sure. I'm Malcolm Handley. I founded Strong [00:01:00] Atomics after 17 years as a software engineer because I was looking for the most important thing that I could work on, and concluded that that was climate change - that was before democracy fell off the rails, so it was the obvious most important thing. My thesis is that climate change is a real problem, and the
typical ways that we are addressing it are insufficient. For example, even if you ignore the climate deniers, most people seem to be of the opinion that we're on track - that renewables and storage for renewable energy are going to save the day - and my fear, as I looked into this more deeply, is that this is not sufficient: that we are in fact not on track, and that we need to be looking at more possible ways of responding to [00:02:00] climate change. So I found an area, nuclear fusion, that has the potential to help us solve climate change and that in my opinion is underinvested. I started Strong Atomics to invest in those companies and to support them in other ways. And that's what I'm doing these days.

What did founding strong atomics entail?

Ben: Can you dig a little bit more into what founding Strong Atomics entails? You can't just snap your fingers and bring it into being.

Malcolm: I almost did, because I was extremely lucky. But in general, Silicon Valley has a pretty well-worn model for how people start startups, and I think even people getting out of college actually know a surprising amount about how to start a company. When you look at fusion companies getting started, you realize just how much knowledge we take for granted in Silicon Valley. On the other hand, as far as I can tell, the way [00:03:00] that every VC fund gets started, and the way that everyone becomes a VC, is unique. There's really one story for how you start a company, and there are n stories for how funds get started. In my case, I wasn't sure that I wanted to start a fund - more precisely, it hadn't even occurred to me that I would start a fund. I was a software engineer looking for what I could do about climate change, just assuming that I was looking for a technical way to be involved. I was worried because my only technical skill is software engineering, but I figured, hey, with software you can do many things - there must be a way that a software engineer can help.
So I made my way to the ARPA-E Summit in DC at the beginning of 2016 and went around and talked to a whole lot of people at their different booths about what they were doing. My question for myself was: does what you're doing matter? My question for them was: how might a software engineer help? [00:04:00] And to a first approximation, even at a wonderful conference like the ARPA-E Summit, I think you'd have to say mostly these things are not moving the needle - mostly, in my terminology, they don't matter - and it really wasn't clear how a software engineer could help. And then, because I was curious, because I'd read many things about companies claiming that they were working on fusion and that they were close, I made an effort to hit every fusion booth I could find. At one of those booths I said, I'm a software engineer, what can I do? And they said, well, the next time this guy comes to San Francisco, you should organize an audience and he'll give a talk, and won't that be fun? That guy is now one of my science advisors, but that was the first part of my relationship there. So he came, I organized the talk, we had dinner beforehand, and I'm like, how close is fusion? And he says, well, it could be 10 years away, but it's actually [00:05:00] an infinite time away - the problem is we're not funded. So then you say, well, how much money do you need? And it turns out to be a few million dollars. You say, that's really, really dumb. Here I am in Silicon Valley; the company I work for is Asana, making collaboration software for task management, and it just raised 50 million dollars, and here these people are credibly trying to save the world and they're short two million dollars. Maybe I can find some rich people who can put some money in. The answer was yes, I could find a rich person who was willing to put some money in - and rich people, by and large, unless they're really excited about the company, do not want to put money in directly. They don't want that kind of relationship.
So you work through all the mechanics here, and you learn that you can convince people to put money in, but you need to [00:06:00] grease the wheels by making a normal VC structure - in this case with a single investor. And before you know it, you wind up as the managing partner of a one-person VC fund. Then you say, well, I've had a surprising amount of impact doing this. What should I do? Do I keep looking for that technical way to be involved? My conclusion was that there's really no contest here. I could go back to my quest of how, as a software engineer, I can help with climate change - or, look, I've already put four million dollars into fusion, four million dollars of other people's money. Companies have four million dollars that they wouldn't have had without me, and several of them are doing way better, making way more progress, than they would have without me. And now I have all these contacts in the fusion industry, I can build a team of advisers, I'm in all of these internal discussions about [00:07:00] what's coming next in federal funding programs, and I'm invited to conferences and that kind of thing. It was so obvious that the way to keep making an impact on climate change was to keep doing what I was doing. So that ends with me now taking the steps toward being what I call a real VC: someone who goes out and really raises the next fund in a much more normal way, with multiple LPs and a much more significant amount of money.

Ben: Got it.

Malcolm: Right now I'm the baby VC.

Ben: So you invest in babies?

Malcolm: No. No, I'm the baby.

Ben: That raises a whole bunch of questions.

Why did you structure the venture as a vc firm?

Ben: So one is: why did you decide to structure it as a VC fund instead of, say, a philanthropic organization, if you just wanted to redirect money?

Malcolm: The short answer is [00:08:00] because I can get my hands on way more money
if this is a for-profit enterprise. So, my LP was very generous and trusting, and also very open-minded, and part of the four million dollars that I mentioned before actually was a donation: it was a gift to the University of Washington to support fusion research there, because that particular project that we wanted to support was still an academic project. The others were for-profit companies, and there's just no good case to say to someone who has money: you should give money to support these for-profit things in a way that gets you no profit if they actually work. You can tap a lot more money if you offer people a profit motive, and I think you create a stronger chain of [00:09:00] incentives: they are encouraged to give more money, and I am more encouraged to look after that money - I have a share of the profit of my fund, if it ever makes a profit. And finally, you get a more traditional control structure. I don't yet have an actual equity stake in these companies, because we did a convertible note or a Y Combinator SAFE, but I sit on the board of the companies, and they all know that my investment will turn into voting equity in the future. It's just a much cleaner setup. So I think there were no downsides to doing it this way, and a lot of upsides. The bigger question, which I [00:10:00] contemplated at the beginning of all of this, was: even for for-profit money, is a fund the right vehicle, or are there other options that I should pursue? That's something that I spent a lot of time looking into after creating the first fund. What other options are there? (Alternate structures) So one approach is you say, well, there are four - or however many - companies here. I like what they're doing, but they're really, annoyingly small. By annoying, I mean they are inefficient in terms of how they spend their money, and they're potentially leaving innovation on the table.
So the companies that I've invested in are all about four people, maybe six, but that kind of size, and they have one or two main science people in each company. Those people interact with other scientists a few times a year at conferences, and those scientists at the conferences of course don't completely trust or love each other - they are all competitors working at different companies, [00:11:00] each convinced that they're going to crush the other guys. That's the extent of their scientific collaboration, unless they have a couple of academics at universities that they're close to. And when I think about my background in software, I never worked in a team that small; I had many more people that I could turn to for help whenever I was stuck. So one thing we looked at seriously was starting a company that would raise a bunch of money and buy these four or so companies. We would merge them all into one - this is called a roll-up - and we'd move everyone to one place. They would certainly have a much larger pool of collaborators. They would also have the union of all of their equipment. So now, when someone had a new idea for an approach to fusion that they wanted to test, instead of needing to contemplate leaving [00:12:00] their job, starting a new company, raising money, buying or scrounging a whole lot of equipment, and then years later doing the experiment, they could practically go in on the weekend and do the experiment after validating their ideas with their co-workers. I think there's a lot to recommend this, and it was seductive enough that I went a long way down this path. In the end the complexities killed it, and made it seem like something that wasn't actually a good idea when you netted everything out.

Complexities of Roll-Ups

Ben: Can you go into a little more detail about that? Which complexities, and how did you decide it was not a good idea?

Malcolm: Right.
So it's much harder to raise money for, because you're doing something much less traditional. Actually, I guess that's not necessarily harder in some ways: if you come to the market with a radically new idea, you're so novel that you [00:13:00] break through everyone's filters, and maybe you have an easier time raising money. I've seen it go both ways. But my existing investor was not enthused about this, so I would certainly have had to work past some skepticism there. On top of that, you have to convince all of these companies to sell to you, and that looks really hard. The CEO of one of the companies told me, look, I'm a lone cowboy - I think that's what he said - and made it very clear that he was used to executing independently and didn't want to be part of a larger company. Potentially I could have bought his independence by offering him enough money that he couldn't refuse, but that's not really the way you want to build your team. Other companies were enthusiastic, but getting [00:14:00] the majority of these benefits would have required people moving. These companies have connections to universities, and of course the people have families - they have whole lives. It wasn't clear that people wanted to move. It really looked as if everyone was really excited about a roll-up that happened where they lived. On top of that, these people are cordial to each other at conferences, and at least think they want to collaborate more, but they're also pretty fierce competitors. So you also had to believe that when these people were all brought into one company, they would actually collaborate rather than get into status contests and fights and that kind of thing - not to mention all the more subtle ways in which they might fail to collaborate. A really big wake-up call for me was when the [00:15:00] two technical co-founders of one of my companies started fighting. These people had known each other for decades. They were best men at each other's weddings.
They had chosen to found the company together. No asshole VC had bought two companies, glued them together, and forced them to work together - this was their choice. And it got to the point where they still could not work together. I went down, I spent two days at the company watching the team dynamic, interviewing each person at the company one-on-one, and made the recommendation that the company fire one of the founders. So you look at that, and then you're like: well, these people say they're happy to cooperate with everyone at these other companies - do I really believe that? So, a huge caution, I think. [00:16:00] Other people also cautioned me that the competitive drive would be reduced. I had one guy who went through YC - not doing fusion, just a regular software startup - say: look, when we were doing YC we were in the same year as Dropbox, and it was clear that Dropbox was crushing it. If we had known that actually we were part of some big roll-up, and we were going to share in Dropbox's success, we would not have worked as hard on our little company as we did wanting to match their success. (Holding companies and how they worked) So eventually I looked at the third model - the first model being the VC fund and the second model being the roll-up. The third model was a holding company, and this was meant to be a middle ground, where we would have a company that would invest in the various fusion companies that we wanted to support, but they would not be combined. I [00:17:00] guess I neglected to mention several of the other advantages that we would have gotten with the roll-up: in addition to a unified team of scientists, we would have had the pool of hardware that I did mention, and we would also have been able to have other infrastructure teams.
For example, we could have had a software team that worked on modeling or simulation software that all of the different fusion teams could use. So the idea with the holding company was that we would still be able to centralize things that made sense to centralize - things where you could benefit by sharing - but we would have these companies remaining as separate companies. They could raise money from other people if they wanted to, or if we couldn't provide the money when they needed it. They wouldn't have to move; they would be independent companies. But the first thing that we would do is say: a condition of taking money from us is [00:18:00] that you will give all of your experimental data, and enough of the conditions of your experiments, to us, so that we can run our own simulations using our own software and match them against your experimental results. We would of course encourage them to use our modeling software as well, but that's harder to force. So the idea was: software is something that really can be shared. We would encourage them to share it, and by having access to their detailed data we would be able to validate what they were doing and be much more informed investors than others. We could tell who was really succeeding and who might be struggling or failing, so we could make better investment decisions than other investors, which would help us. It would also help the companies, [00:19:00] because our decision to invest or to continue to invest would be a more credible signal of success, of value creation, and they could use that to shop around to raise money from other people. So the benefits there would be still internalizing some of the externalities while letting people keep their independence - allowing resource sharing and a better signal
for further fundraising. So: much more flexible sharing - sharing where it made sense and not where it didn't. And then, optionally, later on we might have said, well, it turns out that a number of our companies need the same physical equipment - maybe pulsed power equipment, which is a large part of the expense for these companies. So we could have bought that, set it up somewhere, and then said: you're welcome to come and do experiments at our facility. And you could imagine that over time they would decide that the [00:20:00] facility was valuable enough that someone from the company moved there, and then maybe they do all their new hiring there, and the companies gradually co-locate - but in a much more gradual, much smoother way than in the roll-up, where we envisioned saying a condition of this purchase is: you move. Having just talked up the holding company so much - obviously I decided I didn't like that either, because that's not what I'm doing. One of the death blows for the holding company was doing a science review of the four companies that I'd invested in so far, plus several other approaches. By this point I'd built a team of four science advisors. We put all of these seven or so approaches past the advisors for basic feedback: is this thing actually a terrible [00:21:00] idea and we haven't realized it yet, what are the challenges, or is this an amazing thing that we should be backing? And the feedback that we got was that one of them was promising and should definitely be backed. For a bunch of them, the feedback was wait and see. Another one was in an even more precarious position because of execution problems. Two more that received favorable feedback did not, and still do not, have companies associated with them, but the feedback was positive enough that we pay people to work on them inside basically shell companies, so that we own the IP if something comes of it.

Ben: Sorry, just to interrupt - right now they're in universities?
Malcolm: They're dormant. They're dormant. Okay, a common theme in fusion is that someone does some [00:22:00] work, gets some promising results, and then for one reason or another fails to get funding to continue it. Sometimes the story is that the Republicans got into power and cut the funding, or they got less funding than they wanted, so they bought worse equipment than they wanted, and therefore they weren't able to achieve the conditions that they wanted, but they still did the experiment, and because of the bad conditions they got bad results, so they definitely didn't get any more money. For a whole host of reasons, the promising work doesn't continue. Yeah, so in both of those cases there are promising results and no one is working on them. Got it. Yeah, another sad fusion story. So a bunch of things came out of that science review, but what did not come out of it was: yes, here we have a pool of four companies that are all [00:23:00] strong and deserving and have enough overlap that some sort of sharing model makes sense. On top of that, it was becoming clear that even a holding company was a sufficiently novel pitch as to make my life even more difficult for fundraising. Yeah, so it just didn't look like something that was worth taking that fundraising hit for, given that the benefits were seeming to be more theoretical, or in the future, rather than in the present.

Alternate Structures

Ben: So with a VC fund, to my understanding, you are sitting on already-given capital and your job is to deploy it, I'm going to use air quotes, as [00:24:00] quickly as possible within a certain limit of responsibility. Would you ever consider something different? There are these private equity firms that will have a thesis and look for companies that meet a certain set of conditions, and only then will they basically exercise a call option on promised money and invest it. It seems like that's another structure you could have gone with.
Did you consider anything like that at all?

Malcolm: Right. May I edit your description of a VC fund? And then, yes. Edit one is: you're not sitting on a pool of money that is in your bank account. Some of the money is in your bank account, but there's a distinction between the money that is committed and the money that is raised. [00:25:00] So you might say: I want to have a VC fund that has 40 million dollars over its lifespan. If you wait until you have raised all 40 million, then the deals that you'd identified at the beginning, which you were using to support the raising of your fund, will likely be gone. It can take a long time to raise even a moderately sized fund. Yeah, unless you're one of those individuals leading very charmed lives who raise their entire fund in weeks, but for the rest of us the fundraising process can be six to twelve months, that kind of thing. So you have a first close, where you have enough money committed to justify saying this fund is definitely happening, [00:26:00] we're going to do this. Even then, maybe your first close is 15 million dollars. You don't need all 15 million dollars to start making your investments right now. So over the life of the fund you do capital calls when your account is too low to keep doing what you're trying to do. Right? The LPs get penalized heavily if they fail to produce the money that they have committed within a certain amount of time after you call it. Got it. You could in principle call all the money at the beginning, but you damage your fund's metrics if you do that. Got it. Funds are graded through their internal rate of return (Internal Rate of Return, IRR), and I don't remember exactly how this is calculated, but part of it is how long you actually have the money. So if you get the money closer to when you're going to spend it or invest it, you look better. Got it. So that's the first edit.
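(A quick illustration of the IRR point, not from the episode: IRR is the discount rate at which a fund's cash flows net to zero, so the same multiple earned over a shorter holding period scores higher. The dollar amounts and years below are made-up numbers for the sketch.)

```python
def npv(rate, cash_flows):
    # cash_flows: list of (year, amount); negative = capital call, positive = distribution.
    return sum(amt / (1 + rate) ** yr for yr, amt in cash_flows)

def irr(cash_flows, lo=-0.99, hi=10.0):
    # Bisection on NPV: keep lo where NPV > 0 and hi where NPV <= 0,
    # closing in on the rate where the cash flows net to zero.
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Same $10M in, same $30M back in year 5 -- only the call timing differs.
early_call = irr([(0, -10.0), (5, 30.0)])  # capital called up front, idle for 5 years
late_call  = irr([(2, -10.0), (5, 30.0)])  # capital called 2 years later

# early_call ~ 24.6%, late_call ~ 44.2%: calling late flatters the fund's IRR.
```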
The [00:27:00] second edit is: I wouldn't say my job is to deploy the money as quickly as possible. Mmm. My job is to deploy the money for the best results possible. I measure results in terms of some combination of profit to my investors and impact on the world, because I think fusion is well aligned to do both. I think those two are pretty consistent. So I'm not trying to spend my money as quickly as I can. I'm trying to support as large a portfolio of companies as I can, for as long as I can. A large portfolio of companies because I want to mitigate the risk. I want to include as many companies in the portfolio as I can, so that promising ideas do not go unsupported, that's the impact, and also so that the company that succeeds, if one ultimately does, is in my fund, so [00:28:00] that my investors get a return. Got it. And then I want to support them for as long as I can, because the longer I'm supporting them, the larger the return my investors get, rather than that later value creation accruing to later investors. Got it. Also, the longer I can support them, the greater the chance that the company survives long enough, and makes enough progress, that it can then raise from other investors, investors who probably will know less about fusion and be less friendly to fusion.

Why not start the Bell Labs of Fusion

Ben: Okay, there's a bunch of bookmarks I want to put there. The first thing is one more question about possible structures. A problem that you brought up consistently is the efficiency gains from having people all in the same place, all sharing equipment, all sharing code, all sharing knowledge, that do not happen [00:29:00] when you have a bunch of separate companies. Why are you so focused on starting with companies, or groups of people who have already formed companies, as the basic building blocks? For example, you could imagine a world where you create the Bell Labs of fusion, where you literally just start from scratch,
hire people, and put them all in the same place with a bunch of equipment, all working together, without having to pull in people who have already demonstrated their willingness to go out on their own and start companies.

Malcolm: Yeah, great question. The Bell Labs of fusion is an analogy that gets thrown around a fair amount, including to describe what I was trying to do, although I think it only fits slightly. I think there are two answers to that question. One is that [00:30:00] by the point that I was really considering this, I had already invested in four companies, so partly the answer is path dependence. Got it. And partly the answer is that by the time I was clearly seeing the problems with the roll-up especially, but also the holding company, it didn't seem as if just starting a company from scratch was really going to change that. Some people make the argument that actually the best plasma physicists aren't in companies at the moment; they are in academia or national labs, because the best ones don't want to risk their reputation and a great job on a two-bit company that's going to have trouble fundraising. And therefore, if you could come along and create a credible [00:31:00] proposition, a legitimate company that will do well at fundraising, and prove that it will do well at fundraising by endowing it with a lot of money at the beginning, you might then hire those people. Right? I know some people who are convinced that this is possible. You still have to deal with the asshole complex that is common within fusion. These people have had their entire careers, which are long because they are all old, or they're in PhD programs, basically to become quite sure of the approach that they want to take to fusion, right? So it was difficult to find a team of four experienced, knowledgeable, and open-minded advisors for my science board, and not all of those people are able to be hired for any price.
I think if you want to actually stock a [00:32:00] company with these people, you need more people, and they all need to be able to be hired, and you still need to convince them to move, and you still need to convince them to work on each other's projects. So I think it's an interesting idea, but I have real concerns about the lack of cooperation that you would get in all the areas that I just mentioned. And on top of that, when I looked into the situation around the software sharing and the hardware sharing more closely, I became less convinced that it's actually available. What's that? On the software side, many people don't even believe that it's possible, in a reasonable time frame, to create simulation software that [00:33:00] can sufficiently accurately simulate the conditions used by a whole range of different approaches to fusion. At the moment we have many different pieces of software, or "codes" as the physics community calls them, that are each validated and optimized for different conditions: different temperatures, different densities, different physical geometries of the plasma, that kind of thing. There are some people who believe that we can make software that spans a sufficiently large range of these parameters as to be useful for a family of fusion approaches. There are even people who claim to be working on it right now. Yeah, and when you dig more deeply you discover, yeah, they're working on it, but they haven't accomplished as much of that unified solution [00:34:00] as they say they have. So you talk to other people who use these codes and they're like: yes, I think those people might really be the people who can do this, but they're not there yet.
So the notion of spinning up a team of software engineers and plasma physicists and numerical experts and so forth to try to do that came to seem like a bigger lift, with a much more dubious payback in the relevant time frame, than I had initially thought. Similarly on the hardware side: it is really costly, in many ways, to reconfigure physical equipment for one experiment and then reconfigure it for another experiment. It's really bad when you have to move things between locations as well, or move a team to a site and configure everything there and then do your experiments for a month, but [00:35:00] it's still bad even when all the people and all the equipment are in one place. You get the most consistent results if you can leave everything set up, and you want to be able to keep going on Saturday, or keep going on Monday, because you weren't quite done with those experiments. So to what degree can you really share these results? Sorry, not these results: to what degree can you really share this equipment? Yeah, definitely to some degree, but to a large enough degree to justify spinning up a whole company? I'm not convinced. Got it. On top of that, if I were to start a company doing this, I would need to find a CEO and build up a whole team that I don't have to build when I'm investing in other companies, right? Should I be that CEO? Many people assumed that I should, or that I wanted to, or something like that. I think it's a really hard sell to [00:36:00] investors that I'm the best person to run this company; on the other hand, it wasn't actually clear who should do it.

Incentives: How do you measure impact and incentivize yourself?

Ben: Yeah, that makes a lot of sense. I want to [00:37:00] go back to the incentives you were talking about previously, that your incentives are both to have impact and to make money for your shareholders. Yeah, I want to ask first: how do you measure impact for yourself, in terms of your incentives?
You mentioned something along the lines of companies existing that would not otherwise exist. It's pretty easy to know: okay, I've made this much money. It's a little harder to say: okay, I've had this much impact. So how do you personally measure that?

Malcolm: Yeah, the clearest example of impact so far is another project called FuZE. Annoyingly, it's the same name, spelt differently. This is the Fusion Z-pinch Experiment, FuZE, [00:38:00] at the University of Washington, and it's the group that we donated to. (Fusion Z-Pinch Experiment: https://www.aa.washington.edu/research/ZaP) All four of the companies that I have given money to so far are supported by ARPA-E's ALPHA program, its fusion program (ARPA-E ALPHA Program: https://arpa-e.energy.gov/?q=arpa-e-programs/alpha), and all four of them got less money than ARPA-E would have liked to have given them. At the time that I became involved with the FuZE project, they were behind schedule on their ARPA-E milestones, and we made them a donation that enabled them to hire an extra two people for the rest of the life of the project. That enabled them to catch up with their milestones and become the most successful of the fusion projects that ARPA-E has. [00:39:00] When I say most successful, what I mean is: they are hitting their milestones and they are getting very clean results. They have a simulation that says that as they put more and more current through their plasma, they will get higher temperatures and higher densities, basically better and better fusion conditions; that at a certain point they will be making as much energy as they're putting in; and that at a point beyond that they will actually be getting what we call reactor-relevant gain, a large enough increase in energy through their fusion that you could run a reactor off it. The way we plot their progress is:
We look at the increase in current that they're putting through their plasma and check that they are getting results that match their theoretical predictions. For them it's especially clean because they have this theoretical curve, [00:40:00] and their experimental results keep falling very close to that curve. So it's a really nice story, because the connection between the money that they got from Strong Atomics, the people that they hired, the progress they were able to make with those additional people, and the scientific validity of what they were doing is clear at every step. Yeah. So that's one way that I can see the impact of what I'm doing. Another way, a more indirect one, is that by being involved in the field and trying to make sure that it all makes sense to me, I wind up having insights, or coming to understandings, that turn out to be helpful to everyone. So I spent a long time [00:41:00] wondering about the economics of fusion. Companies are understandably mainly focused on getting fusion to work, and they don't spend that much time thinking about the competitive energy market that they're likely to be selling into 15 or 20 years from now and what that means for their product. I spend time thinking about that because I want to convince myself that the space matters enough to justify my time. So I went through that thought process and came to the conclusion that the ways the companies were calculating their cost of energy were wrong. They were assuming that the reactors would be operating more or less continuously and that they would be able to sell all of the electricity that they made, whereas the reality is likely to be that for five hours or so every day, no one [00:42:00] will buy their electricity, because wind and solar are producing cheaper electricity. Right.
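(A toy levelized-cost calculation to make the "five unsold hours" economics concrete. All the dollar figures and the 10% fixed charge rate are illustrative assumptions of mine, not numbers from the episode.)

```python
HOURS = 8760          # hours per year
FCR = 0.10            # fixed charge rate: annualized fraction of capex
SELL_HOURS = 19       # hours/day the grid will buy; ~5 h/day lost to cheap wind/solar

peak_kw = 1000.0                                  # output needed while selling
annual_kwh = peak_kw * (SELL_HOURS / 24) * HOURS  # energy sold in either design

# (a) Demand-following: reactor sized to peak, ramped off 5 h/day.
#     The expensive reactor sits idle, but you still pay for all of it.
capex_a = peak_kw * 6000                          # assumed $6,000/kW reactor
lcoe_a = capex_a * FCR / annual_kwh               # $/kWh

# (b) Continuous: a smaller reactor at the *average* power runs 24/7 into a
#     molten-salt buffer, and a cheap peak-sized turbine does the ramping.
capex_b = peak_kw * (SELL_HOURS / 24) * 6000 + peak_kw * 500  # + $500/kW salt+turbine
lcoe_b = capex_b * FCR / annual_kwh               # $/kWh

# With these assumptions lcoe_b < lcoe_a: shrinking the high-capital-cost
# reactor beats ramping it, even after paying for storage and a bigger turbine.
```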
So, well, scratch that: the companies often conclude that they need to be demand-following, that they need to make their reactors able to ramp up and down according to what the demand is. Right. That has other problems, because the reactors are so expensive to build and so cheap to run that ramping your reactor down to follow the demand doesn't actually save you any money, and so it doesn't make the electricity any cheaper. So I worked through all of this and came to the conclusion, which I think most people in the fusion space now agree with, that you actually need to have integrated thermal storage. Your reactor is typically producing hot molten salts anyway, right, and rather than turning that directly into [00:43:00] electricity, you should store the vats of hot molten salt, run the reactor continuously, and ramp up and down the turbine that is used to go from hot molten salt to electricity. Interesting. Turbines are cheap; they have low fixed costs, so you can much more affordably ramp them up and down. Plus, if you were going to be demand-following, you were already going to be ramping your turbine up and down. All I'm saying is: keep the turbine demand-following, make the reactor smaller so that it can run continuously, which is the most efficient way to use a high-capital-cost good, and then have a buffer of molten salt. So that's the kind of insight that I've come to by working through the economics, and overall the investment case, for fusion, [00:44:00] that I hope will help all the companies, not just the ones I'm investing in.

Incentives for LPs

Ben: So those are your incentives, that combination of impact and profit, and you also have LPs because of the VC fund structure. What are their incentives, in terms of what they want to see out of this
and out of your firm?

Malcolm: My current LP is anonymous, and so there's a limit to what I can say about their incentives, sure, but they care about climate change. They basically buy into my argument that climate change is real and worth mitigating, and fusion is a promising and underinvested potential mitigation.

Does the profit motive increase impact?

Ben: And to go a [00:45:00] little farther into that, this is just a comment about impact investing as a whole, so the question is: could they get a better return by putting that money elsewhere? This is definitely putting you on the spot, but I think you could probably make an argument that they might get a better return just putting the money in some other investment vehicle. And so they probably want to see that same impact that you want to see. And I guess the thing that I'm interested in is: does having the profit motive actually increase impact, and if so, how?

Malcolm: Regarding the potential for profit: when [00:46:00] I started doing this, I thought it was really a charity play, or I guess more politely, purely an impact play, but set up in a for-profit structure so that if it happened to make a profit, then the people who had enabled it to happen would be able to share in that profit. Right. As I have looked at the space more closely and refined my argument, or arguments, in this area, I have come to believe that there's a meaningful potential for profit here. This all hinges on what you think the chance of fusion working is. It's very clear that if fusion works in a way that is economically competitive, the company that gets there will be immensely valuable, assuming that [00:47:00] it manages to retain its IP and that kind of thing. So I've taken stabs at figuring out the valuation of one of these companies. The error bars are huge: I got numbers around 25 billion for my low-end valuation and closer to a trillion for the high end.
It's really hard to say, but the numbers on the profit-if-it-works side are big enough that it really boils down to: do I think it's going to work? That is a hard thing to put numbers on, but by investing in a portfolio of them you increase your chances.

Risk, Timescales, and Returns vs. Normal Firms

Ben: Something I know from other VC firms is that they have to limit their risk. They don't make as risky investments, because of their LPs, because they feel they have this financial duty [00:48:00] to return some amount to their LPs in a certain amount of time, right? Do you worry about those same pressures? Have you figured out ways around them, ways to extend those time scales?

Malcolm: I don't think I'm going to be subject to the same pressures, because anyone who gives me money is going to be expecting something very different. Yeah, so instead of being subject to those pressures, I think that the same psychology manifests for me as limiting my pool of investors. So it's a real problem; it just plays out differently. Got it. A few things are different for me because I'm pitching the fund differently. A normal fund cannot look at a space and say: it's really important that something works in this space, but [00:49:00] it's not clear which company might succeed, because there's real science risk. Normally, the investors of Silicon Valley can decide that this company, or these two companies, should be the winner, and they can all agree that they're going to put all their money in there, and they can anoint a winner. It will win because it's getting all the money, short of some major scandal. That does not work when investing in companies with a heavy science risk. That's why I think you need to invest in a portfolio of companies, and a normal fund has trouble doing that, because they are obliged by the investment thesis that their investors have signed off on to spread their money out across different sectors. Okay.
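(The portfolio logic here is just compounding independent chances; a minimal sketch. The 10% per-company odds are an assumed number, and real fusion risks are partly correlated across approaches, which this ignores.)

```python
def p_any_success(p_single, n):
    # Probability that at least one of n independent companies succeeds:
    # one minus the chance that every single one fails.
    return 1 - (1 - p_single) ** n

# With an assumed 10% chance of success per company:
single = p_any_success(0.10, 1)     # 0.10
portfolio = p_any_success(0.10, 5)  # ~0.41

# Five shots at a 10% bet roughly quadruple the odds that *somebody* in the
# fund gets fusion to work -- which is the whole point of the portfolio.
```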
So again, that doesn't make life magically easy [00:50:00] for me. It means I need to find investors who are on board for doing something different, specifically investors who are wealthy enough that they are diversifying their investments by investing in other funds or vehicles besides mine, and are not expecting diversity from me. But having found those investors, I will then be in a much better position, because I can concentrate on one sector and really solve it, or at least strongly support it.

Is money the limiting reagent on fusion?

Ben: Got it, that makes a lot of sense. I want to shift and talk about fusion itself a little bit more. Okay. So I'm sure you've seen the "fusion never" plot. I'll put a link up in the show notes. (Fusion Never Plot: http://benjaminreinhardt.com/fusion_never/) The question is: this plot makes it look like if you pour more money in, it will go faster. Do you think that's actually the case, or is [00:51:00] there something else limiting the rate at which we achieve fusion?

Malcolm: If you have to leave the existing spending as it is, then adding money is a way to make it go faster. But a cheaper alternative is to spend the existing money more wisely. The world's fusion spending, and America's fusion spending, to a first approximation, all goes into ITER, the International Thermonuclear Experimental Reactor, this international collaboration in France. (ITER - International Thermonuclear Experimental Reactor: https://www.iter.org/proj/inafewlines) This thing came about because the next step for a fusion experiment in America and Russia was too expensive for either country to pursue independently, even though everyone's first inclination was surely to keep competing. So it became a collaborative [00:52:00] endeavor, and it's now a collaboration between many countries. The thing is expected to suck up 20 billion or more, and has a depressing schedule that ends with fusion energy on the grid in 2100.
Okay. America puts on the order of a hundred and fifty million a year into ITER directly, and say 500 million a year into what are called ITER-relevant projects: domestic projects where you're trying to learn about some problem that's relevant to ITER, but in a way that is smaller, cheaper, and better controlled than a 20-billion-dollar massive building where everything is inevitably really complicated. The other place America spends money is on [00:53:00] NIF, the National Ignition Facility (NIF - National Ignition Facility: https://lasers.llnl.gov/), which is really a weapons research facility that is occasionally disguised as an energy research facility. Another way that it's been described to me is as the perfect energy research facility. What these cynical people meant was: it's too small to actually get to ignition, or energy break-even, but it's big enough that the people working on it can tell themselves that it might get there, if only they work harder, if only they dedicate the rest of their career to it. So it has large numbers of people, who really care about fusion energy much more than bombs, working on it, because it's the best way that they can see, meaning the best-funded way that they can see, to get there. But they don't actually seem to believe that it's [00:54:00] going to get there; they just don't have any choice. So we spend a lot of money on these two programs, and that funding would be more than adequate for getting to fusion if we spent it on anything more modern. It is not controversial to say that these two facilities are the best fusion approaches and experimental setups that we could come up with in the mid-80s and mid-90s, when they were being designed. That's a fact: that's when they were being designed, and they've had limited upgrades since. But yeah, that's the overall story. What is controversial is whether continuing to support them is the best move.
There are people who believe that we need to keep putting money in there [00:55:00] because we're going to learn a lot if we keep doing that science, or because, if we don't put the money in there, then the money will get pulled and probably spent on bombs or something like that, but it won't come back to fusion. And so: better bad money in fusion than worse money somewhere else, right. My personal view is that ITER is such a ridiculous energy project that it harms the entire fusion field by forcing people to pay lip service to the validity of its goals, and that we would be better off admitting that that thing is a travesty and that there are better ways to do fusion, even if it meant losing the money. Now, I'm not certain about that, but that's the gamble I would take. The good news is we probably won't have to take that gamble. It [00:56:00] looks as if the federal government is becoming much more open to a yes-and approach to funding: the mainstream approaches to fusion, NIF and ITER, and a variety of projects for alternative approaches, and more basic research into things like tritium handling, tritium breeding, and hardening materials to deal with high-energy neutrons, lasting longer in the face of high-energy neutrons, that kind of thing. So I think there's real momentum towards building an inclusive program that can support everyone, and that is of course the best outcome. Much as I would take the gamble of killing ITER and NIF, they do produce real scientific results, and if we can have all these things, that's a wonderful outcome.

Government Decision Making and Incentives

Ben: On that note, who ultimately [00:57:00] is the decision maker behind where government fusion money is spent, and what are their incentives?

Malcolm: This is America. Is there ever one person who's the decision maker about something?

Ben: Maybe not one person, but is it Congress? Is it unelected officials in some department? Is it the executive branch? Do you have a sense?
Is it some combination of all of them?

Malcolm: The money flows through the Department of Energy. A sub-department of the DOE is ARPA-E, which has its 30-million-dollar fusion program and will hopefully have a new and larger fusion program in the near future. (ARPA-E: https://en.wikipedia.org/wiki/ARPA-E) There's also the Office of Fusion Energy Sciences, OFES, that funds a lot [00:58:00] of the mainstream research into fusion. (Office of Fusion Energy Science: https://science.energy.gov/fes/) ARPA-E was, to my knowledge, created by Congress and is fairly independent of the DOE, but there's still feuding. I think, without ascribing malice to anyone, it is a great testament to many people's conviction and political skills that they were able to get America to fund NIF and ITER more or less consistently, over decades, at a high cost, and those people are highly invested in those projects continuing. I don't know whether that's because they genuinely believe that that's the best way to spend the money, or they fear that the money would disappear from fusion completely [00:59:00] if it stopped, or they don't think that the alternatives actually have any scientific credibility, or they are by now trapped by the arguments that they've been making strongly and successfully for decades. But for one reason or another, or many reasons, they strongly believe that we need to continue to fund these. So there is a tension between the people who want to fund the alternatives and the people who want to fund mainstream fusion.

Who are the government decision makers?

Ben: And who are these people? Do you have any sense of who they actually are? I'm not asking you to name names, but what is their role, what is their nominal job title?

Malcolm: I think it's a bunch of civil servants within the DOE. Congress has a role: Congress gets to decide how much money to provide, and that's often attached to a
statement about how that money will be spent, right. There have been [01:00:00] Congressional hearings on fusion that covered ITER and whether we should continue to fund it. It's a lot of different people.

What are the roles of Academia, Government, Industry, and Philanthropy?

Ben: Okay, yeah. I'm just really interested in drilling down into where the incentive structure is set up. Along those lines, in your mind, in sort of an ideal world, what are the ideal roles of the four columns of academia, government, private investment, and philanthropy in making an epic-level project like fusion happen?

Malcolm: There's a ton of room for government support on this. The federal government has national labs that have the best computers, the best software, which is often classified, the best testing sites, in many [01:01:00] ways the only testing sites, and lots and lots of experts. The one thing that the federal government lacks is a drive to put fusion energy on the grid as quickly and as commercially successfully as possible. I don't rule out that the federal government could develop that drive, but it seems like a long shot, given that there's a lot of disagreement about climate change and energy policy and that kind of thing. So I think that the ideal would be that the federal government supports fusion research with all of its resources: finances, expertise, modeling software, [01:02:00] modeling hardware, and testing facilities, in partnership with private industry, so that private industry is providing the drive to get things done. So I imagine a lot of research done at the federal government, so that if the current crop of companies bottoms out, if it turns out that their techniques don't work,
we have more fusion research coming down the pipeline to support a later crop of companies. But we would have companies working closely with the federal government to try to build reactors, getting assistance in all those ways from the federal government, and providing the drive. The companies would have this goal of fusion energy on the grid that they would be working towards, but they would get to use the federal government's resources for the areas that they're focusing on. Then there are also the areas that the companies are not focusing on, areas that are largely common [01:03:00] to all companies and that therefore no company views as on their critical path to demonstrating reactor-relevant gain. For example: tritium is toxic to humans and difficult to contain. It turns out even hydrogen is difficult to contain; it leaks through metal surfaces. But we don't talk about this because hydrogen is astonishingly boring and safe in small quantities, so we don't care that it leaks out of our containers. We do care when tritium leaks out of containers, because it's heavily regulated and toxic, right? So any fusion company that's handling tritium is going to need a way to contain tritium with very low leak rates. Also, the world does not have very much tritium. You can't actually just source tritium as a [01:04:00] fuel for fusion; you have to breed tritium in your reactor, from lithium. So the real inputs to the reactor, if it is a deuterium-tritium reactor, will be deuterium and lithium, and you'll be breeding tritium from lithium in your reactor. So we also need to study how we're going to breed the tritium, right? We're making mathematical calculations about the tritium breeding rate, how much tritium we will get out after doing fusion relative to the amount of tritium we had before doing fusion, and these tritium breeding rates are close to 1. If they're below 1, or really not enough above 1, we're screwed, right? So there's important work for academics
And for the federal government: to better understand tritium breeding rates [01:05:00], what we can do to increase them, and how to make this work.   Companies aren't incentivized to look at things off their critical path   Ben: Right, and the companies are not incentivized to look into that right now, because they don't feel like it's on their critical path. Malcolm: Investors, probably including myself, have made it clear to these companies that what they will reward the companies for is progress on the riskiest parts, right? This is valid: you want to work on burning down the biggest risks that you have, right? And everyone perceives that the biggest risk is getting to fusion conditions. Sorry, many of these companies are already getting to fusion conditions, but everyone perceives that the biggest risk is getting fusion to work, getting reactor-relevant gain from fusion, right? So compared to that, these risks are small, and it's [01:06:00] valid to defer them, if you're a single company. If you're looking from the perspective of a portfolio, which the federal government is best positioned to do, and which a fund is going to be somewhat well positioned to do, then your risks are different. You're willing to say: I have a portfolio of these companies, I don't care which one succeeds, I'm doing what I'm doing assuming one of them will succeed. Now, what can I do to de-risk my entire portfolio, right? You look at it differently, and then these problems start to seem critical. So with my second fund, one of the things I want to be able to do is support academics, or maybe for-profit companies, that are working on this. But the federal government is an even better fit for this; it is perfectly positioned to do it.   What does the ideal trajectory for fusion look like?   Ben: I think a good closing question is: in your ideal world, how would innovation in fusion come into being? What would the path look [01:07:00] like? Imagine, Malcolm:
you're king of the universe and we started with the world we have today. What would happen? Malcolm: I think the federal government would do the heavy lifting, but it would rely on private companies to really provide the drive. The federal government would also support the longer-term things that are critical but not the highest risks, such as the tritium issues. Ben: Perfect.   Why do so few people invest in fusion?   Ben: Is there anything that I didn't ask that I should have asked about? Malcolm: One question that comes up quite often is why so few people invest in fusion, and why it is that I'm the only one with a portfolio.   Ben: Yeah. If there's a possible payoff of a trillion dollars, then even if it takes 20 years, the IRRs are still pretty good. Malcolm: Yeah. The way I think of it [01:08:00] is that there's a funnel. It's like a company's funnel for acquiring customers, but in this case it's an industry's funnel for acquiring investors, and investors are falling out of this funnel at every stage. The first stage is, of course, that you have to believe in anthropogenic climate change, but we have lots of investors who believe in that.   Why do you need to believe in climate change to fund fusion?   Ben: Quick question: why do you need to believe in that in order to want to fund fusion? Malcolm: Okay, that's a fair point. You don't have to, but it's the easiest route. If you don't believe in climate change, then you have to believe purely in fusion's potential to provide energy that will be cheaper than fossil fuels, right? I believe that too, but it is a higher bar [01:09:00] than believing that climate change is going to encourage people, one way or another, to put a premium on clean energy. When I do my modeling, I'm not taking into account carbon taxes or renewable portfolio standards, but it's nevertheless easier to convince yourself to care about the whole thing if you think that this is an important problem, right?
Otherwise you could do it just because you think you can make a whole ton of money, but it is a high-risk way of making money, right? So, one way or another, let's say you decide you're interested. Well, no, I think I'll carry on with the funnel for climate change, because there are a few more places where people can fall out, and these places might apply to an ordinary profit-seeking investor as well, but it's less clear. So: you've decided you believe in anthropogenic climate change [01:10:00] and you'd like to see what you can do about it. Maybe you then narrow to focusing on energy. That's a pretty reasonable bet; energy is something like 70% of our emissions when you track everything back to the root. So it's a perfectly reasonable place to focus. Within energy, there are lots of different ways that you might think you can do something about it. There's geothermal power, there's tidal power, a large range of long-tail ways that you can make energy, and a lot of people really get trapped in there. Or they decide they're going to look at the demand side of energy and think about how they can get people to drive less, or insulate their buildings better, or whatever. And in my opinion, most of these people are basically getting stuck on [01:11:00] things that don't move the needle; they don't add up to a complete solution, they are merely incrementally better. So when you look at things that might move the needle on energy, I think that the supply side is way more promising, because the demand side is huge numbers of buildings, cars, and people whose habits need to change, on and on. If you can change the supply side, then it doesn't matter as much if we are insulating our houses poorly or driving too much, that kind of thing. If we have enough energy, it doesn't matter: we can make hydrocarbons
From raw inputs and energy, and we can continue to drive our cars and fly our planes and heat our houses that have natural gas furnaces, that kind of thing. Ben: Well, if you have absurd amounts of energy, you can just literally pull carbon out of the air and stick it in concrete, right? Malcolm: That's [01:12:00] the other thing. So I believe that focusing on energy supply is the right way to go because, with a distributed grid or not, we have many fewer places where we make energy than where we consume energy. And, as you said, we're looking at significantly increased demand in the likely, or certain, case where we need to do atmospheric carbon capture and sequestration. It takes energy to suck the carbon out of the atmosphere, and it takes probably more energy to put it into a form where we can store it for a long time, right? So I think you need to focus on the energy supply. Even within that, we have lots of ways of making clean energy that aren't scalable, and there's a wonderful book called Sustainable Energy Without the Hot [01:13:00] Air (https://www.withouthotair.com/) that catalogs, for the United Kingdom, all the ways that they could make energy renewably and all the ways that they use energy, and it is more or less unable to make the numbers add up without throwing nuclear in there (Ben: Got it), nuclear or solar in the Sahara and then transmitting the energy to the UK. So that's it: you can probably make renewables work, if you can store the energy. But the problem with renewables is that, well, renewables come in two forms. There are the predictable forms, like hydro, where we control when we release energy, and then the variable renewables, like wind and solar, where they make energy when nature chooses. For those, we need something to match the demand that humans have for energy [01:14:00] with the supply that nature is offering, and you can try to do that in some combination of three ways.
You can overbuild your renewables to the point where, even on bad days or in bad seasons, you're making enough energy. You can space-shift your energy by building long-distance transmission lines. And you can time-shift your energy through storage. Right? And it turns out that all three of these have real costs and challenges. Storage is an area where people get pretty excited, so this is the next point where people fall off. They say: wind and solar are doing great; storage isn't solved yet, but gosh, it's on these great cost curves, and I can totally see how storage is going to become [01:15:00] a solved problem, and then renewables will work. And I think there are two problems with that. One is that storage actually needs to get a lot cheaper if you want to scale it to the point that we can use it for seasonal storage, right? And we need a solution to the seasonal problem. There are places like California that make vastly more renewable power at some times of year than at other times of year; in California it's by a factor of 10 or 12. (Ben: Wow.) Yeah, so it's not day by day, it's month by month. And that's critical, because if you are cycling your storage daily, just to bridge between when the sun's shining and when people need their power, you get to monetize your storage every day. You get the entire capacity of your storage used roughly 365 times a year for the 20 years of your plant's life, right? If you're cycling it seasonally, meaning once a year, you get to sell 20 times [01:16:00] the capacity of your storage, rather than 20 times 365 times the capacity of your storage. So your storage needs to be one 365th of the price to hit that same price target. This is a really high bar for storage economics. So the first problem with betting on storage is that it needs to drop in price a lot to really solve the problem. The second problem is that lots of people are investing in storage.
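Malcolm's cycling arithmetic can be sketched in a few lines. This is a back-of-the-envelope illustration: the 20-year plant life and the 365-versus-1 cycle counts are the figures he uses, while the variable names and the equal-cost-per-delivered-kWh framing are my assumptions.

```python
# Rough sketch of the storage-cycling economics described above.
# Illustrative only: revenue is assumed proportional to energy delivered.

PLANT_LIFETIME_YEARS = 20

def lifetime_cycles(cycles_per_year: int) -> int:
    """Total charge/discharge cycles over the plant's lifetime."""
    return cycles_per_year * PLANT_LIFETIME_YEARS

daily_cycles = lifetime_cycles(365)   # daily cycling to bridge day and night
seasonal_cycles = lifetime_cycles(1)  # one cycle per year for seasonal storage

# To hit the same cost per delivered kWh, seasonally cycled storage must
# cost this fraction of daily-cycled storage, per kWh of capacity:
price_ratio = seasonal_cycles / daily_cycles

print(f"daily: {daily_cycles} cycles, seasonal: {seasonal_cycles} cycles")
print(f"seasonal storage must be ~1/{daily_cycles // seasonal_cycles} the price")
```

In other words, a seasonally cycled plant monetizes its capacity only 20 times over its life instead of 7,300 times, which is where the factor-of-365 price requirement comes from.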
So if you want your money to make a difference, in terms of making you a profit or having an impact on the world, you need to out-compete all those other people working on wind and solar and storage if you're going to play in that space. And that leaves something like nuclear. There are, compared to fusion, plenty of people investing in various nuclear fission approaches. So again, your company can make a difference, your money can make a difference there, [01:17:00] and it'll be a bigger difference than in wind and solar and storage, but a smaller difference than in fusion. So say you get all the way to: I think I want to invest in fusion. What you now encounter is an industry where everyone you talk to will tell you that their approach is definitely going to work and, unless they're being really nice, that everyone else's approach is definitely not going to work. So it's pretty understandable at that point to give up on the whole space. And there's a step before that: you have to get over the hurdle that fusion is always 10 years away; it's been 10 years away for as long as most of us have been alive. You have to decide that you even want to look at fusion. Then you hit the problem where everyone says nasty things about everyone else, to a first approximation. If you spend a long enough time there, you might find a company that convinces you that they are different, that all those other people are crazy, [01:18:00] and that they're worth investing in. But I've talked about why investing in one company is a bad idea. To invest in multiple companies, you have to find some advisors, or just be reckless enough to go without. So you have to find
Enough advisors who are open-minded, or you have to be sufficiently reckless to just roll the dice on these companies, or maybe just sufficiently rich that you're going to roll the dice on these companies. But it's a real hurdle to find advisors who are experienced and credible and open-minded enough to support investing in a portfolio of these companies. And even then, if you're a regular VC fund, which you probably are if you have enough money to do this, you have the problems that I mentioned earlier: your charter is to invest in a diverse [01:19:00] set of companies, and you can't put enough money into fusion to support a portfolio of companies. So that is the funnel that ends, as far as I can tell, with just Strong Atomics coming out the end as the only fund supporting a portfolio of fusion companies. That's why, in my opinion, there are not more people investing in fusion; people fall out all along the way.   Outro  I got a lot out of this conversation. Here are some of my top takeaways. There are many ways of structuring an organization that's trying to enable innovations, each with pros and cons that depend on the domain you're looking at; Malcolm realized that a VC fund is best for fusion because of the low return from shared resources and the temperaments of the people involved. Just because there's a lot of money going into a domain doesn't mean it's being spent well. I love the way that Malcolm thought very deeply about the incentives of everybody he's dealing with, and how to align them with his [01:20:00] vision of a fusion-filled future. I hope you enjoyed that. If you'd like to reach out, you can find me on Twitter at ben underscore Reinhardt. I deeply appreciate any feedback. Thank you.
Dec 7, 2018 • 59min

NASA vs DARPA with Mark Micire [Idea Machines #1]

My guest this week is Mark Micire, group lead for the Intelligent Robotics Group at NASA's Ames Research Center. Previously Mark was a program manager at DARPA, an entrepreneur, and a volunteer firefighter. The topic of this conversation is how DARPA works and why it's effective at generating game-changing technologies, the Intelligent Robotics Group at NASA, and developing robotics and technology in high-stakes scenarios. Links: Intelligent Robotics Group, DARPA, Camp Fire, DARPA Defense Sciences Office, First DARPA Grand Challenge Footage (looks like a blooper reel), FEMA Robotics   Transcript Ben: [00:00:00] Mark, welcome to the show. I want to start by talking about the Camp Fire.  Camp Fire  Ben: [00:00:04] So we have an unprecedented fire, the Camp Fire, going on right now, and it's basically being fought primarily with people. I know you have a lot of experience dealing with natural disasters and robotics for emergency situations. So I guess the big question is: why don't we have more robots fighting the Camp Fire right now? Mark: [00:00:26] Well, believe it or not, there are a lot of efforts happening right now to bring robotics to bear on those kinds of problems. Menlo Park Fire especially has one of the nation's leading groups: a small squad of absolute career firefighters who are now learning how to leverage, in their case, [00:01:00] UAVs to do aerial reconnaissance. It's been used on multiple disasters; we had the dam breakage up in almost the same area as the Camp Fire, and they were using the UAVs to do reconnaissance for those kinds of things. So the ability for fire rescue to begin adopting these new technologies is always slow. The inroads that I have seen in the last, say, five years are that they like that it has cameras.
[00:01:32] They like that it can get overhead and give them a view they wouldn't have been able to see otherwise. The fact that you can now get these UAVs with thermal imaging cameras is frighteningly useful, especially for structure fires. So those are the baby steps we've taken. Where we haven't gone yet, and what I'm hopeful we'll eventually see, is the idea that you actually have some of [00:02:00] these robots deploying suppressant: the idea that they are helping to provide water and to help put out the fire. That's a long leap from where we are right now, but I would absolutely see it being within the realm of the possible. Back in, gosh, 2008 now, so about 10 years ago, NASA was leveraging a Predator B that it had, with some [00:02:27] imagery technology underneath it, to help with the fire that was down in Big Sur, and I helped with that a little bit; back then I was just an intern here at NASA. And that's, I think, a really good example of the fire service leveraging larger government facilities and capabilities to use robotics, UAVs, and other things in a way that the fire service itself frankly doesn't have the budget or R&D [00:03:00] resources to do on its own. Ben: [00:03:00] So you think it's primarily a resources thing? Mark: It's a couple of factors. There's resources: outside of DHS (Homeland Security has a science and technology division that does some technology development), there are not a whole lot of organizations, other than commercial entities, doing R&D for fire rescue. It just doesn't exist. [00:00:28] So that's your first problem. The second problem is that, culturally, the fire service is just very slow to adopt new technology. And that's one part:
"Well, my daddy didn't need it, and my daddy's daddy didn't need it, so why the heck do I need it?", right? [00:00:49] It's easy to blame it on that. What I guess I've learned over time, after working within the fire service, is that everything is life-critical. There are very few things that you're doing when you're in the field providing that service, in this case wildfire response, where lives don't [00:04:00] kind of hang in the balance. [00:01:09] And so the technologies that you bring to bear have to be proven, because what you don't want to do is bring half-baked ideas or half-baked technologies and have that technology fail in a situation where your normal operations would have provided the right kind of service to protect those lives. [00:01:33] So the evaluation, and also the acceptance criteria, for technology are much, much higher in the fire service than in many other domains that I've worked in. I can only think of a few other ones; aircraft safety and automobile safety tend to be the same, in that [00:05:00] they're just very slow to roll in technologies and other things like that, but those two areas have government and other groups providing R&D budgets to help push the technology forward. [00:02:06] So when you get the combination of "we don't have a lot of budget for R&D" and "we're very slow to accept new technology because we have to be risk-averse," those two tend to make that domain a very slow-moving target for new technologies.  Enabling Innovations in Risk-Averse Domains  Ben: [00:02:21] That actually strikes me as very similar to NASA, actually. There's always the saying that, you know, you can't fly it until you've flown it. Do you see any ways of
making innovations happen faster in these risk-averse domains? Do you have any thoughts about that? Mark: [00:00:16] It's tough. I mean, the short answer is I don't know. I've been trying for the last 15 years and [00:06:00] I'm still swinging at it. The trick is just to keep going, and ultimately I think it comes down to exposure, and to the decision makers within the respective fields becoming comfortable with the technology. So, as we now have automobiles sharing the highways with us that are controlling themselves (and I'm not even talking fully autonomous, driverless vehicles; the fact that we have Teslas and other high-end cars with autopilots doing auto-steering and lane keeping and stuff like that), folks within the fire rescue domain can start becoming comfortable with the idea that machines can make decisions in life-critical scenarios, and that they can make the right decision on a regular basis. It sounds weird to say that something completely removed from the fire service may help improve the ability of the fire service to adopt those [00:07:00] technologies.
And the one thing I've learned over time. That as Geeks we have to realize that sometimes the technology isn't first that there's a lot of other factors that play in. [00:02:20]Mark's Mission [00:02:20] [00:02:20] Ben: [00:02:20] Yeah, absolutely.  something that I want people to hear about is I feel like you're one of the most [00:08:00] mission-driven people that I know and not to put you on the spot too much but could you tell folks what you do? [00:00:07] Like why you do what you do? [00:00:11] Mark: [00:00:11] Um well and it really depends. I'll say in yeah, you can appreciate this a depends on what it is. I'm doing so, you know for my day job. I work at Nasa have always been a space geek and an advocate for humans finding ways of working in space and one of the best ways that I have found that at least for my talents that I can help enable that is to leverage machines to do a lot of the precursor. [00:00:42] Work that allows us to put humans in those places. It turns out strangely enough of it a lot of the talents that I use for my day job here also help with work that I do on the side related to my role as search and rescue Personnel in FEMA [00:09:00] that a lot of the life safety critical things that we have to do to keep humans alive in the vacuum of space also apply to. [00:01:11] Women's Safe and finding humans at and around and after disasters and so I've always had this strange kind of bent for trying to find a technology that not only ties to a mission but then you can very clearly kind of Point your finger at that and say well that's that's really going to help someone stay safer or do their job more effectively if they had that piece of equipment. [00:01:39] Those are fun, you know. An engineering standpoint. 
Those are the kind of Base requirements that you want and and it always helps with there's a lot of other technology areas that I could have played in and I like the fact that when I'm when I'm making a design decision or an engineering trade that I can look at it and really grounded out [00:10:00] into okay. [00:02:02] Is that going to make that person safer? Is that going to make them do their job better? And it's really motivating to be able to. To have those as kind of your level one requirements as you as you try to design things that make the world better. [00:02:14] Intro to IRG [00:02:14] [00:02:14]Ben: [00:02:14] and So currently you're the head of IRG. [00:00:05] Yeah group lead is the official title. So I'm the group leader of the intelligent robotics group. Yeah, and I bet that many people haven't actually heard of the intelligent robotics for group at Ames which is kind of sad, but could you tell us a publicly shareable story that really captures IRG as an organization? [00:00:22]Mark: [00:00:22] [00:00:22] Serve, yeah, well, I would say that it is it is a an interesting Motley Crew of capabilities that that allow robots to go do things and all kinds of different domains. We have folks within our group. That specialize in ground robotic. So we have [00:11:00] Rovers that have quite literally gone to the ends of the Earth and that we've had them up in the northern Arctic. [00:00:49] We've had them in desert in Chile. We they've roamed around just about every crater or interesting Landmark that we have in California here and long story short. We have folks that not only work with and make ground robotics smart, but then. Of them and one of the things I adore about the team is that they're all filled capable. [00:01:13] So we all subscribe to the philosophy that if we're not taking this equipment out in the field and breaking it. We're probably not learning the right things. 
And so none of our robots are garage queens and stay inside inside of the lab that we love like to take our stuff outside and take them out into these domains where they're really really tested. [00:01:34] We have a group here. Subgroup within RG that's working on Technologies for the International Space [00:12:00] Station. So we have a free flyer and have worked with many of the free Flyers that are up on the International Space Station. Now, there's a new one that we are building. That should fly very soon here called Astro B, which is all you can think of it as in astronauts assistant. [00:02:01] So it's able to not only do things on its own but hopefully will be helpful to astronauts and also allow ground controllers to to have a virtual presence on the International Space Station in the way that they the way they haven't been able to. Let's see. We're turns out that when you're working with robots like this having very good maps and representations of the world's that you are exploring becomes important. [00:02:27] And so we have a sub grouped here.  That works with planetary mapping. So in the best, I guess most digestible way of describing that is that if you've ever opened up [00:13:00] Google Google Earth and kicked it into Google moon or Google Mars mode. That most of the especially the base in Ministry imagery and other products that are in that in that Google Earth We're actually generated by my group. [00:03:00] And so it turns out that when you get these pictures and imagery from satellites, they're not always right and they need a lot of kind of carrying and coercing to make them actually look correct. And so my group has a suite of software. Where that's all publicly available the that can be used to make that imagery more correct and more digestible by things like Google Earth and other systems like that and then you know in general we at any given time have, you know, north of 30 to 40 researchers that are here. 
[00:03:38] Doing all kinds of work that is relevant to robotics relative [00:14:00] relevant to space and yeah, and it's an awesome group and every single one of them is motivated and exactly the right kind of ways. [00:03:52]Organizational Nitty-Gritties: IRG [00:03:52] [00:03:52] Ben: [00:03:52] Yeah. I mean having having worked there I completely agree with that statement from personal experience. [00:03:58] And actually related to to the motivations something that I really like doing is digging into the nitty gritties of organizations that really generate Innovations. So so look what tell me about the incentives that are  at play in IRG like what really what motivates people like, how are people sort of rewarded for success and failure and how do those pieces work? [00:00:12] Mark: [00:00:12] Well, I and. I'm going to say this and it's going to sound super simple. But the IRG is one of the few places and it's one of the reasons why I wanted to when I was given the opportunity to be the group lead that I took it is I still feel like I RG is like one of the last one of the few places. [00:15:00] I guess I'll say where the research can kind of be up front. [00:00:34] We're creativity can be king and we can kind of focus on doing the good work in a way that I'll just say that is a little bit more difficult when you're out. A commercial world because you know chasing the next product sometimes has a whole bunch of things that come along with it. You know, what is the market doing what you know our is this going to be supported by Senior Management other things like that that we that we don't have to deal with that as much it has to align with NASA's Mission. [00:01:06] It has to align with what the focus is our of the agency, but I will. That because we have such good researchers here our ability to create a proposal. 
So we end up just like everyone else writing proposals to to NASA itself and winning those proposals that they that they were kind of ization is actually in the [00:16:00] fact that these researchers get to do the research that they're wanting to do and all the research that's being handed down to them by, you know, a marketing team or some corporate exec. [00:01:40] The other thing that is huge here and I know. Probably experienced it during your tenure when I say the folks are here for the right reasons. We all know every single person within IRG and I'll say that within especially NASA Ames out here in Silicon Valley every single one of us could go a thousand yards outside of the fence and be making two to three times what we make working for the government. [00:02:08] And that's not it's not so much a point of Pride. But what it does is it just helps relieve the the idea that folks are are are here for the money you're here for the research and you here for the science. I use the best analogy I make quite often is I used to. [00:17:00] I used to teach as an Adjunct professor at a community college doing and this is more than this is about 15 years ago in the courses were on like PC repair and other things it was this certification called A Plus and the I used to confound the other professors because I used to always take they had one section that they would do and it was 8 a.m. [00:02:52] On Saturday morning. It was like a it was like an 8 a.m. To 1 p.m. And it was just one day a week and I used to always take that one and the other professors were like, why are you taking an 8 a.m. Saturday course and I would smile at them and say. Because every single student it's in there. I know they want to be there. [00:03:14] I know that they are motivated and want to be there because no one in their right mind is other than a motivated student is going to get up at 8 a.m. 
On a Saturday morning to go learn about PC repair and to add in to everyone's surprise, but not my surprise. I had a [00:18:00] 100 percent pass rate on that test because it was independently tested outside out of out of the classroom and I. [00:03:39] So just smile because it was like wow, you must be a great professor and I'm like, no, I've got great students because they all are motivated to be there. So that's effectively what I have here within NASA sitting inside of you know, this Silicon Valley bubble is I have a whole bunch of frightening Lee smart people that are motivated to do good science and have absolute have financial reasons to go elsewhere and decided for themselves. [00:04:07] This is where they'd rather work. Yeah and do so the in terms of the majority. Let's break that down a little bit the way that projects happen is that you do a proposal to like, who do you propose projects to I guess [00:19:00] is the the correct question. Well the the fun part and this is one of the the freedoms and NASA has. [00:04:36] Can really propose to anybody we have projects here that our commercial so we work with like for instance. We're doing work with Nissan on autonomous vehicles. And and if actually done some really really interesting work there, you know related to visualization and other things like that which which borrows a lot from work that we do with the Rovers so so we can work with companies. [00:05:03] We work with Within. First so NASA itself. One of the ways that NASA works is that because we have multiple centers, you know, NASA Ames for instance in our group will propose to NASA headquarters. So we just pitched a couple of months ago we pitch to a program that was doing satellite-based Technologies and I flew to NASA headquarters in DC and we [00:20:00] pitched it to a much like you would do to a VC or any. [00:05:35] No any funding source, if you were a company doing it in the valley and you pitch it and we and we want it. 
We also work with other government agencies. We have done work for DARPA; we've done work with the Marine Corps. It turns out that the DoD, the Department of Defense, is interested in a lot of the ways we have worked with autonomous vehicles, as the Department of Defense tries to figure out how it wants to work with autonomous vehicles. [00:06:05] So it's easy for us to open a conversation with the Department of Defense and say, hey, here's what we did for our rovers, our UAVs, or whatever, and this may be something that you may want to consider. And a lot of times they'll come back and say, well, look, we not only want to consider that, we'd also like to put you on the proverbial payroll and have you either do the work for us or help us [00:21:00] understand [00:06:30] what the important parts of this are. We can work with academia, so we will often have projects where we partner with a university and do a joint proposal, either to NASA or to any of the different funding sources that are out there. And so NASA has a lot of flexibility. Having previously worked in the Department of Defense myself, I can say [00:06:58] NASA can do something unique, in that NASA can be a consultant, or NASA can do work for a private company. We have a thing called a Space Act Agreement, and like the Nissan work I was talking about, it seems odd that a government organization would be able to receive a paycheck, if you will, [00:07:18] from a private corporation. It turns out that NASA has a very unique way of doing that, and we leverage it, frankly, as often as we can. [00:22:00] So I realize that's probably a really long answer to a simple question, and that's to say: we can take money from just about anybody, as long as it is legal and it benefits NASA in some way. [00:07:41] Those are the only two real catches that we have. It ultimately has to benefit NASA's mission, as
you know, we are shepherds of taxpayer dollars. But as long as we can justify that, we can work with a lot of different funding sources. [00:07:58]

Aligning with NASA's Mission [00:07:58]

Ben: [00:07:58] And what is NASA's mission right now? How do you know whether something is within the purview of NASA's mission or not?

Mark: [00:08:08] Well, NASA takes its guidance from a lot of different places. There are the two A's in NASA: we have aeronautics and we have space, and those are the two missions built right into the name. [00:08:29] We [00:23:00] also take direction from NASA headquarters. On the science side, especially for space, direction is driven a lot by the decadal surveys and other guidance with respect to, and it sounds kind of funny to say, where we want to see mankind go in terms of space exploration and other things like that. But we also have Earth sciences. [00:09:02] Flipping back to what's happening up in Northern California, some of the best satellite imagery coming through there is actually being processed through NASA's Earth sciences missions. There's Worldview and a bunch of other tools out there from the Earth sciences side. [00:09:24] With all of the different things that are affecting the climate and everything else, it turns out that [00:24:00] NASA's mission is also to benefit that, and to help with Earth observations in a way that ultimately helps us understand how we might impact other worlds when we're able to achieve going there. [00:09:42]

NASA -> DARPA [00:09:42]

Ben: [00:09:42] Got it.
I'm going to transition a little bit from your time at NASA to your time at DARPA. What I wanted to know is: what were some of the biggest shocks transitioning from NASA to DARPA, and then back from DARPA to NASA? Because they're both government agencies, but they have very different feels, at least from the outside. [00:00:20]

Mark: [00:00:20] Yeah. Um, gosh. Especially from NASA to DARPA, I guess the biggest thing that comes to mind is that, as a program manager, it is frighteningly empowering to go to an organization like that. At NASA here, [00:25:00] at my level, and with the group scenario that I just described to you, [00:00:51] we're in the trenches, right? We're trying to do the science, we're doing the research, and we're trying to make an impact at a ground level. When you go in as a program manager at DARPA, you're trying to change a field. You're basically being given the power to say: within this field, let's say autonomous vehicles, [00:01:19] I see the following gap. And in stating that, and in creating the requests for proposals and other things that bring researchers to DARPA's door, you're not saying, I'm going to go do this technological thing. You're saying, I think everyone needs to focus on this part of the [00:26:00] technology landscape. [00:01:44] That's a different conversation at a very different level, and it was startling to be, frankly, one of those program managers, where you say, hey, I don't think the field is doing this right, and then an entire field turns to you and says, oh, okay, well then let's do the thing that you're suggesting. That is an interesting and kind of empowering position to be in.
[00:02:11] NASA has impact too, but at DARPA specifically, especially with Department of Defense type technologies that eventually roll out into civilian use, you get to speak at such a different level, and at a level that is accepting of risk in a way that NASA is not. At DARPA, if an idea isn't risky enough, [00:27:00] [00:02:43] a program can basically fail to make the cut because it didn't have enough gusto. Within DARPA they call it the laughability test: if your idea isn't just crazy enough that it's almost laughable, then it's going to have to work a lot harder to get there. [00:03:07] So I'd say, in conclusion, the risk, and just the empowerment to move an entire field in a different direction, would probably be the biggest difference between my NASA world and going over to moonlight as a program manager. [00:03:26]

Fields Impacted by DARPA [00:03:26]

Ben: [00:00:00] And what are some fields that you all, that DARPA, have really moved? That concept is incredible and makes sense, and I haven't heard it expressed so concisely before. I'd love some [00:28:00] examples of that.

Mark: [00:00:02] One of the best, and I think the most recent example where we can now see the impact, is autonomous vehicles. [00:00:12] You have to remember that it is now over a decade since the first DARPA Grand Challenge happened. I was reflecting on this while I was being chased down by a Tesla on the way into work this morning that was clearly driving itself autonomously. And I remembered, and most people forget, that the first DARPA Grand Challenge, [00:00:38] first of all, was millions and millions of dollars in investment, and no one won. No one got to the finish line.
And in thinking about risk and risk acceptance, I think that's one of the best data points for DARPA's approach: not only saying, this is really hard, we're going to call it a Grand Challenge, and we're going to have these [00:29:00] vehicles basically racing across the desert, which was gutsy enough from a risk standpoint, but then also failing and doing it again. They said, you know what, we literally had a [00:01:16] Humvee flipped over, on fire, in the desert, and that was on the evening news for everyone to enjoy, to the embarrassment of DARPA and the DoD and everybody else. And then they said: no, we're going to double down. This is really worth it, and we need to make this happen. And the impact of that is huge, because that then became the ground floor [00:01:46] of the vehicles that we now have running around, especially out here in the Bay Area. You've got fully autonomous vehicles now that are able to navigate their way through all of the different difficulties and complex situations [00:30:00] that can be presented. The folks that were there, notably Sebastian Thrun and his Stanford team that won the Grand Challenge, went on to work on what was the Google autonomous car, which eventually became Waymo, and all of the different companies and talent that sprung out of all of that. [00:02:25] That was all born over a decade ago by an organization that is using your taxpayer dollars to do risky things, and to say: for this autonomy thing, we really think that vehicles are where the money needs to be spent, and spent in a real way. That takes guts, and it's still, in my mind, one of the only organizations really able to make an impact like that, to tell an entire field:
[00:02:53] hey, I don't think you're doing this right, and here's what I want you to do, and I'm going to put money behind those words, and we're going to go change the world. And a [00:31:00] decade later, we've got autonomous vehicles quite literally beside you on the highway. That's pretty awesome. [00:03:07]

Levels of Risk DARPA Shoots For [00:03:07]

Ben: [00:03:07] That is incredibly awesome. [00:03:09] Do you have a sense of what level of risk you're shooting for? I'm thinking: what is the acceptable, or even desired, failure rate? Or is there a sense of how many fields per decade you're shooting for? [00:03:42] Because you think about it, and even if it's changing one field per decade, the amount of change that comes out of something like autonomous cars, or even the human-computer interaction work that came out of the 60s, might make the whole thing worth it. So does anybody even think about it in terms of numbers at all? [00:32:00]

Mark: [00:00:03] I never heard it framed that way. The mantra that was always drilled into us was that the way you kept score was by the number of transitions, and I guess that's more of a general DoD term. [00:00:25] That's to say, for something you create, how many times did someone take that technology and go use it for something? And so we would count a transition as, say, the Army decided to take our autonomous vehicle and use it for this; but we also got contacted by Bosch, who were interested in leveraging the thing we built with a new sensor that they're making commercially available, and we provided the missing link that now allows them to use it safely in [00:00:59] vehicles. And so you kind of keep score internally on [00:33:00] that basis. The other thing, though, that DARPA does is keep multiple horses in the race.
DARPA is organized into multiple floors that have different specializations. Just a couple of examples: there is the Biological Technologies Office and the Microsystems Technology Office, and each one of those [00:01:29] floors has a specialization. So you're bringing in these program managers, you're empowering them to go change their respective fields, and then you're doing that across multiple broad domains, like biology and microsystems and other things like that. And that's awesome in the way it provides overlap. I worked in what's called DSO, the Defense Sciences Office, which works on [00:34:00] first-principles science and physics and mathematics and other things like that. The fact that you can, as somebody working there, go talk to somebody who was fundamental in the development of MEMS technology, which is what MTO, the Microsystems Technology Office, works on, [00:02:21] and then, when you want to see how, let's say, a new chip leveraging MEMS technology might parallel or be inspired by biology, go get one of the experts from the Biological Technologies Office to scrimmage on some new idea that you're having, that's awesome. [00:02:44] What that does is end up being a multiplier, a catalyst, for innovations, where you've got multiple domains all being affected in the same kind of positive feedback loop. So I would say that's the biggest thing. Directly to your question: I don't ever remember anybody saying, okay, [00:03:03] we're not [00:35:00] hitting quota, we need another six domain-changing ideas or we won't have satisfied our obligation to Congress. I don't ever remember any conversations like that.
[00:03:16] Organizational Nitty Gritties: DARPA [00:03:16]

Ben: [00:03:16] Yeah, that description of the cross-disciplinary interactions is shockingly similar to some descriptions that I've heard of Bell Labs, and the parallels are really interesting. [00:03:32] I want to dig into the organizational nitty-gritties of DARPA as well. All of the program managers, who are the drivers of DARPA, are basically temporary employees. So how do the incentives there work? What are your goals as a program manager, and what drives people, what incentivizes them to do their work? [00:00:04]

Mark: [00:00:04] [00:36:00] Well, you're right. As a DARPA program manager, you're there on typically two-year renewable contracts. You go in, you have basically two years, at which point you're evaluated on how well your programs are doing, and then you may be renewed, typically for another two years. [00:00:26] Most program managers are there for about three years; that's the center of the bell curve. The motivation is simple, in that you're being [00:00:51] given one of the largest platforms certainly within DoD, if not within the overall research community. DARPA has a bit of a swagger, a brand recognition, such that when DARPA says it is now going to focus on this particular type of sensor, this particular type of technology, you as [00:37:00] a program manager have the ability to go talk to the best of the best, the folks that are either changing or moving or working in those respective technology bases. You can drop somebody an email, and the fact that it's you at darpa.mil will probably get you a response that you might not have been able to get otherwise.
[00:01:28] So that's, I would say, one of the biggest motivators an incoming program manager has going in. The other big motivator is that you're there for a limited amount of time. Four years may sound like a lot of time; it's not. It really is not, because it takes about a year to go from an idea on the back of a napkin [00:01:57] to the kickoff of a program. [00:38:00] For as much as it looks loose and free and a little crazy in terms of the ideas, it turns out there's a pretty regimented process, I'll jokingly call it a hazing ritual, on the back side that involves multiple pitches. [00:02:21] There's a level of programmatic oversight called a Tech Council that you have to present to, and it is extremely critical of whatever it is you're presenting. I'll admit, those were some of the toughest pitches and certainly the toughest presentations I ever prepared for. My first Tech Council was way more difficult than anything I ever did for my PhD dissertation or anything [00:02:52] like that. And so, if you're on, let's say, a three-year time scale and it takes you a year to get a [00:39:00] program up and running, you've got enough time to maybe make two or three dents in the universe, which is what you're hoping to do when you go in the door. [00:03:16] And then the other thing that happens is that program managers are cycling out; everybody's on this kind of rotation.
Even as they rotate out after three years, the other program managers have to inherit the programs that are up and running, programs that some previous program manager may have pitched and awarded but who is now headed off to make beaucoup bucks in industry or whatever. And so it's another, I'll say, distraction that you have, because program managers sometimes naively, myself included, go in thinking, [00:03:47] okay, I'm just going to go in and pitch my own ideas, and I don't even know what this inheriting-other-programs thing is, but I'm going to try to avoid it as much as possible. And then you've got three or four or five different programs that you're running, and hopefully what you've done is build a good [00:40:00] staff, because you're able to assemble your own staff, and you can keep the ball rolling. That's the cycle, if I can give you a day-in-the-life view: you go in, [00:04:19] you pitch and come up with new ideas and try to get them through Tech Council. Once they get through Tech Council, you've got a program up and running, and as soon as that program is up and running, you've got to be looking toward the next program while your staff keeps the ball rolling on your other programs. Then you rinse and repeat at least three or four times. [00:04:43]

What does success or failure look like at DARPA [00:04:43]

Ben: [00:00:00] And what does the end of a program look like, either success or failure?

Mark: [00:00:11] Um, it depends on the program, and it depends on the objectives of the program. The Grand Challenges always end with [00:41:00] huge fanfare and robots, presumably, running through finish lines and other things like that. There are other programs that end much, much more quietly, where a technology may have been built that is just dramatically enabling.
[00:00:37] The final tests occur, and a lot of times DARPA may or may not have an immediate use for the technology, or the reasons for the technology being built may have shifted since the program started. And so you may see the companies basically take that technology back and continue improving on it or incorporating it into their products. And that's a very quiet [00:01:07] closure to what was a really, really good program. Then, presumably, you would see that technology pop up in the consumer world, in [00:42:00] kind of our real world, in the next four to five years or so. So it's the full spectrum, as you would probably imagine: some of the programs fail loudly, some of them fail quietly, [00:01:35] and the successes are the same. Some of the successes come with great fanfare, and other times, and I'll say this includes some of the most enabling technologies out there, they close their time and their tenure at DARPA very quietly, and then some years later go on to do great things for the public. [00:01:53]

How DARPA Innovations get into the world [00:01:53]

Ben: [00:01:53] That's something that I hadn't thought about. So the expectation, the model of how the technology then gets into the world, is just that the people who are working on it as part of the program are then the ones to go and take the ball and run with it. Is that accurate? [00:02:18]

Mark: [00:02:18] Absolutely, and [00:43:00] I'd say that's a difference. Strictly speaking, [00:02:22] no research happens within DARPA's walls. And I guess that's one of the things that both Hollywood and the popular description of DARPA sometimes get confused:
the idea that DARPA is presumably a warehouse full of mad scientists, and you go inside and everybody's in lab coats and it looks like something out of The X-Files. That's not the case at all. DARPA is there first to catalyze technologies for DoD purposes, but [00:02:59] the folks that are working for DoD are also companies that are producing products, many of them products that are very much outside of DoD. And so the spillover, and the fact that DARPA can, I'll say relatively quietly, create technology that [00:44:00] is a catalyst for the greater good, or the greater use of technology more broadly, that is a wonderful [00:03:28] ability that DARPA has that a lot of other labs don't. To give you an example, take the Air Force Research Laboratory or the Army Research Laboratory or any of the research labs attached to the particular branches of the military. Those do have actual researchers, much like NASA Ames here, where we have actual researchers inside of our four walls doing work, and we can do work that can be exclusive to the government.
But in DARPA's case, because there is no research being done within its four walls, most of the contractors, most of what they would call the performers, the folks performing the technology development, depending on the contract, and the contracts are usually written this way, can take those technologies and use them for [00:45:00] whatever they'd like after the terms of the contract are done. [00:04:26]

Improving the Process of Getting DARPA Innovations into the world [00:04:26]

Ben: [00:04:26] Something that I've always wondered is: you try so many things at DARPA, and there's no good way of knowing all the things that have been tried and what the results were. Is there ever any thought of having a better knowledge base of what's been tried, who tried it, and what the result was? Because it feels like for every technology that was developed by a company who then picks it up and runs with it, [00:00:04] sometimes there's something developed by a lab that is full of folks who just wanted to do the research and have no desire to then push it out into the world. So is there any effort to make that [00:46:00] process better? [00:00:06]

Mark: [00:00:06] Yes and no, and this is a bit of a trick question, so I'll answer the tricky part first. [00:00:12] Let me back up. The obvious answer is that DARPA, especially within the last five years or so, has been working much harder to be more open with the public about the work being done. You can hit DARPA's website and get to the 80th percentile of an understanding of the work being done within DARPA. The balance, the twenty percent, is stuff
[00:00:44] that's either classified or of a nature where you would just need to do a little more digging, or talk with the program manager, to really understand what's happening. Okay, so that's the straightforward answer. The trick answer here is that it's sometimes better to [00:47:00] have folks go in who don't know their history, [00:01:05] who don't know why that previous program failed, because since that previous program ran, technology may have changed. There may be something that's different today that didn't exist 10 years ago when that program was tried. There was this interesting effect within DARPA: because you're rotating program [00:01:31] managers out about every three to four years, and because, I'll say it like this, DARPA in the past had not done a very good job of documenting all of the programs it had been running, there was a tendency for a program manager to come to the same epiphany that an equivalent program manager had come to a decade earlier. [00:01:56] But that doesn't mean that that program shouldn't be funded now. There were folks within DARPA who had been there for a long [00:48:00] time. Interestingly enough, the support contractors, we call them SETAs, which stands for Systems Engineering and Technical Assistance, include some support staff who have been there for multiple decades. [00:02:20] They were back at DARPA during the roaring 80s and 90s, which was the heyday for some of the crazier DARPA stuff. You would have a program manager go and pitch some idea, and the old-timers in the back would stir; one would lean over to the other, elbow him, point at a slide, and they would both giggle. And you would ask them later: hey, what was that weird body language?
[00:02:48] And they're like, yeah, you know, we tried this back in the 90s, and it didn't work out because laser technology was insufficiently precise in terms of its timing, or some other technical aspect or whatever, but it's good to see you doing this, because I think it [00:49:00] actually has a fighting [00:03:06] chance of making it through this time. Hearing that, and watching it happen multiple times, was interesting, because we tend to say, oh well, if somebody already tried it, I'm probably not going to try it again. Whereas with DARPA, that's built into the model. The ignorance, and I'll say [00:03:26] it is ignorance, is ignorance of the fact that the idea and the epiphany you just came up with may have been done before. I want to believe it's by design that they will allow a program to be funded that may have been very similar to one funded earlier, [00:03:48] because it's now under a whole new set of capabilities in terms of technology. If you do that intelligently, that's actually a blessing for folks trying to come up with new programs. [00:04:04]

The Heilmeier Catechism [00:04:04]

Ben: [00:04:04] The [00:50:00] concept of forgetting things that have been tried feels almost blasphemous on the face of it, right? [00:04:12] Like, that's why I do wonder if there's a middle ground where you say, we tried this, it failed for these reasons, and then whenever someone wants to pick it up again, they can know that it's been tried, and they have to make the argument of: this is why the world is different now. [00:04:31]

Mark: [00:04:31] Yes.
So that is actually built in at DARPA. One of the framings they use for pitches is this thing called the Heilmeier Catechism, which is basically a framework that one of the previous DARPA directors created. It says: if you're going to pitch an idea, pitch it within this framing, and that will help you codify your argument and make it succinct. One of the lines within the catechism [00:00:27] is: why can this happen now? And that addresses the [00:51:00] kind of ignorance I was talking about before. As a program manager, when you pitch that thing and you realize that some program manager did it back in '87, and you're all bummed because, oh man, you can't come up with an original idea in these four walls that somebody hasn't had previously, [00:00:52] then, after you get over being hurt that your idea has already been done, you go talk to some of the original contractors, you go talk to some of the SETAs, you talk to the folks who were there, and you figure out what is different. And that is part of the catechism: what is different [00:01:13] now that will enable this to work in a way that it didn't work previously? [00:01:18]

Best ways to Enable Robotics [00:01:18]

Ben: [00:01:18] Yeah, the catechism is, I think, a wonderful set of questions that people don't ask enough outside of DARPA, and I'll definitely put a link to it in the show notes. I know we're coming up on time, so as a final question: [00:52:00] you've been involved in robotics in one way or another for quite some time, in academia, in government, and in startups, [00:01:42] and it's a notoriously tricky field in terms of the amount of hype and excitement and possibility versus the reality of robots coming into, especially, the unstructured real world that we live in. Why do you think that is?
Is there a better way to do it, from all the different systems that you've been a part of? Is there an entirely different system? [00:02:10] What would you change to make more of that happen? [00:02:16]

Mark: [00:02:16] I hate to say it like this, but I don't know that there's much I would change. I think that right now, especially working in robotics, when I look at the capabilities, the [00:53:00] sensors, all of the enabling work that we have right now in terms of machine learning and autonomy and everything else, this is a great day to be alive and working in the field of robotics. [00:02:41] And I'll feel like the old man as I say this, but I started back in the late 90s and early 2000s, and frankly, when I think of the tools and the platforms and sensors that we had to work with, and my experience especially was a grad student experience, [00:03:08] I remember how much time we would spend just screwing around with sensors that didn't work right, and platforms that weren't precise in their movements, and all the other aspects that make robotics robotics. And I now look at today: the fact that we've got [00:03:30] off-the-shelf platforms that we can go find, [00:54:00] low-cost platforms that let you dig deep into research areas that are still just wide open. In the mid-2000s, if you wanted to do autonomous car research, you basically needed to know how to work with crazy high-power servos and other things like that. [00:04:03] Now you go buy a Prius or a Tesla or something, you know what I mean, and you're off; the platform is built for you. With the lidar, the computing power, and everything else we have today, to answer your question right now:
[00:04:23] I don't know that I would change a thing. I maybe naively believe that we have all of the tools we need to really make dramatic impacts, [00:55:00] and I believe we are making dramatic impacts in the world we're living in by enabling automation and autonomy to do really, really incredible things. [00:04:43] The biggest thing is for folks to go back, along the lines of your last line of questioning about forgetting and remembering the things we've done in the past. I find that some of the best ideas I'm seeing come forward in robotics and autonomy are [00:05:01] ideas that were really born back in the 90s. We just didn't have the computing power or the sensors to pull them off, and now we do. And so it's almost: go look back, create a renaissance of revisiting some of the really, really great ideas that just didn't have their day [00:05:23] back when things were a little more scarce in terms of computing and algorithmic complexity and other things like that, which we can now address in a really powerful way.

Ben: That [00:56:00] is quite a note of optimism. I really appreciate it, Mark. Thank you so much for doing this. I want to let you get on with your day. [00:00:06] I've learned a ton, and I hope other folks have as well.

Mark: Absolutely. Well, thank you for having me on. I appreciate it.
