
Idea Machines

Latest episodes

May 27, 2024 • 30min

Speculative Technologies with Ben Reinhardt [Macroscience cross-post]

Tim Hwang turns the tables and interviews me (Ben) about Speculative Technologies and research management. 
Feb 10, 2024 • 47min

Industrial Research with Peter van Hardenberg [Idea Machines #50]

Peter van Hardenberg discusses the contrast between Industrialists and Academics, the evolution of Ink&Switch, the Hollywood Model in R&D, internal lab infrastructure, and the importance of building a supportive community for idea sharing and project management in a research lab setting.
Nov 27, 2023 • 57min

MACROSCIENCE with Tim Hwang [Idea Machines #49]

A discussion with Tim Hwang on historical simulations, policy-science interaction, creative destruction in research, regulating scientific markets, and the clock speeds of regulation versus technology. The conversation explores macroscience, the metabolism of science, and indicators for the health of a field; simulations of historical events and the nuances of science discourse; and navigating regulatory challenges in technology, balancing national security with progress, and ethical considerations in research.
Oct 3, 2022 • 56min

Idea Machines with Nadia Asparouhova [Idea Machines #48]

Nadia Asparouhova talks about idea machines on Idea Machines! Idea machines, of course, being her framework around societal organisms that turn ideas into outcomes. We also talk about the relationship between philanthropy and status, public goods, and more.

Nadia is a hard-to-categorize doer of many things. In the past, she spent many years exploring the funding, governance, and social dynamics of open source software, both writing a book about it called "Working in Public" and putting those ideas into practice at GitHub, where she worked to improve the developer experience. She explored parasocial communities and reputation-based economies as an independent researcher at Protocol Labs and put those ideas into practice as employee number two at Substack, focusing on the writer experience. She's currently researching what the new tech elite will look like, which forms the base of a lot of our conversation. Completely independently, the two of us came up with the term "idea machines" to describe the same thing; in her words: "self-sustaining organisms that contain all the parts needed to turn ideas into outcomes." I hope you enjoy my conversation with Nadia Asparouhova.

Links
Nadia's Idea Machines Piece
Nadia's Website
Working in Public: The Making and Maintenance of Open Source Software

Transcript

[00:01:59] Ben: I really like your way of defining things and bringing clarity to a lot of these very fuzzy words that get thrown around. So I'd love to get your take on a few definitions to start off with. In your mind, what is tech? When we talk about tech and philanthropy, what is that entity?

[00:02:23] Nadia: Yeah, tech is definitely a fuzzy term. I think it's best defined as a culture, more than a business industry.
And I think, yeah, tech has been [00:02:35] associated with startups historically, but I think it's transitioning from being this pure software industry to being more like a way of thinking. But personally, I don't think I've come across a good definition for tech anywhere. It's kind of... you know?

[00:02:52] Ben: Yeah. Do you think you could point to some very characteristic mindsets of tech that you think really set it apart?

[00:03:06] Nadia: Yeah. The best known would probably be failing fast, and moving fast and breaking things. There's the interest in the sort of David and Goliath model of an individual going up against an institution or some sort of complex bureaucracy that needs to be broken apart. The notion of disrupting, I think, is a very tech mindset: looking at a problem and saying, how can we do this better? So in a [00:03:35] weird way, especially in contrast to crypto, I feel like tech is often about iterating upon the way things are, or improving things, even though I don't know that tech would like to be defined that way necessarily. When I compare it to the crypto mindset, I feel like tech is more about breaking apart institutions, or trying to do things better.

[00:04:00] Ben: As opposed to... so could you then dig into the crypto mindset, by contrast? I think that's a subtle difference that a lot of people don't go into.

[00:04:10] Nadia: Yeah. I think the crypto mindset is a little bit more about building a parallel universe entirely. For one, I don't see the same drive towards creating monopolies, and I don't know if that was always a core value of tech, but in practice that's kind of what it's been about.
You try to be the one thing that is dominating a market. Whereas with crypto, because people have [00:04:35] decentralization as a core value, at least at this stage of its maturity, it's more about building lots of different experiments or trying lots of different things, and enabling people to have their own little corner of the universe where they have all the tools they need to build their own world. Whereas the tech mindset seems to imply that there is only one world, the world is dominated by these legacy institutions, and it's tech's job to fix those problems. So it's very much engaged with what it sees as that legacy world.

[00:05:10] Ben: Yeah, I hadn't really thought about it that way, but that totally makes sense. And I'm sure other people have talked about this, but do you feel that is an artifact of the nature of the technology that they're predicated on? Like the difference between, I guess, the [00:05:35] internet of SaaS and servers, and then the internet of blockchains and distributed things?

[00:05:38] Nadia: I mean, it's weird, because if you think about the early computing days, I don't really get that feeling at all. I'm not a computer historian or a technology historian, so I'm sure someone else has a much more nuanced answer to this than I do. But when I think of, say, sixties computing, it feels really intertwined with creating new worlds. And because crypto is so new, we can only really observe what's happening right now. I don't know that crypto will always look exactly like this in the future. In fact, it almost certainly will not.
So it's hard to know what its core distinct values are, but I just sort of notice the contrast right now, at least. Probably, if you picked a different point in tech's history, pre-startups, I guess, or pre that commercialization phase, that wealth accumulation phase, it was also much more pie in the sky. But it feels like at least since the startup mindset, or whenever that point of [00:06:35] history started, all the big successes were really about overturning legacy industries. The term disruption was such a buzzword. It's about taking something that's not working and making it better, which I think is very intertwined with the programmer mindset.

[00:06:51] Ben: Yeah, it's true. And I'm just thinking about my impression of the early internet, and it did not have that same flavor. So perhaps it's an artifact of the stage of a culture or ecosystem more than the technology underlying it, I guess.

[00:07:10] Nadia: And it's strange, because I feel like there are people today who still... maybe fetishize is too strong a word, but who embrace that early computing mindset. But it almost feels like a subculture now or something. I don't find that that's the prevalent mindset in tech.

[00:07:33] Ben: Well, it feels like the [00:07:35] mechanisms that drive tech really do center... I mean, this is my bias, but I feel like the way that tech is funded is primarily through venture capital, which only works if you're shooting for a truly massive result. And the way that you get a truly massive result is not to build a little niche thing, but to try to take over an industry.

[00:08:03] Nadia: It's about arbitrage.

[00:08:05] Ben: Yeah.
Or even not quite arbitrage, but that's where the massive amount of money is. And...

[00:08:14] Nadia: I mean, financially. When I think about the way that venture capital works, it's...

[00:08:19] Ben: Yeah.

[00:08:20] Nadia: ...sort of exploiting, I guess, the low-margin cost models.

[00:08:25] Ben: Yeah, definitely. And then using that to take over an industry. Whereas maybe if you're not being funded in a way [00:08:35] that demands that sort of returns, you don't need to take as much of a take-over-the-world mindset.

[00:08:41] Nadia: Yeah. Although I don't think those two things have to be at odds with each other. There's the R&D phase, which is much more academic in nature and much more exploratory, and then venture capital is better suited for the point at which some of those ideas can be commercialized or have a commercial opportunity. But I don't think they're fighting with each other either.

[00:09:07] Ben: Really? I guess... can I disagree, and say that it feels like the stance that venture-type funding forces on people is: we might fail, but we're setting out to capture a huge, huge amount of value. [00:09:35] And just in order for venture portfolios to work, that needs to be the mindset. And there are other ways of funding things that ask for more modest returns. They can't take as many risks, they come with other constraints, but the need for those power-law returns does drive the need to be very ambitious in terms of scale.
[00:10:10] Nadia: I guess, what's an example of something that has modest financial returns but massive social impact, that can't be funded through philanthropy and academia, or through venture capital?

[00:10:29] Ben: Well, I mean... [00:10:35] I think that...

[00:10:38] Nadia: Or I guess it...

[00:10:39] Ben: Yeah, I think the philanthropy piece is really important. Sorry, go ahead.

[00:10:42] Nadia: Yeah. I guess it's just that there are different types of funding for different stages. I sort of visualize this pipeline where, when you're in the R&D phase, venture capital is not for you; there are other types of funding available. And then, when you get to the point where there are commercial opportunities, you switch over to a different kind of funding.

[00:11:01] Ben: Yeah, no, I definitely agree with that. I think what I was at least talking about is that venture capital, in the tech world, is the go-to funding mechanism.

[00:11:16] Nadia: Yeah. Which is partly why I'm interested in idea machines and other sources of funding that feel like they're at least starting to emerge now. Which I think gets back to those kinds of roots. It's actually surprising to me that you can talk to people in tech who don't always make the connection that tech started as an, [00:11:35] you know, academically and government funded enterprise, and venture capital came along later. And so maybe we're at that point where there's been enough wealth generated to start that cycle again.

[00:11:47] Ben: Yeah. And speaking of that, another distinction that you've made in your writing that I think is really important is the difference between charity and philanthropy.
Do you mind unpacking how you think about that?

[00:12:00] Nadia: Yeah. Charity is more like direct services. There's sort of a one-to-one: you put something in, you get something of similar, equal measure back out of it. Charity can be emergency relief for disasters, or charitable services for people that need that kind of support. And to me, it's just strange that it always gets lumped in with philanthropy, which is a different enterprise entirely. Philanthropy is more of the early-stage pipeline; [00:12:35] it's more like venture capital, but for public goods. In the same way that venture capital is very early-stage financing for private goods, philanthropy is very early-stage financing for public goods. And if those public goods show promise, or need to be scaled, then you can go to government to get more funding to sustain them, or maybe there are commercial opportunities; there are multiple paths that can branch out from there. But philanthropy at its heart is about experimenting with really wild and crazy ideas that benefit public society, that could have massive social returns if successful. Whereas charity is not really about risk-taking; charity is really about providing a stable source of financing for those who really need it in the moment.

[00:13:21] Ben: There's two things I want to poke at there. You describe philanthropy as crazy risk-taking. Do you think that most [00:13:35] philanthropists see it that way?

[00:13:37] Nadia: Today? No. Philanthropy has had this very varied history. Modern philanthropy in its current form has only really existed since the late 1800s, early 1900s. So we've got, whatever, a hundred, hundred and fifty years.
Most adults today have really only grown up in the phase of philanthropy that you might call, to be a little cynical about it, late-stage modern philanthropy. And part of that has just come from, well, here's an abridged history of philanthropy. Early on, in premodern philanthropy, the church maybe played more of that role, as a force in both philanthropic experiments and direct services. And then, post-Gilded Age, post-Industrial Revolution, you had people with a lot of self-made wealth, and you had people experimenting with new ideas [00:14:35] to provide public goods and services to society. Government at the time was not really playing a role in that, so all of that was coming from private citizens and private capital. So yeah, there was a time in which philanthropy was much more experimental in that way. But then as government stepped in around, you know, the mid-1900s to become the primary provider and funder of public services, that diminished the role of philanthropy. And then in the late 1960s, foundations became much more heavily regulated. I think that was the turning point where philanthropy went from being this highly experimental, aggressively risk-taking enterprise to something much more safe, because it was hampered by all these accountability requirements. So yeah, I think philanthropy today is not representative of what philanthropy has been historically, or what it could be.

[00:15:31] Ben: And what are some of your favorite weird, [00:15:35] risky, pre-regulation philanthropic things?

[00:15:40] Nadia: Oh, I don't do favorites, but...

[00:15:42] Ben: Oh, okay.
Well, what are some amusing examples of risky philanthropic bets? Take a couple.

[00:15:54] Nadia: Probably the most famous example would be the Carnegie public libraries. Our public library system started as a privately funded experiment. For each library that was created, Andrew Carnegie would help fund its creation, and then the local government or the local community would have to find a way to continue to sustain and support it over the years. So it was this nice sort of public-private type partnership. But then you also have scientific research and public health initiatives that were philanthropically supported and funded. So Rockefeller's eradication of hookworm as a public health initiative, finding a cure for yellow fever. Those are some [00:16:35] examples. I mean, the public school education system in the South did not exist until there was an initiative to say, why aren't there public schools in the South, and how do we just create them and fund them? And then also the slate of American private universities, which were modeled after European universities at the time, but also came about after private philanthropists funded research into understanding why American higher education was, at the time, not that good compared to the German university models. A bunch of research was produced from that, and then they set out to reform American universities. So there are just so many examples of people just sort of saying... and one thing I do want to caveat is that I'm not regressive in the sense of: wow, this thing worked really well a hundred years ago,
so why don't we just do the exact same thing again? I feel like that's a common pitfall in history. It's not that I think everything about the world is completely different today versus, let's say, 1900, but...

[00:17:39] Ben: It was different in the past, and so it could be different in the future.

[00:17:41] Nadia: Exactly. That's sort of the takeaway: where we're at right now is not a terminal state, or it doesn't have to be. Philanthropy has been through many different phases, and it can continue to have other phases in the future. They're not going to look exactly like they did historically, but yeah.

[00:17:56] Ben: That's such a good distinction, and it goes for so many things. I think I suffer the same thing, where, you know, it's not bringing up the historical examples to say we should go back to this; it's to say it has been different, and it could be different.

[00:18:18] Nadia: Something I think about: take any adult today who's active in the workforce. We're talking about the span of, you know, a 30-year institutional memory or something. And so [00:18:35] anything we think about, like what is possible or not possible, is limited by our biological lifespans. All we ever know is what we've grown up with in, let's say, the last 30-ish years.
And so the reason it's important to study history is to remind yourself that everything you know... what I think about philanthropy right now, based on the inputs I've been given in my lifetime, is very different from what I see if I study history and go: oh, actually, it's only been that way for a pretty short amount of time. Only a few decades.

[00:19:06] Ben: Yeah, totally. And I guess people might disagree with this, but from my perspective there's been less institutional change within the lifetime of most people in the workforce, and especially most people in tech, which tends to skew younger, than there was in the past.

[00:19:30] Nadia: Yeah.

[00:19:32] Ben: Or, to put a finer point on it, [00:19:35] there seems to have been less institutional change in the latter half of the 20th century than in the first two thirds of it.

[00:19:44] Nadia: Yeah, I think that's right. It feels much more stagnant.

[00:19:49] Ben: Yeah. And to pull us back to definitions real quick: how do you like to describe idea machines to people? If someone was like, "Nadia, what is an idea machine, besides this podcast?", how would you describe that?

[00:20:05] Nadia: I would point them to my blog post, so I don't have to explain it.

[00:20:08] Ben: Okay. Excellent. Perfect. Everybody.
[00:20:14] Nadia: If I had to explain the short version, I would say it's kind of like the modern successor to philanthropic foundations, maybe, depending who I'm talking to. Or it's a framework for understanding the interaction between funders and communities that are [00:20:35] centered around a similar ideology, and how they turn ideas into outcomes. There's a whole bunch of soft social infrastructure it takes to take someone who says, "Hey, I have an idea, why don't we do X?" and make that actually happen in the world. There are so many different inputs that come together to make that happen, and this was just my attempt at creating a framework for it.

[00:20:54] Ben: Yeah, no, I think it's a really good framework. And one of the powerful things in it is that you say there are these five components: an ideology, a community, ideas, an agenda, and people who capitalize the agenda. And I'll caveat this for the listeners: in the piece you use effective altruism, or EA for short, as kind of a case study in idea machines, and so it is very topical right now. I think what we will try to avoid is the topical topics about it, and instead use it as an object of study. I think it's actually a very good object of study [00:21:35] for thinking about these things. And one of the things that stood out to me about EA, as opposed to many other philanthropies, is that EA feels like one of the few places where the people who are capitalizing the agenda are willing to capitalize other people's agendas, as opposed to imposing their own. Do you get a sense of that?

[00:22:03] Nadia: Yeah. It feels like there's...
...some sort of shift there. So, think about someone who got super wealthy in, let's call it the heyday of the 501(c)(3) foundation, say the fifties or something. Someone makes a ton of money, and at some point they end up setting up a charitable foundation and appoint a committee of people to help them figure out what their agenda should be. But it's all flowing from the donor, saying: I want to [00:22:35] create this thing in the world, I want to fund this thing in the world, because it's my personal interest. Whereas I feel like we're starting to see some examples today where, sure, there has to be alignment between a funder's interest and maybe a community's interest, but in some ways the agenda is being driven not just by the funder or foundation staff, but by a community of people that are all talking to each other and saying: here's what we think is the most important agenda. So it feels in some ways much more organic. That's not to say the funder doesn't have an influence, but it's much more intertwined, and it could go in a lot of different directions. You see that with EA, which was the example I used: the agenda is very strongly driven by its community. It's not like there's one foundation of people just sitting in an ivory tower saying, here's what we think we should fund, and then going off and doing it. And I think that just creates a lot more [00:23:35] possibilities for serendipity around what kinds of ideas end up getting funded.

[00:23:38] Ben: Yeah.
And it also feels like, at least to me, and I'd be interested if you agree with this, it makes for situations where you can pool capital more easily for larger projects. When there's not a broader agenda, the funding gets very dispersed. Whereas if there's a way for multiple funders to say, okay, this is an important thing, it makes it much easier to pool capital for bigger ideas.

[00:24:19] Nadia: Yeah, I think that's right. Within the world of philanthropy, there's just a more natural pull towards zero-sum games and competitiveness around funding, because there's just less funding available, and because there is always this sort of [00:24:35] reputation or status aspect intertwined with it, where you want to be the funder that made something happen in the world. But I agree that the boundaries feel a little more porous when it's not just two distinct foundations or two distinct funders competing with each other, but multiple funders, bigger fish, smaller fish, or whatever, amplifying the agenda of a separate community that is not even formally affiliated with any of these funders.

[00:25:08] Ben: Yeah. And do you have a sense of what the necessary preconditions are for that level of community to come about? Like, with EA, I think it's maybe under-talked-about how it has, you know, a hundred years of thinking behind it, [00:25:35] of different utilitarian and consequentialist philosophers really working out how to prioritize things.
And so I guess the question is: for creating new, powerful, useful idea machines, what are the bricks that need to be laid as groundwork for them?

[00:26:01] Nadia: Yeah. I mean, you've seen it come out in different sorts of ways. As you said, EA already existed before any major funders came in. First you have its historical roots in utilitarianism, which go way back, but even effective altruism itself started in Oxford and was an academic discipline at its outset. So there was already a seed of something there before major funders came in. But there are other types of idea machines where that community has to be actively nurtured. And [00:26:35] I don't think there's anything wrong with that. I think people tend to underestimate how many communities had a lot of elbow grease put in to get them going. You need to create some initial momentum to build a scene. It's not always just that a handful of people got together and decided to make a thing. I think that's the historical story that gets glorified: we like thinking about a bunch of artists and creatives just hanging out at the same cafe, and then this scene starts to organically form. That's definitely a thing. But in many cases there are funders behind the scenes who are helping make these things happen. There are convenings that are organized; there are individual academics or creatives or writers being funded in order to help bring these sorts of ideas to [00:27:35] the forefront of people's minds.
So yeah, like with starting anything, there's a lot of work that can go on behind the scenes to help these communities even start to exist. But then they start to have these compounding returns for funders, where it's like: okay, now, instead of hiring a couple of program officers for my foundation, I am starting this community of people that is now a beacon for attracting other people I might not have even heard of, who are flocking to this cause. And it's sort of a talent well in itself.

[00:28:08] Ben: Yeah. To change tracks a little bit: with these new waves of potential philanthropists in both the tech world and the crypto world, do you have any sense of risky philanthropic experiments that you would want to see people do? Any kind of wishlist?

[00:28:32] Nadia: I don't know if that's the role that I am trying to play, [00:28:35] necessarily. I mean, personally, the way I think about it is: what are the different components of the public sector, and which areas are being more or less covered right now? We see funders getting more involved in politics and policy. We see funders that are replicating, or trying to field-build in, academia. But media still feels strangely overlooked, or just this big enigma, to me at least, when I think about how funders influence different aspects of the public sector. Well, I don't think it's even necessarily a lack of interest, because I see a lot of...
...you know, again, that sort of tech mindset, and I guess I'm more specifically thinking about tech right now. But going back to tech wanting to break apart institutions, or tech being this antsy teenager railing against the institution: you see a lot [00:29:35] of that, and there's a lot of tension between the tech industry and media right now. So you see that sort of chomping at the bit. But then it's not clear to me what they're doing to replace that. And some of that is just maybe more existential questions about what the future of media should be. Is it this focus on individual media creators, where instead of going to the mainstream newspaper or the mainstream TV network, you're going to Joe Rogan? Let's say that's relevant today, because I just saw Mark Zuckerberg did an interview on Joe Rogan. So is that what the future looks like? Is that the vision of what tech wants media to look like? It's not totally clear to me what the answer is yet, and I also feel like I'm seeing a lack of interest in, and funding towards, that. So that's one area where... and it's sort of unsurprising to me, I guess, that tech is going to be interested in science or [00:30:35] politics, and maybe tech is just not great at thinking about cultural artifacts. But in terms of my personal wishlist, or just areas where I think there are deficiencies on the sort of public-sector checklist, that's one of them.

[00:30:49] Ben: Yeah, no. And I think the important thing is to flag these things, right? Because it's sort of hard to know what the counterfactuals are, but media as a public good does seem kind of underrated as an idea. I don't know.
It's like, I think Sesame Street's really important, and that was publicly funded, right? [00:31:17] Nadia: Mm-hmm. And even education is sort of a weird one. I mean, there's talk about homeschooling. There's talk about how universities aren't, you know, really adequate today. I mean, you have, like, one effort to [00:31:35] build a new university, but it feels, I don't know, I'm still sort of waiting for, like, what are the really big, ambitious efforts that we're gonna see in terms of tech people that are trying to rebuild either, you know, primary, secondary education or higher education? I just, yeah, I don't know. [00:31:53] Ben: Yeah, no, that's a great point. Like, it does not feel like there have been a lot of ambitious experiments there, in terms of, right, like, anything along the lines of building all the public schools in the South, right?  [00:32:06] Nadia: Right. Like at that level. And this actually, I mean, and I think you and I may not agree on this topic, but I do genuinely wonder, you know, at the same time as we're iterating, you have these, you know, cycles of wealth that come in and shape public society in different ways on a broader scale. You also have the, you know, hundred-year institutional cycle, where institutions are built, and then they kind of mature, and then they start to stagnate and die down. What have we learned from the last hundred [00:32:35] years of institution building? Like, maybe we learned that institutions are not as great as they seem, or they inevitably decline. And maybe people are interested in ways to avoid that. In other words, like, you know, do we need to build another CNN in the realm of media?
Or do we need to build another Harvard? Or is maybe the takeaway that institutions themselves are falling out of favor, and the philanthropically funded experiments might not look like the next Harvard, but they're gonna look like some sort of more broken-down version of that. [00:33:05] Ben: Ooh, [00:33:06] Nadia: I don't know. And yeah. Yeah. I don't know. [00:33:10] Ben: Sorry. Go, go ahead. [00:33:11] Nadia: Oh, I was just gonna say, I mean, this is where I feel like history only has limited things to teach us. Right. Because, yeah, the sort of copy-paste answer would be: there used to be better institutions, let's just build new institutions. But I think this is actually where crypto is thinking more critically about this than tech, where crypto says, like, yeah, why are we [00:33:35] just gonna repeat the same mistakes over and over again? Let's just do something completely different. Right. And I think that is maybe part of the source of their disinterest in what legacy institutions are doing, where they're just like, we're not even trying to do that. We're not trying to replicate that. We wanna just rethink that concept entirely. I feel like, yeah, in tech there's still a bit of LARPing around, like, without sort of the critical question of, what did we take away from that? Maybe what we did in the past wasn't so good. [00:34:04] Ben: Yeah, well, I guess my response just is, I think, definitely, that institutions are not functioning as well as they have. I think the question is, what is the conclusion to draw from that? And maybe the conclusion I draw is that we need different, newer, different [00:34:35] institutions. And I feel like there's different levels of implicitness or explicitness of an institution, but broadly, it is some way of coordinating people that lasts through time. Right.
And so, even what people are doing in crypto is, I would argue, building institutions. They just are organized wildly differently than ones we've seen before. [00:35:00] Nadia: Yeah. Yeah. And again, the history is so short in crypto, it's hard to say what exactly anyone is trying to do until maybe we can understand that in retrospect. Yeah, I mean, I don't know. I think there is just some, like, I feel like there's probably some learning from open source, where I spent a lot of my brain space in the past, around, it was just an entirely different type of coordination model from centralized, Coase-y firms. [00:35:34] Ben: Yeah. [00:35:34] Nadia: [00:35:35] And there's some learning there, and crypto is modeling itself much more after open source projects than it is after Coase's theory of the firm. And so I think there's probably some learnings there of, like, yes, they're building things. I don't know. I mean, in the world of open source, a lot of these projects don't last very long. Like, you don't sort of iterate upon existing projects; a lot of times you just build a new project and then eventually try to get people to switch over to that project. So it's these much shorter lifespans. And so I don't know what that looks like in terms of institutional design for the public sector or social institutions, but I just, yeah, I don't know. I think I just sort of wonder what that looks like. And yeah, I do see there are some experiments within the sort of non-crypto tech world as well. Like, I was just thinking about the Institute for Progress, and they're a policy think tank in DC. And I think one of the things that they're doing well is trying to iterate [00:36:35] upon the sort of, you know, existing think tank model.
And, like, one of the things that they acknowledge better than maybe, you know, the stodgy older think tanks is: you go to one of those places, and your brand is the think tank, right? You are an employee of that place and you are representing their brand. Whereas I think my sense, at least with the Institute for Progress, is they've been a little bit more like: you are someone who is an expert already in your domain, you already have your own audience, you're someone who's already widely known, and we're kind of the infrastructure that is supporting you. I don't wanna speak on their behalf; that's sort of the way I've been understanding it. And yeah, I mean, so, you know, even outside of crypto, I think people are still contending with that whole atomization of the firm, et cetera, of, like, how do you balance individual reputation versus firm reputation. And maybe that is where it plays out, like my question about, you know, are you trying to build another media institution, or is it just about supporting lots of individual influencers? But yeah, [00:37:35] just, I wonder, are we sitting here waiting for new institutions to be built and actually there are no more? Maybe institutions, period, are dying, and that's the future. Or, yeah, at the same time, they do provide this sort of history and memory that is useful. So I don't know. [00:37:51] Ben: Yeah, I mean, it sounds to me like, from what you're saying, there's a much more sort of subtle way to look at it, where there's a number of different sort of sliders or spectra, right? Where it's like, how, I don't know, internalized versus externalized the institution is, right? Where it's like, you think of your, like, 1950s company, and people subsume themselves to it. Right. And that's on some end of the spectrum.
And then on the other end of the spectrum, it's like, I don't know, YouTube, right? Where it's like, yeah, YouTubers are all technically YouTubers, but beyond that [00:38:35] they have no coordination or real connection. And that's one axis. And then new institutions could come in, and maybe we're moving towards an era of history where there just is more externalization, but then sort of explicitly acknowledging that and then figuring out how to do a lot of good, and have that sort of institutional memory, given a world where everybody's a brand, [00:39:09] Nadia: Yeah. [00:39:10] Ben: so that it seems like it's, that's not necessarily, like, institutions are dead. It's just, like, institutions live in a different, like, are just structurally different. [00:39:23] Nadia: Yeah. Yeah. Like, I wondered, if we just sort of embrace the fact that maybe we are moving towards having much shorter memories, what does a short-term-memory [00:39:35] institution look like? I dunno, like, maybe that's just sort of where we are, right? You know, like, I try to sort of observe what is happening versus kind of being like, it should be different. And so if that just is what it is, then how do we design for that? I have an idea, and I think that actually gets to part of what crypto is trying to do differently, in saying, okay, this is where we have sort of, like, trustlessness, and where we have the rules that are encoded into a protocol, where, like, you don't need to remember anything; the network is remembering for you. [00:40:03] Ben: Yeah, I'm just thinking, I haven't actually watched it, but do you know the movie Memento, which I [00:40:09] Nadia: Yes, [00:40:10] Ben: a guy who has, yeah, exactly, short-term memory loss and just tattoos all over his body.
So it's like, what is the institutional version of that? I guess, yeah, exactly. That's where the note-taking goes.  [00:40:25] Nadia: Your. [00:40:27] Ben: Yeah, exactly. So, sort of down another separate track, something that I've noticed is, [00:40:35] I guess, how do you think about what is and is not a public good? And I ask this because, in my experience talking to many people in tech, there's sort of this attitude that everything can be made like that, that almost like public goods don't exist. That it's like everything can sort of be done by a for-profit company, and if you can't capture the value of what you're doing, it might not be valuable. [00:41:06] Nadia: Yeah, that's a frustrating one. Yeah, I mean, public goods have a very literal and simple economic definition of being a good that is non-rivalrous and non-excludable. So non-excludable, meaning that you can't prevent anyone from accessing it, and non-rivalrous, meaning that if someone uses the public good, it doesn't diminish someone else's ability to use that public good. And that sort of stands in contrast to private goods and other types of goods. So, you know, there's that definition to start with, but then of course in [00:41:35] real life, real life is much more complex than that. Right. And so I noticed there were, yeah, just a lot of assumptions that get rolled up in that. So, one of the things: open source code, for example. In the book that I wrote, I tried to sort of break apart, like, people think of open source code as a public good, and that's it. Right. And with that carries a bunch of implications around, well, if open source is, you know, freely accessible, it's not excludable, that means that we should not prevent anyone from contributing to it. And that's, you know, then that leads to all these sort of management problems.
And so I kind of try to break that apart and say the consumption of open source code, like the actual code itself, can be a public good that is freely accessible, but then the production of open source, like who actually contributes to an open source community, could be, you know, more like a membership-style community where you do exclude people. That's just, you know, one example that comes to mind of how public goods are not as black and white as they seem. I think another assumption that I see is that public goods have to be funded by government. And government has, again, [00:42:35] you know, especially since the mid-1900s, been kind of the primary provider of public goods, but there are also public goods that are privately funded. Like, you know, roads can be funded through public-private partnerships or privately funded. So just because something is a public good doesn't say anything about how it has to be funded. So yeah, there is just sort of, and then, yeah, as you're saying, within tech, I think, because the vehicle of change in the world that is sort of the defining vehicle for the tech industry is startups, right, it's both understandable why everything gets filtered through that lens of, why is it not a startup? But then, you know, as we both know, that kind of minimizes tech's history. The reason that we even, you know, got to the commercial startup era is because of the years and years of academic and government-funded research that led up to that. So, and then, same with sort of the open source work that I [00:43:35] was doing, which was to say, okay, all these companies that are developing their software products, every single one of these private companies is using open source code. They're relying on this public digital infrastructure to build their software.
So, like, it's not quite as clean cut. Especially, I mean, by some estimates, for, let's say, any private software company, you know, it varies so much between companies, but, like, 70% of their code, certainly a majority of the code that is quote-unquote written, is actually just shared public code. So it's not quite as simple as saying public goods have no place in tech. I think they still have a very, very strong place. [00:44:16] Ben: Yeah, no, and it's also just thinking about, like, sort of the publicness of different things, right? Cuz it's like, there are profitable private schools. Right. And yet, [00:44:35] I think most people would agree that, if all schools were for-profit and private, I mean, yeah, I guess separating that out: even if schools were for-profit and private, it would probably still be a good thing to have government getting money into those schools. Right. Like, even people who don't like public schooling still think that it is worthwhile for the government to give money towards schools. Right. [00:45:12] Nadia: Mm-hmm [00:45:13] Ben: Is that [00:45:14] Nadia: Yeah. And this is a distinction between, for the example of education, it's like, you know, the concept of education might be a public good, but then how education is funded might, you know, happen in different ways, including private. [00:45:27] Ben: Yeah, exactly. And I, yeah. So the concept of education [00:45:35] as a public good.
Yeah, that's a good way of putting it. But I think, I guess, there are more, I guess, fuzzier places where it's less clear to what extent it's a public good. Like, I think infrastructure may be one, where you could imagine a system where everybody who uses, say, a sewer line buys into it, versus having it be publicly funded. And I think research might be another one. [00:46:11] Nadia: I mean, even education, if you go far back enough, right? Like, not everyone went to public schools before. Not everyone got an education. It was seen as something for, like, privileged people to get. It was not something that was just part of the public sector. So yeah, our notions of what the public sector even is, or what's in and out of it, have definitely evolved over the years. [00:46:32] Ben: Yeah, no, that's a really good point. So it's, [00:46:35] it's like, that again is where it's complicated, where it's not just some attribute of the world. Right. It's like some kind of social consensus, [00:46:45] Nadia: Right. [00:46:46] Ben: around public goods. And something I also wanted to talk about is, I know you've been thinking a lot about, like, the sort of relationship between philanthropy and status. And I guess, do you have a sense of, like, why, and it's different for everybody, but why do people do philanthropy now? Like, when you don't have a sort of, a reli, excuse me, a religious mandate to do it. [00:47:21] Nadia: I actually think, yeah, I think this question is more complicated than it seems, because there's so many different types of philanthropists. You know, the old adage of: if you've met one philanthropist, you've met one philanthropist.
And so motivations [00:47:35] are, I mean, there are a lot of different motivations, and also there's some spectrum here that I still kind of lack vocabulary for. But a lot of philanthropy, if you just look by the numbers, a lot of philanthropy is done at the local level, right? Or it's done within a philanthropist's sort of local sphere. Like, we forget, you know, when you think about philanthropy, you think about the biggest billionaires in the world. You think about Bill Gates or Warren Buffett or whatever. But we forget that, you know, there are a lot of people that are wealthy that aren't part of the quote-unquote global elite. Right? So, yeah, one example I like to think about is the Koch family. And so we all know the Koch brothers, but they were not the original philanthropists in their family. Their father was, and, I mean, they had a family foundation, and they just kind of focused on their local area, doing local philanthropy. And it was only with the next generation that they ended up sort of expanding into this more global focus. But, yeah, I mean, there's so much philanthropy that is, so when we say, you know, what are the motivations of a philanthropist, it really [00:48:35] depends on who you're talking about. But I do think one aspect that just gets really under-discussed or underappreciated in philanthropy is the kind of cohort nature of, at least, philanthropy that operates on a more global scale. And I don't mean literally global in the sense of international, I just mean, I don't know what the right term is for this, but, yeah, nonlocal, right? [00:48:59] Ben: Yeah. [00:49:00] Nadia: And yeah, I don't know. That feels unsatisfying too.
I don't really know what the term is, but there is a distinction there, right? But yeah, I think, well, yeah, I don't know what the right term is. But the ways in which, so, you know, why does a philanthropist, I think I have one open question of, what makes a philanthropist convert from kind of the more local focus to some expanded quote-unquote global focus? That's one question. I think when people talk about the motivations of philanthropists, they tend to focus on the individual motivations of that person. So, you [00:49:35] know, the classic answer to why people give philanthropically is always something like altruism and wanting to give back, or it's the, you know, the edgy self-interested model of, you know, people that are motivated by status and wanting to look good. I feel like those answers are just not fully satisfying to me. I think there's this aspect of maybe a more, like, power-relational theory that is maybe under-discussed or underappreciated: if you think about these wealth generations, rather than just individuals who are wealthy, you can see these sort of cohorts of people that all became wealthy in similar sorts of ways. So you have Wall Street wealth, you have tech wealth, you have crypto wealth. And, you know, these are very large buckets, but you can sort of group people together based on: they got wealthy because they had some unique insight that the previous paradigm did not have. And I think [00:50:35] there are these cycles that wealth is moving in, where first you're sort of the outcast, you're working out of your garage, you know, let's use the startup example. No one really cares about you. You're very counterculture.
Then you become sort of more popular, but you're still, like, counterculture for people that are in the know, right? You're showing traction, you're showing promise, whatever. And then there's some explosion into the mainstream. There's sort of this frenzied period where everyone wants to, you know, join a startup or start a startup. And then there's sort of the crash, right? And this mirrors Carlota Perez's Technological Revolutions and Financial Capital, where she talks about how technological innovations influence financial markets. You know, she talks about these sort of cycles that we move in. And then after the sort of crash, there's a backlash, right? There's a reckoning, where the public says, you know, how could we have been misled by these crazy new people or whatever. But that moment is actually the moment in which the new paradigm starts to cement its power and starts to become sort of, you know, the dominant force in the field. It needs to start [00:51:35] switching over and thinking about its public legacy. But I think one learning we can have from looking at startup wealth now: it's sort of interesting that in the last couple years, suddenly a lot of people in tech are starting to think about culture building and institution building and their public legacies. That wasn't true, like, you know, ten years ago. What has actually changed? And I think a lot of that really was influenced by the tech backlash that was experienced in 2016 or so. And so you look at these initiatives now, like, there are multiple examples of philanthropic initiatives that are happening now. And I don't find it satisfying to just say, oh, it's because these individuals want to have a second act in their career, or because they're motivated by status.
Like, I think those are certainly all components of it, but it doesn't really answer the question of why so many people are doing it together right now. Not literally coordinated together, but it's happening independently in a lot of different places. And so I feel like we need some kind of cohort analysis or cohort explanation to say, okay, I actually think this is kind of a defense mechanism, because you have this [00:52:35] clash between a rising new paradigm and the incumbents, and the new paradigm needs to find ways to, you know, wield its influence in the public sector, or else it's just gonna be, you know, regulated out of existence, or they're gonna, you know, be facing this sort of hostile media landscape. They need to learn how to actually put their fingers into that and grapple with that role. But it's this sort of coming of age for a counterculture, where tech is used to sort of being in this safe enclave in Silicon Valley and is now being forced to reckon with the outside world. So that is one answer for me of, why do philanthropists do these things? We can talk about individual motivations for any one person. In my sort of particular area of interest, in trying to understand why tech wealth is doing this, or what crypto wealth will be doing in the future, I find that kind of explanation helpful. [00:53:25] Ben: Yeah. That's, I feel like it has a very Peter Turchin vibe, in the good way, in the sense of identifying [00:53:35] like, I don't think that history is predictive, but I do think that there are patterns that repeat, and, like, I've never heard anybody point out that pattern, but it feels really truthy to me.
I think the really cool thing to do would be, as you dig into this, to sort of set up some kind of bet with yourself on, what are the conditions under which crypto people will start heavily going into philanthropy? Right. Like, [00:54:09] Nadia: Yes, totally. I think about this now. That's why, weirdly, to me, crypto wealth is the specter in the future, but they're not actually in the same boat as what tech wealth is in right now. So I'm almost in a, like, they're not yet really motivated to deal with this stuff, because I think that moment, if I had to make a bet on it, is gonna be the moment when crypto really faces a public [00:54:35] backlash. Because right now I think they're still in the "we're counterculture, but we're cool" kind of moment. And then they had a little bit of this frenzy and the crash, but, yeah, I think it's still. [00:54:44] Ben: for tech, right? Or 2000. [00:54:46] Nadia: Yeah. And even despite, exactly. And despite the, you know, same as in 2001, where people were like, ah, pets.com, you know, it was all a scam. This was all bullshit. Oh, sorry. I dunno if I can say that.  [00:54:57] Ben: Say that. [00:54:57] Nadia: But then, you know, startups had a whole other renaissance after that; it was far from being over. But people still by and large love crypto. And there are the, you know, loud negative people criticizing it, in the same way that people criticized startups in 2001. But by and large, a lot of people are still engaging with it and are interested in it. And so I don't feel like it's hit that public backlash moment yet, the way that startups did in 2016.
So I feel like once it gets to that point, and then kind of the reckoning after that, is the point where crypto wealth will be motivated to act philanthropically in kind of this larger cohort [00:55:35] kind of way. [00:55:36] Ben: Yeah. And I don't think that the time scales will be the same, but I mean, the time scale for that in tech, if we sort of map it onto the 2000 crash, is like, you know, you have like 15 years. So that'd be, like, 2037 is when we need to check back in and see, okay, is this right? [00:55:56] Nadia: It's gonna be faster. So I'm gonna cut that in half or something. I feel like the cycles are getting shorter and moving faster.  [00:56:01] Ben: That definitely feels true. Looking to the future is a good place for us to wrap up. I really appreciate this.
Sep 1, 2022 • 1h 14min

Institutional Experiments with Seemay Chou [Idea Machines #47]

Seemay Chou discusses building a new research organization focusing on under-researched areas of biology, hiring entrepreneurial scientists, studying non-model organisms and ticks, the transition from nonprofit to for-profit, finding the right individuals for Arcadia, and the importance of transparency in scientific research processes.
Aug 2, 2022 • 48min

DARPA and Advanced Manufacturing with William Bonvillian [Idea Machines #46]

A deep dive into DARPA and advanced manufacturing with William Bonvillian, exploring DARPA's innovative model, funding approaches, team collaboration in advanced manufacturing, evolution of research post-World War II, advances in robotics and metal 3D printing, and integration of new technologies in manufacturing.
Jul 2, 2022 • 1h 5min

Philanthropically Funding the Foundation of Fields with Adam Falk [Idea Machines #45]

In this conversation, Adam Falk and I talk about running research programs with impact over long timescales, creating new fields, philanthropic science funding, and so much more.  Adam is the president of the Alfred P. Sloan Foundation,  which was started by the eponymous founder of General Motors and has been funding science and education efforts for almost nine decades.  They’ve funded everything from iPython Notebooks to the Wikimedia foundation to an astronomical survey of the entire sky. If you’re like me, their name is familiar from the acknowledgement part of PBS science shows. Before becoming the president of the Sloan Foundation, Adam was the president of Williams College and a high energy physicist focused on elementary particle physics and quantum field theory. His combined experience in research, academic administration, and philanthropic funding give him a unique and fascinating perspective on the innovation ecosystem. I hope you enjoy this as much as I did.  Links - The Sloan Foundation - Adam Falk on Wikipedia  - Philanthropy and the Future of Science and Technology Highlight Timestamps - How do you measure success in science? [00:01:31] - Thinking about programs on long timescales [00:05:27] -  How does the Sloan Foundation decide which programs to do? [00:08:08] - Sloan's Matter to Life Program [00:12:54] -  How does the Sloan Foundation think about coordination? [00:18:24] -  Finding and incentivizing program directors [00:22:32] - What should academics know about the funding world and what should the funding world know about academics? 
[00:28:03] - Grants and academics as the primary way research happens [00:33:42] - Problems with grants and common grant applications [00:44:49] - Addressing the criticism of philanthropy being inefficient because it lacks market mechanisms [00:47:16] - Engaging with the idea that people who create value should be able to capture that value [00:53:05]   Transcript [00:00:35] In this conversation, Adam Falk and I talk about running research programs with impact over long timescales, creating new fields, philanthropic science funding, and so much more. Adam is the president of the Alfred P. Sloan Foundation, which was started by the eponymous founder of General Motors and has been funding science and education efforts for almost nine decades. They've funded everything from iPython [00:01:35] notebooks to the Wikimedia Foundation to an astronomical survey of the entire sky. If you're like me, their name is familiar from the acknowledgement part of PBS science shows. Before becoming the president of the Sloan Foundation, Adam was the president of Williams College and a high energy physicist focused on elementary particle physics and quantum field theory. His combined experience in research, academic administration, and philanthropic funding gives him a unique and fascinating perspective on the innovation ecosystem. I hope you enjoy this as much as I did. [00:02:06] Ben: Let's start with, like, sort of a really tricky thing that I'm myself always thinking about, which is that, you know, it's really hard to measure success in science, right? Like, you know this better than anybody. And so, at the foundation, how do you think about success? Like, what does success look like? What does the difference between success and failure mean to [00:02:34] Adam: you? [00:02:35] I mean, I think that's a really good question.
And I think it's a mistake to think that there are some magic metrics, that if only you were clever enough to build them out of citations and publications, you could get some fine-tuned measure of success. I mean, obviously if we fund in a scientific area, we're funding investigators who we think are going to have a real impact with their work, individually and then collectively. And so of course, if they're not publishing, it's a failure. We expect them to publish. We expect people to publish in high-impact journals, but we look for broader measures as well if we fund a new area. So for example, a number of years ago, we had a program in the microbiology of the built environment, studying all the microbes that live inside, which turns out to be a very different ecosystem than outside. When we started that program, there were a few investigators interested in this question, and there weren't a lot of tools that were good for studying it. [00:03:35] By 10 years later, when we'd left, there was a journal, there were conferences, there was a community of people who were doing this work. And that was another really tangible measure of success: we had entered a field that needed some support in order to get going, and by the time we got out, it was going strong, and the community of people doing that work had an identity and funding paths and a real future. Yeah. [00:04:01] Ben: So I guess one way that I've been thinking about it is almost like counterfactual impact, right? Where if you hadn't gone in, then it wouldn't be [00:04:12] Adam: there. Yeah. I think that's the way we think about it. Of course, that's hard to measure. But I think that since a lot of the work we fund is not close to technology, we don't have available to ourselves, you know, did we spin out products, did we spin out
companies, did we do a lot of the things that might directly connect that work [00:04:35] to activities outside of the research enterprise, the things that in other fields you can measure impact with? So the impact is pretty internal. That is, for the most part, it's: has there been impact on other parts of science that, again, we think might not have happened if we hadn't funded what we funded? As I said before, have communities grown up? Another interesting measure of impact, from a project that we've funded for about 25 years now, the Sloan Digital Sky Survey, is in papers published, in the following sense. One of the innovations when the Sloan Digital Sky Survey launched was that the data that came out of it, which was all digital for the first time, was shared broadly with the community. This was a survey of the night sky that looked at millions of objects, so these are very large databases. And the investigators who built the [00:05:35] telescope certainly had first crack at analyzing that data. But there was so much richness in the data that the decision was made, at Sloan's urging early on, that this data should be made public after a year. 90% of the publications that came out of the Sloan Digital Sky Survey have not come from collaborators, but have come from people who used that data after it was publicly released. Yeah. So that's another way of seeing impact and success of a project: it reached beyond its own borders. [00:06:02] Ben: And you mentioned that timescale, right? That 25 years. Something that I think is just really cool about the Sloan Foundation is how long you've been around and your capability of thinking on a quarter-century timescale. How do you think about timescales on things? Right.
Because on the one hand, obviously science can take [00:06:35] 25 years; on the other hand, you can't just do nothing for 25 years. [00:06:44] Adam: So if you had told people back in the nineties that the Sloan Digital Sky Survey was going to still be going after a quarter of a century, they probably never would have funded it. So, you know, I think that you have an advantage in the foundation world, as opposed to federal funding, which is that you can have some flexibility about the timescales on which you think. You don't have to simply go from grant to grant, and you're not at the mercy of a Congress that changes its own funding commitments every couple of years. We at the Sloan Foundation tend to think that it takes five years at a minimum to have an impact in any new field that we go into. We just started a new program, Matter to Life, which we can talk about. [00:07:35] That's initially a five-year commitment to put about $10 million a year into this discipline, understanding that if things are going well, we'll re-up for another five years. So we kind of think of that as a decadal program. And I would say the timescale we think on for programs is decades. The timescale we think of for grants is about three years. But a program itself consists of many grants to a large number of investigators, and that's really the timescale where we think you can have an impact. But we're constantly re-evaluating. I would say the timescale for rethinking a program is shorter; that's more like five years. So in our ongoing programs, about every five years, we'll take a step back and do a review.
You know, we'll look at whether we're having an impact with the program, we'll get some outside perspectives on it, and we'll decide whether we need to keep it going exactly as it is, or adjust it in some [00:08:35] interesting ways, or shut it down and move the resources somewhere else. [00:08:39] Ben: I like that you almost have a hierarchy of timescales, right? You have multiple going at once. I think that's underappreciated. One thing I want to ask about, and maybe the Matter to Life program is a good case study in this: how do you decide what programs to do? You could do anything. [00:09:04] Adam: That is a terrific question and a hard one to get right. And we just came out of a process of thinking very deeply about it, so it's a great time to talk about it. Let's do it. So to frame the problem in the largest sense: if we want to start a new grantmaking program where we are going to allocate about $10 million a year over a five-to-ten-year period, which is typical for us, the first thing you realize is that that's not a lot of money on the scale that the federal government [00:09:35] invests. So if your first thought is, well, let's figure out the most interesting science that people are doing, you quickly realize that those are areas where there's already a hundred times that much money going in. I mean, quantum materials would be something that everybody is talking about; the Sloan Foundation putting $10 million a year into quantum materials is not going to change anything interesting. So you start to look for structural reasons that a field, or an emerging field, and I'll talk about what some of those might be, is one where an investment at the scale that we can make can have a real impact. And so what might some of those areas be?
There are fields that are very interdisciplinary in ways that make it hard for individual projects to find a home in the federal funding landscape. One overly simplified but maybe helpful way to think about it is that the federal funding landscape [00:10:35] is organized largely by disciplines: if you look at the NSF, there's a division of chemistry, one of physics, and so forth. But many questions don't map well onto a single discipline. And sometimes questions, such as some of the ones we're exploring in the Matter to Life program, which I can explain more about, require collaborations that are not naturally fundable in any of the silos the federal government has. So very interdisciplinary work is one area. Second is emerging disciplines. And again, that often couples to interdisciplinary work, in that disciplines often emerge in interesting ways at the boundaries of other disciplines. Sometimes the subject matter is the boundary. Sometimes it's a situation where techniques developed in one discipline are migrating to being used in another discipline. That often happens with physics: the [00:11:35] physicists figure out how to do something, like grab the end of a molecule and move it around with a laser, and suddenly the biologists realize that's a super interesting thing for them, and they would like to do that. So then there's work that's at the boundary of those disciplines. A third is scale issues, where work needs to happen at a certain scale that is too big for a single investigator, but too small to qualify for the kind of big-project funding that you have in the federal government. And of course you could also certainly find things that are not funded because they're not very interesting.
And those are not the ones we want to fund, but you often have to sift through quite a bit of that to find something. So that's what you're looking for. Now, the way you look for it is not that you sit in a conference room and get real smart and think that you're going to see [00:12:35] things other people aren't going to see. Rather, you source it out in the field. And so we had an 18-month process in which we invited proposals for what you could do in a program at that scale from major research universities around the country. We had more than a hundred ideas. We had external panels of experts who evaluated these ideas. And that's what led us in the end to this particular framing of the new program that we're starting. And that process was enough to convince us that this was interesting, that it was emergent as a field, that it was hard to fund in other ways, and that the people doing the work are truly extraordinary. Yeah. And that's what you're looking for. And I think in some ways there are pieces of that in all of the programs, particularly the research programs. [00:13:29] Ben: And so, could you describe the Matter to Life program and [00:13:35] highlight how it fits into all of those buckets? [00:13:38] Adam: Absolutely. So the Matter to Life program is an investigation into the principles, particularly the physical principles, that matter uses in order to organize itself into living systems. The first distinction to make is that this is not a program about how life evolved on Earth; it's actually meant to be a broader question than how life on Earth is organized. The idea behind it is that life on Earth is a particular example of some larger phenomenon, which is life in general. And I'm not going to define life for you. That is, we know what things are living and we know things that aren't living, and there's a boundary in between.
And part of the purpose of this program is to explore that. Think of it as being out there in the field, mapmaking: over here is, you [00:14:35] know, a block of ice, and that's not alive; and over here is a frog, and that's alive; and there's all sorts of intermediate space in between. And there are interesting ideas out there, for example, at the cellular level: how is information communicated around a cell? What might the role of things like non-equilibrium thermodynamics be? Can systems that are non-biological be induced to evolve in interesting ways? And so we're studying both biotic and abiotic systems. There are three strands in this. One is building life. It was said by, I think, Feynman that if you can't build something, you don't understand it. And so the idea, and there are people who want to build an actual cell, I think that's a hard thing to do, but we have people who are building little biomolecular machines in the laboratory and understanding how that might [00:15:35] work. We fund people who are constructing protocells, thinking about ways that liquids separating might provide divisions between inside and outside, within which chemical reactions could take place. We've funded people who have made tiny little, you know, micron-scale magnets, where you mix them together and you can get them to organize themselves in interesting ways, and you ask what are the ways in which emergent behaviors couple into this. So that's building life: can you build systems that have features that feel essential to life, and by doing that, learn something general about, say, the reproduction of DNA, or something simple about how inside gets differentiated from outside?
The second strand is principles of life, and that's a little bit more around: are [00:16:35] there physics principles that govern the organization of life? Are there ways in which the kinds of thinking that informed thermodynamics, which is the study of piles of gas and liquid and so forth, that kind of thinking about bulk properties and emergent behavior, can tell us something about the difference between matter that's alive and matter that's not alive? And the third strand is signs of life. You know, we have all of these telescopes out there that have now discovered thousands of exoplanets, and of course the thing we all want to know is: is there life on them? We're never going to go to them, or maybe if we go, we'll never come back. And yet we can look and see the chemical composition of these planets; we're just starting to be able to see that. As they transit in front of a star, the atmospheres of these planets absorb light from the star, [00:17:35] and the light that's absorbed tells you something about the chemical composition of the atmosphere. So there's a really interesting chemical question: are there elements of the chemical composition of an atmosphere that would tell you that life is present there, life in general? If you're going to look for DNA or something, that might be way too narrow a thing to look for. So we've made a very interesting grant to a collaboration that is trying to understand the general properties of atmospheres of rocky planets. If you knew all of the things that an atmosphere of an Earth-like planet might look like, and then you saw something that isn't one of those, you'd think, well, something else might have done that. Yeah. So that's a bit of a flavor. What I'd say about the nature of the research is that it is, as you can tell, highly interdisciplinary. Yeah. Right.
So this last project I mentioned requires geoscience and astrophysics and chemistry and geochemistry and volcanology and ocean science. [00:18:35] And who's going to fund that? Yeah. Right. It's also a very emerging area, because it comes at the boundary between geoscience, the understanding of what's going on on Earth, and absolutely cutting-edge astrophysics, the ability to look out into the cosmos and see other planets. People working at that boundary, that's where interesting things often happen. [00:18:59] Ben: And you mentioned that when you're looking at programs, you're looking for things that are sort of bigger than a single PI. How do you think about the different projects, the individual projects within a program, becoming greater than the sum of their parts? There's one end of the spectrum where you just say, go do your things, and everybody runs off. And then there's another end of the spectrum where you very explicitly tell people who should be working on what and [00:19:35] how to collaborate. So how do you, [00:19:37] Adam: So one of the wonderful things about being at a foundation is you have a convening power. Yeah. In part because you're giving away money, people will want to come gather when you say let's come together, you know? And in part because you just have a way of operating that's a bit independent. And so the issue you're raising is a very important one. In an individual, say, science grantmaking program, we will fund a lot of individual projects, which may be single investigators or may be big collaborations, but we also are thinking from the beginning about how to help create a field. Right. And it may not always be obvious how that's going to work.
I think with Matter to Life, we're early on, and we're not sure: is this a single field? Are there subfields here? But we're already thinking about how to bring our PIs together to share the work they're doing and share perspectives. I can give you another example from a program we recently [00:20:35] closed, which was the chemistry of the indoor environment, which we funded coming out of our work on the microbiology indoors. It turns out that there's also very interesting chemistry going on indoors, which is different from the environmental chemistry that we think about outdoors. Indoors there are people and all the stuff that they exude, and there's an enormous number of surfaces, so surface chemistry is really important. And again, there were people who were doing this work in isolation, interested in these kinds of topics, and we were funding them individually. But once we had funded a whole community of people doing it, they decided it would be really interesting to do a project, which they called HOMEChem, where they went to a test house, did all sorts of indoor activities, like cooking Thanksgiving dinner, and studied the chemistry together. And this is an amazing collaboration. So many of our grantees came together in one [00:21:35] place, around one experiment or one experimental environment, and did work that could really speak to each other. They'd done experiments that were similar enough that the people who were studying one aspect of the chemistry and another could do so in a more coherent way. And I think that never would have happened without the Sloan Foundation having funded this chemistry of indoor environments program, both because of the critical mass we created, but also because of the community of scholars that we helped foster.
[00:22:07] Ben: So you're playing a very important role, but then it's sort of bottom-up; it's almost like saying, oh, you people all actually belong together, and then they look around and go, oh yeah, [00:22:24] Adam: we do. I think that's exactly right. And you don't want to be too directive, because, you know, we're just a foundation. We've got some program directors, and [00:22:35] we do know some things about the science we're funding, but the real expertise lives with these researchers who do this work every day. Right. So when we think we can see some things that they can't, it's not going to be in the individual details of the work they're doing. But maybe from up here on the 22nd floor of Rockefeller Center, we can see the landscape a little bit better, and we're in a position to make connections that will then be fruitful. If we were right, they'll be fruitful because the people on the ground doing the work, with the expertise, believe that they're fruitful. Sometimes we make a connection and it's not fruitful, in that it doesn't fruit, and that's fine too. We're not always right about everything either, but we have an opportunity to do that which comes from the particular and special place that we happen to sit. Yeah. [00:23:28] Ben: Yeah. And speaking of program directors, how do you think about, I mean, [00:23:35] you're sort of in charge, so how do you think about directing them? How do you think about setting up incentives so that they do good work on their programs? How much autonomy do you give them? How does all of that work? [00:23:56] Adam: Absolutely. So I spent most of my career in universities and colleges.
My own background is as a theoretical physicist, and I spent quite a bit of time as a dean and a college president. And I think the key to being a successful academic administrator is understanding, deep in your bones, that the faculty are the intellectual heart and soul of the institution, and that you will have a great institution if you hire terrific faculty and support them. They don't require a lot of telling what to do, but the [00:24:35] leadership role does require a lot of deciding where to allocate the resources, and figuring out how, and in what ways, and at what times you can be helpful to them. Yeah. The program directors at the Sloan Foundation are very much like the faculty of a university. We have six right now; it's five PhDs and a Rhodes Scholar. And each of them is a truly, deeply respected intellectual leader in the field in which they're making grants. My job is first off to hire and retain a terrific group of program directors who know way more about the things they're doing than I do, and then to help them figure out how to craft their programs. And there are different kinds of help that different program directors need. Sometimes they just need resources. Sometimes they need a collaborative conversation. [00:25:35] Sometimes we talk about the ways in which their individual programs are going to fit together into the larger programs at the Sloan Foundation. Sometimes we talk about ways in which we can and should, or shouldn't, change what we do in order to build a collaboration elsewhere. But I don't do much directing of the work of the program directors, just like I didn't ever do much directing of the work that the faculty did.
And I think what keeps a program director engaged at a place like the Sloan Foundation is the opportunity to be a leader. Yeah. [00:26:10] Ben: Actually, to double-click on that, on hiring program directors: I would imagine that it is sometimes tough to get really, really good program directors, because people who would make good program directors could probably have their pick of [00:26:35] amazing roles. They do get to be a leader, but to some extent they're not directly running a lab, right? They don't have that direct power. And they're not making as much money as they could be, you know, working at Google or something. So how do you both find, and then convince, people to come do that? [00:26:57] Adam: So that's a great question. I mean, the people who are meant to be program directors at a place like the Sloan Foundation, and different foundations work differently, are not usually people who would otherwise rather be spending their time in the lab. Yeah. Many of them have spent time as serious scholars in one discipline or another, but much like faculty who move into administration, they've come to a point in their careers, whether that was earlier or later, [00:27:35] where the larger scope that's afforded by being a program director compensates for the fact that they can't focus in the same way on a particular problem, the way a faculty member or a researcher does. Yes.
And in the case of a program director, what you're doing is finding grantees. When a grantee does something really exciting, we celebrate that here at the foundation as a success of the foundation. Not that we're trying to claim their success, but because that's what we're trying to do: we're trying to find people who can do great things and give them the resources to do those great things. So you have to get a great kind of professional satisfaction from that. So there are people who have a [00:28:35] broader view, or want to move into a time in their careers when they can take that broader view, about a field or an area that they already feel passionate about, and who have the disposition that wanting to help people is deeply rewarding to them. And you say, how do you find these folks? It's just like it's hard to find people who are really good at academic administration: you have to look really hard for people who are going to be great at this work, and you persuade them to do it precisely because they happen to be people who want to do this kind of work. Yeah. [00:29:09] Ben: And you're highlighting a lot of parallels between academic administration and your role now. But at the same time, I think there are many things that academics don't understand about [00:29:35] science funding and that world, and there are many things that science funders don't understand about research, and you're one of the few people who've done both. So I guess, as a very open-ended question: what do you wish that more academics understood about the funding world and the things you have to think about here? And what do you wish more people in the funding world understood about research?
Yeah. [00:29:54] Adam: That is great. So I can give you a couple of things. At a high level, I always wish that on both sides of that divide there was a deeper understanding of the constraints under which people on the other side are operating. And those are both material constraints and what I might call intellectual constraints. So there's a parallelism here. If I first speak from the point of view of a foundation president: what do I wish that academics really understood? I'm always having to reinforce to people that we really do mean it when we say we fund X and we don't fund Y. [00:30:35] And please don't spend time trying to persuade me that the Z that you do really is close enough to X that we should fund it, and get offended when I tell you that's not what we fund. We say no to a lot of things that are intrinsically great, but that we're not funding because it's not what we fund. Yeah. We make choices about what to fund, and what areas to fund in, that are very specific, so that we can have some impact, and we don't make those decisions lightly. For almost any work someone is doing, we're not the only foundation who might fund it. So if you don't fit our program, move on to someone else rather than arguing with us, and just understand why it is that we do that. Right. I come across that a lot. There's a total parallel, which I think is very important for people in foundations who have very strong ideas about what they should fund to understand: academics are not going to drop what they're doing and start doing something else because there's a
And if, you know, if some foundation comes to you and says, well, stop doing that and do this, I'll find it. You know why maybe that's, you're pretty desperate. You're not going to do that. So the best program directors spend a lot of time looking for people who already are interested in the thing that the foundation is funding, right? And really underst understand that you can't bribe people into doing something that they, that they, that they otherwise wouldn't do. And so I think those are very parallel. I mean, to both to understand the set of commitments that people are operating under, I would say the other thing that I think it's really important for foundations to understand about about universities is and other institutions is that these institutions. Are not just platforms [00:32:35] on which one can do a project, right? They are institutions that require support on their own. And somebody has to pay the debt service on the building and take out the garbage and cut the grass and clean the building and, you know hire the secretaries and do all of the kind of infrastructure work that makes it possible for a foundation such as Sloan to give somebody $338,000 to hire some postdocs and do some interesting experiments, but somebody is still turning on the lights and overhead goes to the overhead is really important and the overhead is not some kind of profit that universities are taking. It is the money they need in order to operate in ways that make it possible to do the grants. And. You know, there's a longer story here. I mean, even foundations like Sloan don't pay the full overhead and we can do that because [00:33:35] we typically are a very small part of the funding stream. But during the pandemic, we raised our overhead permanently from the 15% we used to pay to the 20% that we pay now, precisely because we've, we felt it was important to signal our support for the institutions. And some of those aren't universities, some of those are nonprofits, right? 
That other kinds of nonprofits that we're housing, the activities that we were interested in funding. And I just think it's really important for foundations to understand that. And I do think that my own time as a Dean at a college president, when I needed that overhead in order to turn on the lights, so some chemist could hire the post-docs has made me particularly sensitive [00:34:16] Ben: to that. Yeah, no, that's, that's a really good. Totally that I don't think about enough. So, so, so I really appreciate that. And I think sort of implicit implicit in our conversation has been two sort of core things. One, is that the way that you [00:34:35] fund work is through grants and two, is that the, the primary people doing the research are academics and I guess it just, w let's say, w w what is, what's the actual question there it's like, is it like, do you, do you think that that is the best way of doing it? Have you like explored other ways? Because it, it, it feels like those are sort of both you know, it's like has been the way that people have done it for a long time. [00:35:04] Adam: So there's, there's two answers to that question. The first is just to acknowledge that the Sloan foundation. Probably 50 out of the $90 million a year in grants we make are for research. And almost all of that research is done at universities, I think primarily because we're really funding basic research and that's where basic research has done. If we were funding other kinds of research, a lot of use inspired research research that was closer to kind of technology. We would be, you might be [00:35:35] funding people who worked in different spaces, but the kind of work we fund that's really where it's done. 
But we have another significant part of the foundation that funds things that aren't quite research: the public understanding of science and technology; diversity, equity, and inclusion in STEM higher ed. Of course, much of that is money that goes into universities, but also into other institutions that are trying to bring about cultural change in the sciences, badly needed cultural change. And then our technology program, which looks at all sorts of modern technologies that support scholarship, such as software and scholarly communication, but has increasingly come to support modes of collaboration and other kinds of more social-science aspects of how people do research. And a lot of that funding is not being given to universities. A lot of that funding is given to other sorts of institutions, nonprofits always, because we're a [00:36:35] foundation, we can only fund nonprofits, but ones that go beyond the kind of institutional space that universities occupy. You know, we're not driven by a sense of who we should fund followed by what we should fund. We're interested in funding problems and questions, and then we look to see who it is that is doing that work. So in public understanding, some of that's in the universities, but most of it isn't. [00:37:00] Ben: Actually, to go back, one thing that I wanted to ask about: if you're primarily wanting to find people who are already doing the sort of work that is within scope of a program, it almost raises a chicken-and-egg problem. Like, what if there's an area where people really should be doing work, but nobody is doing that work [00:37:35] because there is no funding to do that work, right? Like, this is just something that I've struggled with. So how do you sort of bootstrap the thing? Yes.
[00:37:46] Adam: I mean, I think that the way to think about it is that you work incrementally. And I think you're quite right: in some sense, we are looking for areas that are underinhabited, scientifically, because people aren't supporting that work. And that's another way of saying what I said at the beginning about how we're looking for maybe interdisciplinary fields that are hard to support. One way you can tell that they're hard to support is that there isn't support, people aren't doing it. But typically you're working in from the edges, right? There are people on the boundaries of those spaces chomping at the bit, right? And when you say, you know, what is the work you can't do that you would do if you had some funding, and tell [00:38:35] us why it's super interesting, that's the question you're asking. And that's kind of the question that drives what we talked about before, which is how do you identify a new area. But to your point precisely, it's not the area where everybody already is, because there's already a lot of money there, right? So I would say, you know, if you really had to bootstrap it out in the vacuum, you would have to have insights that we don't pretend to have. You'd have to have this ability to kind of look out into the vacuum of space and conjure something that should be there, and then conjure who should do it, and have the resources to start the whole thing. That's not the Sloan Foundation; we don't operate at that scale. But there's another version of that, which is more incremental, and recognizes the exciting ideas of researchers who are adjacent to an underfunded field, the excitement that they have to go into a new [00:39:35] area that's just adjacent to where they are, and being responsive to that. [00:39:39] Ben: No, and that sort of ties back, in my mind, to why you need to do programs on that ten-year timescale, right?
Like, you know, it's like the first three years you go a little bit in, the next three years you do a little bit in, and by the end of the 10 years, then you're actually in that new area. [00:39:59] Adam: No, I think that's exactly right. And the other thing is you can, you know, be more risky or more speculative. I like the word speculative better than risky. Risky makes it sound like you don't know what you're doing. Speculative is meant to say you don't know where you're going to go. So I don't ever think the grants we're funding are particularly risky in the sense that the projects will fail. They're speculative in the sense that you don't know if they're going to lead somewhere really interesting. And this is where the current funding landscape, really the federal funding landscape, is really challenging, because [00:40:35] the competition for funding is so high that you really need to be able to guarantee success, which doesn't just mean guarantee that your project will work, but that it will, you know, contribute in some really meaningful way to moving the field forward, which means that you actually have to have done half the project already. That's what's called preliminary data. As far as I'm concerned, preliminary data means I already did it, and now I'm just going to clean it up with this grant. And that is a terrible constraint, and we're not bound by that kind of constraint in funding things. So we can have failures that are failures in the sense that something didn't turn out to be as interesting as we hoped it would be. Yeah. [00:41:17] Ben: I love your point on the risk. I dunno, I think that, especially with science, right, it's like, what is the risk? Like, you're going to discover something. You might discover that, you know, the phenomenon we thought was a [00:41:35] phenomenon is not really there. Right.
But it's still, it's not risky because you weren't, like, investing for an ROI. [00:41:43] Adam: Can I give you another example? I think it's a really good one. In the Matter-to-Life program, we made a grant to a guy named David Baker at the University of Washington. And, you know, David Baker builds these little nanoscale machines, and he has an enormous institute for doing this. It's extraordinarily exciting work, and almost all of the work that he is able to do is directed toward applications, particularly biomedical applications. Totally understandable. There's a lot of money there. There's a lot of need there. Everybody wants to live forever. I don't, but everybody else seems to want to. So why would we think that we should fund him, with all of the money that's in the Institute for Protein Engineering, which I think is what it's called? It's because we actually funded him to do some basic science:[00:42:35] to build machines that don't have an application, but to learn something about the kinds of machines and the kinds of machinery inside cells, by building something that doesn't have an application but has an interesting basic-science component to it. And that actually has a real impact. It was a terrific grant for us, because there's all of this architecture that's already been built, but a new direction that he can go with his colleagues that, for all of the funding he has, he can't pursue under the umbrella of kind of biomedicine. And so that's another way in which things can be more speculative, right? It's speculative in that he doesn't know where it's going. He doesn't know the application it's going to. And so even for him, that's a lot harder to do unless something like Sloan steps in and says, well, this is more speculative. It's certainly not risky.
I don't think it's risky to fund David Baker to do anything, but it's speculative about where this particular [00:43:35] project is going to lead. [00:43:36] Ben: Yeah, no, I like that. It's just, like, more speculation. And, slight tangent, but you mentioned that, you know, Sloan operates at a certain scale. Do you ever team up with other philanthropies? Is that a thing? [00:43:51] Adam: Yeah, we do, and we love co-funding. We've done that in many of our programs. In the technology program, we co-funded with the Moore Foundation on data science. We have a tabletop physics program, which I haven't talked about, but it's basically measuring, you know, fundamental properties of the electron in a laboratory the size of this office, rather than a laboratory the size of, you know, the Jura Mountains, like CERN. And there it was a partnership actually with the National Science Foundation and also with the Moore Foundation. We have, in our energy and environment program, partnered with the Research Corporation, which runs this fascinating program called Scialog, where they bring young investigators out to Tucson, Arizona, or onto Zoom lately, but [00:44:35] basically out to Tucson, Arizona, and mix them up together around an interesting problem for a few days, and then fund small kind of pilot projects out of that. We've worked with them on negative emissions science and on battery technologies. Really interesting science projects. And so we come in as a co-funder with them there. I think, to do that, you really need an alignment of interests. Yeah. You really both have to be interested in the same thing. And you have to be a little bit flexible about the ways in which you evaluate proposals and put together grants and so forth, so that you don't drive the PIs crazy by having them satisfy two foundations at the same time. But where that is productive, that can be really exciting.
[00:45:24] Ben: Cause it seems like, I'm sure you're familiar with, like, the Common Application for college. It just seems like, I mean, one of my biggest [00:45:35] criticisms of grants in general is that, you know, you sort of need to be sending them everywhere. And there's the well-known issue where, you know, PIs spend some ridiculous proportion of their time writing grants. And if there were sort of a philanthropic network where a proposal just got routed to the right people and a lot happened behind the scenes, that seems like it could be really powerful. Yeah. [00:46:03] Adam: I think that actually would be another level of kind of collective collaboration, like the common app. I love the idea. I have to say it's probably hard to make it happen, for a couple of reasons that don't make it a bad idea, but are just kind of what planet Earth is like. You know, one is that we have these very specific programs, and so almost any grant has to be a little bit re-engineered, because the programs are so specific, in order to fit into a new foundation's [00:46:35] program. And the second is, we are certainly, at the Sloan Foundation, very finicky about what review looks like. And various foundations have different processes for assuring quality, and the hardest work I find in a collaboration is aligning those processes, because we get very attached to them. It's a little like the tenure review processes at universities. Every single university has its own, right? They have their own tenure process, and they think that it was crafted by Moses on Mount Sinai and can never be changed, that it's the best that it possibly ever could be. And then you go to another institution, the thing is different, and they feel the same way. That is a feature, or really a bug, of the foundation world, but it's kind of part of the reality.
And we certainly... what we really need in order for there to be more collaboration, I strongly feel, is for everyone to adopt the Sloan Foundation grant proposal guidelines and review practices. Then all this collaboration stuff would be a piece of cake.[00:47:35] It's like, [00:47:35] Ben: like standards anywhere, right? Where it's like, oh, of course I'm willing to use the standard. It just has to be mine exactly. [00:47:41] Adam: We have a standard, we're done. If you would just recognize that ours is better, this would be so much simpler. It's like the way you make a good marriage work. [00:47:51] Ben: And speaking of foundations and philanthropic funding more generally, one of the criticisms that gets leveled against foundations, especially in Silicon Valley, is that because there's sort of no market mechanism driving the process, you know, it can be inefficient and all of that. And I personally don't think that market mechanisms are good for everything, but I'd be interested in just, like, your response to that. [00:48:23] Adam: Yeah. So let me broaden that criticism, because I think there's something there that's really important. The enormous discretion that [00:48:35] foundations have is both their greatest strength and, I think, their greatest danger. That is, you know, there is not a discipline that is forcing them to make certain sets of choices in a certain structure, right? And whether that's markets, or whether you think of it more generally as some other kind of disciplining force, I shouldn't say too much freedom, but I would say a lot of freedom can lead to decision-making that is idiosyncratic and inconsistent and inconstant, right?
A more direct way to say it is that if no one constrains what you do, and you just do what you feel like, maybe what you feel like isn't the best guide for what you should do. And you need to be governed by a context which assures strategic [00:49:35] consistency, strategic alignment with what is going on at other places in ways that serve the field, a commitment to quality, other kinds of commitments that make sure that your work is having high impact as a funder. And those don't come from the outside, right? And so you have to come up with ways internally to assure that you keep yourself on the straight and narrow. Yeah. I think there's a similar consideration, which goes beyond science funding and philanthropy, about the necessity of doing philanthropic work for the public good. Yeah. Right. And I think that's a powerful ethical commitment that we have to have. The money that we have from the Sloan Foundation, or that the Ford Foundation or the Rockefeller Foundation have... I didn't make that money. What's more, Alfred P. Sloan, who left us this money, made the money in a context in which lots of people did a lot of work [00:50:35] who don't have that money, right? A lot of people working at General Motors plants. And, you know, he made that money in a society that supported the accumulation of that fortune, and it's all tax-free. So the federal government is subsidizing this implicitly. The society is subsidizing the work we do, because it's tax-exempt. So that imposes on us, I think, an obligation to develop a coherent idea of what using our funding for the public good means. And not every foundation is going to have that same definition, but we have an obligation to develop that sense in a thoughtful way, and then to follow it. And that is one of the governors on simply following our whims, right?
So we think about that a lot here at the Sloan Foundation: the ways in which our funding is justifiable as having a positive good [00:51:35] that, you know, attaches to the science we fund, or to society in general. And if we don't see that, we think really hard about whether we want to do that grantmaking. Yeah. [00:51:47] Ben: So it's like, I think about things in terms of systems engineering, and so it's like you sort of have these self-imposed feedback loops. Yes. While it's not an external market giving you that feedback, you can still sort of set up these loops so that. [00:52:09] Adam: So my colleague, one of the program directors here, Evan Michelson, has written an entire book on science philanthropy, and on applying a certain framework that's been developed and largely used in Europe, but is also known here in the States. It's called responsible research and innovation, and it provides a particular framework for asking these kinds of questions about who you fund and how you fund, what sorts of funding you do, what [00:52:35] sorts of communities you fund into, and how you would think about doing that in a responsible way. And it's not a book that provides answers, but it's a book that provides a framework for thinking about the questions. And I think that's really important. And, as I say, I'm just going to say it again: I think we have an ethical imperative to apply that kind of lens to the work we do. We don't have an ethical imperative to come up with any particular answer, but we have an ethical imperative to do the thinking. And I recommend Evan's book to all. [00:53:06] Ben: I will read it. Recommendation accepted. And I think broadly, and this is just something that, I mean, sort of selfishly, but I also think, like, there's a lot of people who have made a lot of money, especially in technology.
And it's interesting because you could think of Alfred P. Sloan and Rockefeller and [00:53:35] Carnegie as these people who made a lot of money and then started these foundations. But you don't see as much of that now, right? Like, you have some, but really the sentiment that I've engaged with a lot is that, again, sort of prioritizing market mechanisms, an implicit idea that anything valuable should be able to capture that value. And I don't know, it's just, like, how do you, like, have you talked to people about that? [00:54:08] Adam: Yeah, I think that's a really interesting observation. And I think it's something we think about a lot: the differences in the ways that today's, you know, newly wealthy businesspeople, particularly the tech entrepreneurs, think about philanthropy as it relates to the way that they made their money. So if we look at Alfred [00:54:35] P. Sloan, he basically built General Motors, right? He was a brilliant young engineer who manufactured the best ball bearings in the country for about 20 years, which mattered because, in the nascent automobile industry, as you can imagine, reducing friction is incredibly important, and ball bearings were incredibly important, and he made the best ball bearings, right? That is real nuts and bolts; nothing sexy about ball bearings, right? The perspective you get on auto manufacturing is that the little parts need to work really well in order for the whole thing to work. And he built a big, complicated institution. General Motors is the case study in American business about how you build a large business that has kind of semi-autonomous parts as a way of getting to scale, right? How do you get General Motors to scale?
You have, you know, Chevy, and you have Buick, and you have [00:55:35] Pontiac, and you have Olds, and you have Cadillac and GMC, and all, you know... He was a relentlessly practical and institutional thinker, right, across a big institution. And the big question for him was: how do I create stable institutional structures that allow individual people to exercise judgment and intelligence, so they can drive their parts of that thing forward? So he didn't believe that people were cogs in some machine, but he believed that the structure of the machine needed to enable the flourishing of the individual. And that's how he built General Motors. That does not describe the structure of a tech startup, right? Those are move fast and break things, right? That is the mantra there. You have an idea, you build it quickly, you don't worry about all the things, you get to scale as fast as you can with as little structure as you can. You [00:56:35] don't worry about the collateral damage, or frankly much about the people that are kind of maybe the collateral damage. You just get to scale and follow your kind of single-minded vision. And people can build some amazing institutions that way. I mean, I think it's been very successful, right, for building over the last decades, you know, this incredible tech economy. So I don't fault people for thinking about their business that way. But when you turn that thinking to funding science, there's a real mismatch, I think, between that way of thinking about institutions, that institutions don't matter, the old ones are broken, and new ones can be created immediately, right, and the fact that real research, while it often requires individual leaps forward and acts of brilliance, requires a longstanding, functioning community.
It [00:57:35] requires institutions to fund that research, to host that research. The best research is actually done by people who are engaged, at various points of very long, decades-long careers, in doing a certain thing; it takes a long time to build expertise, and even as brilliant as you are, you need people around you with expertise and experience. There's a real mismatch. And so there can be a reluctance to fund: a reluctance to have the commitment to timescales, or a reluctance to invest in institutions. There has developed, I think, a sense that we should fund projects rather than people and institutions. And that's really good for solving certain kinds of problems, but it's actually a real challenge for basic research and moving basic research forward. So I think there's a lot of opportunity to educate people, and these are super smart people in the tech sector, right, about the [00:58:35] differences between universities, which are very important institutions in all of this, and tech startups. And they really are different sorts of institutions. So I think that's a challenge for us in this sector right now. [00:58:48] Ben: What I'd like to do is tease apart why this is different. Like, why can't you just put in more nights on your research and, like, come out with the brilliant insight faster? [00:59:01] Adam: Yeah. I mean, these are people who are already working pretty hard, I would say. I mean, you know this really well: science has different parts that work on different sorts of problems. And, you know, there are problems where there's a much more immediate goal of producing a technology that would be usable and applicable, and those require organizing efforts in different ways.
And, you know, as you well know, the national labs and, you know, [00:59:35] the private laboratories like Bell Labs and Xerox's labs and so forth played a really important role in doing basic research that was really inspired by a particular application. And they were in the ecosystem in a somewhat different way than the basic research done in the universities. You need both of them. And so it's not that, if everybody only funded science the way the Sloan Foundation funds science, that would be good; it would not, right? But the big money that's coming out of the newly wealthy has the opportunity to have a really positive impact on basic science, but only if it can be deployed in ways that are consistent with the way that basic science is done. And I think that requires some education. [01:00:22] Ben: And sort of speaking of institutions, as I know you're aware, there's sort of this weird Cambrian explosion of people trying stuff. And I guess, in addition [01:00:35] to just your thoughts on that, I'm interested particularly in whether you see gaps that people aren't trying to fill, but that you would sort of want to shine spotlights on, just from your overview position. [01:00:52] Adam: I mean, that's a great question. I'm not going to be able to give you any interesting insight into what we need to do. I am in great favor of trying lots of things. I mean, I love what's going on right now, that people are trying different experiments about how to fund science. I have a couple of thoughts, though. I do think that most of them will fail, because in a Cambrian explosion, most things fail, right? And that's fine; if they all succeeded, people aren't trying interesting enough things.
I think that there is a danger in too much reinventing the wheel. And one of the things I notice is [01:01:35] that some of the new organizations, many of them, are set up as kind of hybrid organizations: they do some funding, but they also want to do some advocacy; they're not 501(c)(3)s; they maybe want to monetize the thing that they're doing. And I think, you know, if you want to set up a Bell Labs, set up a Bell Labs. There aren't magic bullets, some magic hybrid organization that's going to span research all the way from basic to products, right, and that is going to mysteriously solve the problem of plugging all of the holes in the kind of research ecosystem. And so I think it's great that people are trying a lot of different things. I hope that people are also willing to invest in the sorts of institutions we already have, and that there is kind of a balance. There's [01:02:35] a little bit of a language that you start to hear that kind of runs down the existing institutions, that takes a perspective that everything is broken in the way we're doing things now. And I don't think that everything is broken in the way we do things now. I don't think that the entire research institution needs to be reinvented. I think interesting ideas should be tried, right? There's a distinction between those two things. And I would hate to see the money disproportionately going into inventing new things. Yeah, I don't know what the right balance is, and I don't have a global picture of how it's all distributed. I would like to see both of those things happening. But I worry a little bit that if we get a narrative that the tech billionaires all start to buy into, that the system is broken and they shouldn't invest in it...
then I think it will be broken, and we'll [01:03:35] miss a great opportunity to do really great things, right? I mean, what Carnegie and Rockefeller left behind were great institutions that have persisted long after Carnegie and Rockefeller were long gone, and in forms that Carnegie and Rockefeller could never have imagined. And I would like that to be the aspiration and the outcome of the newly wealthy tech billionaires: the idea that you might leave something behind that, 50 or a hundred years from now, you don't recognize, but that's doing good, right, long past your own ability to direct it. And that requires a long-term sense of your investment in society, your trust in other people to carry something on after you. To think more institutionally, and less about what's wrong with institutions, I think would be a [01:04:35] helpful corrective to much of the narrative that I see there. And that is not inconsistent with trying exciting new things. It really isn't, and I'm all in favor of that. But the system we have has actually produced more technological progress than any other system at any other point in history, by a factor that is absolutely incalculable. So we can't be doing everything wrong. [01:04:58] Ben: I think that is a perfect place to stop. Adam, thanks for being part of Idea Machines. And now a quick word from our sponsors. Is getting into orbit a drag? Are you tired of the noise from rockets? Well, now with Zipple, the award-winning space elevator company, you can get a subscription service for only $1,200 a month. Just go to zipple.com/ideamachines for 20% off your first two months. That's zipple.com/ideamachines.
May 30, 2022 • 57min

Managing Mathematics with Semon Rezchikov [Idea Machines #44]

In this conversation, Semon Rezchikov and I talk about what other disciplines can learn from mathematics, creating and cultivating collaborations, working at different levels of abstraction, and a lot more! Semon is currently a postdoc in mathematics at Harvard where he specializes in symplectic geometry. He has an amazing ability to go up and down the ladder of abstraction — doing extremely hardcore math while at the same time paying attention to *how* he’s doing that work and the broader institutional structures that it fits into. Semon is worth listening to both because he has great ideas and also because in many ways, academic mathematics feels like it stands apart from other disciplines. Not just because of the subject matter, but because it has managed to buck many of the trends that other fields experienced over the course of the 20th century.   Links Semon's Website Transcript [00:00:35] Welcome back to Idea Machines. Before we get started, I'm going to do two quick pieces of housekeeping. I realized that my updates have been a little bit erratic. My excuse is that I've been working on my own idea machine. That being said, I've gotten enough feedback that people do get something out of the podcast, and I have enough fun doing it, that I am going to try to commit to a once-a-month cadence, probably releasing on the first or [00:01:35] second day of the month. Second thing is that I want to start doing more experiments with the podcast. I don't hear enough experiments in podcasting, and I'm in this sort of unique position where I don't really care about revenue or listener numbers. I don't actually look at them, and I don't make any revenue. So with that in mind, I want to try some stuff. The podcast will continue to be a long-form conversation; that won't change. But I do want to figure out if there are ways to mix it up. Maybe something like fake commercials for lesser-known scientific concepts, or micro interviews.
If you have ideas, send them to me in an email or on Twitter. So that's the housekeeping. In this conversation, Semon Rezchikov and I talk about what other disciplines can learn from mathematics, creating and cultivating collaborations, and working at different levels of abstraction. Semon is currently a postdoc in mathematics at Harvard, where he specializes in symplectic geometry. He has an amazing ability to go up and down the ladder of [00:02:35] abstraction, doing extremely hardcore math while at the same time paying attention to how he's doing the work and the broader institutional structures that affect it. He's worth listening to both because he has great ideas, and also because in many ways academic mathematics feels like it stands apart from other disciplines, not just because of the subject matter, but because it has managed to buck many of the trends that other fields experienced over the course of the 20th century. So it's worth sort of poking at why that happened, and perhaps how other fields might be able to replicate some of the healthier parts of mathematics. So without further ado, here's our conversation. [00:03:16] Ben: I want to start with the notion that I think most people have that the way that mathematicians go about a) working on things and b) thinking about how to work on things, like what to work on, is that you like go in a room, and you maybe read some papers, and you think really hard, and then [00:03:35] you find some problem. And then you like spend some number of years at a blackboard, and then you come up with a solution. But apparently that's not how it actually works. [00:03:49] Semon: Okay. I don't think that's a complete description. So definitely people spend time in front of blackboards. I think the typical length of a project can definitely vary between disciplines, and, yeah, within mathematics. But also, on the other hand, it's hard to define what is a single project.
There might be a single intellectual arc through which several papers are produced, where you don't even quite know the end of the project when you start. And so, you know, two years on a single project is probably a significant project for many people, because that's just a lot of time. But it's true that even a graduate student might spend several years working on at least a single larger set of ideas, because the community does have enough [00:04:35] stability to allow for that. But it's not entirely true that people work alone. I think these days mathematics is pretty collaborative. In the end, when you're doing math, you probably are making a lot of stuff up, and sort of doing self-consistency checks through the formal algebra, through the technique of proof, which helps you stay sane. But when other people can think about the same objects from a different perspective, usually things go faster, and at the very least it helps you decide which parts of the mathematical ideas are really solid. So often people work with collaborators, or there might be a community of people who are talking about some set of ideas, and maybe they're misunderstanding one another a little bit, and then they're biting off pieces of a sort of collectively imagined [00:05:35] mathematical construct to make real on their own or with smaller groups of people. So all of those happen. [00:05:40] Ben: And how do these collaborations come about, and how do you structure them? [00:05:44] Semon: That's a great question. So I think there are probably several different models; I can tell you some that I've run across. So sometimes there are conferences, and then people might start something there.
So recently I was at a conference and I went out to dinner with a few people, and after dinner we were talking about some of our recent work and trying to understand where it might go. And somebody was like, oh, you know, I didn't get to ask you any questions; here's something I've always wanted to know from you. And they were like, oh yes, this is how this should work, but here's something I don't know. And then somehow we realized that there was some very reasonable guess as to what the answer to something that needed to be known would be. So I guess now we're writing a paper together; [00:06:35] hopefully that guess works. So that's one way to start a collaboration: you go out to a fancy dinner and afterwards you're like, hey, I guess we maybe solved a problem. There are other ways. Sometimes two people might just realize they're confused about the same thing. I have a collaboration like that: we come from somewhat different technical backgrounds, we both realized we were confused about a related set of ideas, and we were like, okay, well, I guess maybe we can try to get unconfused together.  [00:07:00] Ben: Can I interject? I think that actually realizing that you are confused about the same problem as someone who's coming at it from a different direction is hard in and of itself. Yes. Yes. What is actually the process of realizing that the problem that both of you have is in fact the same problem?  [00:07:28] Semon: Well, you probably have to understand a little bit about the other person's work, and you probably have to in some [00:07:35] way have some baseline amount of rapport with the other person first, because you're not going to get yourself to engage with this different foreign language unless you like them to some degree. So that's actually a crucial thing: the personal aspect of it.
Then, you know, maybe you kind of like this person's work, and maybe you like the way they go about it; that's interesting to you. Then you can try to talk about what you've recently been thinking about, and the same mathematical object might pop up. And that might be... you know, truly any mathematical object worth studying usually has incarnations in different formal languages, which are related to one another through highly non-obvious transformations. So for example, everyone knows about a circle. But a circle: you could think of it as the set of points at distance one, or you could think of it as some sort of closed knot. There are many different concrete [00:08:35] intuitions through which you can grapple with this sort of object. And usually if that's true, that tells you that it's an interesting object. If a mathematical object only exists because of a technicality, it maybe isn't so interesting. So that's why it's possible to notice that the same object occurs in two different people's misunderstandings. [00:08:53] Ben: Yeah. But I think the cruxy thing for me is that at the end of the day, it's a really human process. There's not a way of sort of colliding what both of you know without hanging out.  [00:09:11] Semon: So people can try to communicate what they know through texts. So people write reviews. I gave a few talks recently, and a number of people have asked me to write a review of this subject. There's no subject, just to be clear; I kind of gave a talk with the impression that there is a subject to be worked on, but nobody's really done any work on it, so you're [00:09:35] willing this subject into existence. That's definitely part of your job as an academic.
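Semon's circle example can be made concrete. As a rough sketch (the particular incarnations below are my gloss, not a list from the conversation), the same object lives in several formal languages:

```latex
% The circle in several of its incarnations:
\begin{align*}
  S^1 &= \{(x,y) \in \mathbb{R}^2 : x^2 + y^2 = 1\}
      && \text{(metric: points at distance 1 from the origin)} \\
  S^1 &\cong \mathbb{R}/\mathbb{Z}
      && \text{(quotient: the real line with } x \sim x + 1 \text{)} \\
  S^1 &\cong U(1) = \{ z \in \mathbb{C} : |z| = 1 \}
      && \text{(Lie group: unit complex numbers under multiplication)} \\
  S^1 &\hookrightarrow \mathbb{R}^3
      && \text{(knot theory: the unknot, a closed loop in space)}
\end{align*}
```

Passing between these descriptions is exactly the kind of non-obvious translation he describes: each incarnation makes a different set of questions easy to ask.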
But, you know, that's one way of explaining, and I think it can be a little bit less one-on-one, less personal. A different version of that is that people write problem statements: "I think these are interesting problems." So there are all these famous lists of conjectures in any given discipline. Usually when people decide, oh, there's an interesting mathematical area to be developed, at some point they have a conference and somebody writes down a list of problems. And the conditions for these problems are that they should kind of matter, they should help you understand the larger structure of this area, and the problems to solve should be precise enough that you don't need some very complex motivation to be able to engage with them. So that's part of, I think, the trick in mathematics: different people have very different internal understandings of something, but you reduce the statements or [00:10:35] the problems or the theorems, ideally, down to something that you don't need a huge superstructure in order to engage with, because then people with different techniques or perspectives can engage with the same thing. So that depersonalizes it. That's a deliberate tactic, I think.  [00:10:51] Ben: And do you think that mathematics is unique in its ability to have those clean problem statements? I get the sense that it's almost higher status in mathematics to just declare problems, whereas it feels like in other disciplines, one, the problems are much more implicit: anybody in some specialization has an idea of what they are, but they're very rarely made explicit.
And then, two, pointing out [00:11:35] problems is fairly low status, unless you simultaneously point out the problem and then solve it. Do you think there's a cultural difference?  [00:11:45] Semon: Potentially. So, yeah, anyone can make conjectures, but usually if you make a conjecture, it's either wrong or uninteresting: it's true, but the resulting proof is boring. So to get anyone to listen to you when you state problems, you need to have a certain amount of credibility. Simultaneously, you know, maybe if you have a cell, it's clear: okay, you don't understand the cell, you don't understand what's in it, it's a blob that does magic, and the problem is to understand the magic. In math, you can't see the thing, right? So in some sense, defining problems is part of that. It's very similar to somebody showing somebody: look, here's a protein. Oh, interesting. That's a very [00:12:35] similar process. And I do think that pointing out "here's a protein that we don't understand, and you didn't know about the existence of this protein" can be fairly high-status work in, say, biology. So that might be a better analogy.  [00:12:46] Ben: Yeah, no, I like that a lot: math does not have, you could almost say, the substrate, the context of reality.  [00:12:56] Semon: I mean, it's there, right? It's just that you have to know what to look for in order to see it. So, right, number theorists love examples like this. You know, everybody knows about the natural numbers, but they just love pointing out: here's this crazy pattern. You would never think of this pattern, because you don't have this overarching perspective on it that they have developed over a few thousand years.  [00:13:22] Ben: Has number theory really been around for a few thousand years?  [00:13:25] Semon: It's pretty old. Yeah.
[00:13:27] Ben: What would you, [00:13:30] and this is just curiosity, what would [00:13:32] you call the first [00:13:35] instance of number theory in history?  [00:13:38] Semon: I'm not really sure; I'm not a historian in that sense. I mean, certainly Pell's equation is related to all kinds of problems in, I think, Greece or something. I don't exactly know when the Chinese remainder theorem is from; I'm just not a historian, unfortunately. But I do think the basics are very old. I mean, the square root of two is a very old thing, right? The irrationality of the square root of two is really ancient. So number theory must predate that by quite a bit, because that's a very sophisticated question.  [00:14:13] Ben: Okay. Yeah. So then, going back to collaborations: I think a surprising thing that you've told me about in the past is that in collaborations in mathematics, people have different specializations, in the sense that the collaborations are not just completely flat, with everybody just sort of [00:14:35] stabbing at a problem, and that you've actually had pretty interesting collaboration structures.  [00:14:43] Semon: Yeah. So I think different people are naturally drawn to different kinds of thinking, and so they naturally develop different thinking styles. So some people, for example, are very interested in different parts of mathematics, like analysis or algebra or technical questions in topology or whatnot, and some people just happen to know certain techniques better than others. That's one axis on which you could classify people. A different axis is a question of taste: what they think is important. So some people want to have a very rich, formal structure.
Other people want to have a very concrete, intuitive structure, and those lead to very different questions. That's something I've had to navigate recently, where there's a group of people who are mathematical physicists, and they like a very rich formal structure, and there are other [00:15:35] people who do geometric analysis, kind of geometric objects defined by partial differential equations, and they want something very concrete. And there are relations between the questions in the two areas. So I've spent some time trying to think about how one can profitably move from one to the other, and that forces you to navigate a certain kind of tension. So maybe you have different axes for classifying people. Here's one: there's the frogs-and-birds dichotomy. And this is a real, very strong phenomenon in mathematics. [00:16:09] Ben: That was originally Dyson? [00:16:11] Semon: Maybe, I'm not sure, but it's certainly a very helpful framework. I think some people really want to take a single problem and kind of stab at it. Other people want to see the big picture and how everything fits. And both of these types of work can be useful or useless depending on the flavor of the way the person approaches it. So often collaborations have one person who's obviously more [00:16:35] birdlike and one who's more froglike, and that can be very productive.  [00:16:40] Ben: Let's dig into that a little bit. What are the success and failure modes of birds, and the success and failure modes of frogs?  [00:16:54] Semon: Great, good question. I feel like this is somehow very clearly known.
So what frogs fail at is they can get stuck on a technical problem which does not matter to the larger universe. And so in the long run, they can spend a lot of work resolving technical issues which then turn out not to matter for progress. What they can do is discover something that is not obvious from any larger superstructure, by directly [00:17:35] engaging with the lower-level details of mathematical reality. So they can show the birds something the birds could never see. And simultaneously, they often have a lot of technical capacity, so there might be some hard problem which no large perspective can help you solve, you just have to actually understand that problem, and they can remove the problem. And that can open up a new world. That's the frog. The birds have the opposite success and failure modes. The success mode is that they point out: oh, here's something you could have done that was easier; here's a missing piece in the puzzle. And then it turns out that's the easy way to go. So mathematical physicists have a history of being birds in this way, where they point out: well, you guys were studying this equation to study the topology of four-manifolds; instead, you should study a different equation, which is much easier and will tell you all this. And the reasoning for this is sort of incomprehensible to mathematicians, but it made it much easier to solve a lot of problems. That's kind of the [00:18:35] ultimate bird success. The failure mode is that you spend a lot of time piecing things together, but then you only work on problems which make sense from this huge perspective.
And those problems end up being uninteresting to everyone else, and you end up being trapped by the elaborate complexity of your own perspective. So you start working on something abstruse: you're computing some quantity which is interesting only if you understand this vast picture, and it doesn't really shed light on anything that's simple for people to understand. That's usually not good. If you develop a new formal world, maybe it's fine to work on it on its own terms, but it is in the end partially validated by solving problems that other people could ask without any of this larger understanding. [00:19:26] Ben: Yeah. Like you can actually be too general, almost. [00:19:31] Semon: That's very often a [00:19:35] problem. So one bit of mathematics that is popular among non-mathematicians, for interesting reasons, is category theory. I know a lot of computer scientists are familiar with category theory because it's been applied to programming languages fairly successfully. Now, category theory is extremely general. The mathematical joke description of it is that it's "abstract nonsense." That's a technical term: "proof by abstract nonsense." There are a number of interesting technical terms, like "morally true" and "proof by abstract nonsense" and so forth, which have, I think, interesting connotations. So a proof by abstract nonsense is: you have some concrete question you want to answer, and you realize that its answer follows from the categorical structure of the question. If you fit this question into the [00:20:35] framework of categories, there's a very general theorem in category theory which implies what you wanted. What that tells you, in some sense, is this:
Your question was not interesting, because it really wasn't a question about the concrete objects you were looking at at all; it was a question about relations between relations. So there's this other phrase, that the purpose of category theory is to make the trivial trivially trivial. And this is very useful, because it lets you skip over the boring stuff, and the boring stuff is something you could actually get stuck on for a very long time; it can have a lot of content. So category theory in mathematics is, on one hand, extremely useful, and on the other hand can be viewed with a certain amount of suspicion, because people can start working on very abstruse categorical constructions, some more complicated than the ones that appear in programming languages, which most mathematicians can't make heads or tails of. And some of those [00:21:35] are not necessarily developed in a way that can be made relevant to the rest of mathematics. So there is a natural tension that anyone who is interested in category theory has to navigate: how far do you go into the land of abstract nonsense? So even as mathematicians are viewed as the abstract-nonsense people by most others, even within mathematics this hierarchy continues; it's fractal. The hierarchy is preserved for the same reasons.  [00:22:02] Ben: That actually goes back to, I think, what you mentioned when you were talking about the failure mode of frogs: that they can end up working on things that ultimately don't matter. And I want to poke at how you think about what things matter and don't matter in mathematics, because I think about this a lot in the context of technologies. People always think technology needs to be useful, [00:22:35] like, to some end consumer. But then:
You often need to do some useless things in order to eventually build a useful thing. But in mathematics, the concept of usefulness, in the sense of "I'm going to use this for a thing in the world," is not the metric. Yet there are still things that matter and don't matter. So how do you think about that? [00:23:01] Semon: So it's definitely not true that people decide which mathematics matters based on its applicability to real-world concerns. That might be true in applied mathematics, actually, inasmuch as there's a distinction; it's sort of a distinction of values and judgment. But in pure mathematics... so I said that a mathematical object is more real, in some sense, when it can be viewed from many perspectives. So there are certain objects which many different kinds of mathematicians can grapple with, and there are certain questions which any mathematician can [00:23:35] understand, and that is one of the ways in which people decide that mathematics is important. So for example, here's a question which I would think is important. I'm just going to say something technical, but I can explain what it means: understand statements about the representation theory of the fundamental group of a surface. Okay, so what that means is: if you have any loop in a surface, then you can assign to that loop a matrix. And the conditions for this assignment are that if you compose two loops, by going around one after the other, then you assign to that composed loop the product of the two matrices; and if you deform a loop, then the matrix you assign is preserved under the deformation. Okay. So that's the sort of question: can you classify these things? Can you understand them?
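Written out in standard notation (mine, not Semon's words), the assignment he describes is a representation of the fundamental group:

```latex
% A representation of the fundamental group of a surface \Sigma:
\rho \colon \pi_1(\Sigma) \longrightarrow GL_n(\mathbb{C})
% Composing loops (traversing one after the other) goes to
% multiplying the assigned matrices:
\rho(\gamma_1 \cdot \gamma_2) \;=\; \rho(\gamma_1)\,\rho(\gamma_2)
% and \rho(\gamma) is unchanged when \gamma is deformed: it depends
% only on the homotopy class of the loop, which is exactly what an
% element of \pi_1(\Sigma) records.
```

The classification question he poses is then: what are all such $\rho$, up to a change of basis?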
They turn out to be relevant to differential equations, to partial differential equations of all different kinds, to physics, to topology; the question has very broad reach. So progress on that is obviously important, because it turns out to be connected to other questions in all of mathematics. So that's one perspective, kind of the simplest: the questions that any mathematician would find interesting, because they can understand them and they're like, oh yeah, that's nice. That's one way of measuring importance. And a different one is about the sort of narrative. You know, mathematicians just spend a lot of time making sure that all the mathematics is, in practice, connected with the rest of it, and there are all these big narratives which tie it together. Those narratives often tell us things that go far beyond what we can prove. So we know a lot more about numbers than we can prove; in some sense, we have much more evidence. So, maybe one example: the Riemann hypothesis is important, and we have much more evidence for the Riemann hypothesis, in some sense, than we have for [00:25:35] any physical belief about our world. And it's not just important because it's some basic question; it's important because it's a keystone in some much larger narrative about the statistics of many kinds of number-theoretic questions. So there are other questions which might sound abstruse and are not so simple to state, but which would clarify a piece of this larger conceptual understanding, with all these conjectures and heuristics and so forth. Making a heuristic rigorous can be very valuable, and that statement might be extremely complex, but it tells you whether this larger understanding of how you generate all the heuristics is correct or not. And that is important.
There's also surprise. So people might have questions where they expect the answer to be something, and then you show it's not that; that's important, if there are strong expectations. It's not that easy to form expectations in mathematics, but...  [00:26:30] Ben: As you were saying, there are these narrative arcs. [00:26:35] So, to do something that is both correct and defies the narrative? [00:26:39] Semon: That's interesting; that means there must be something there. Or maybe not: maybe it's only because there was some technicality, and the technicality doesn't enlighten the rest of the narrative. So that's some sort of balance which people argue about, and it's determined in the end, I guess, socially, but also through the production of results and theorems and mathematical experiments and so forth. [00:27:04] Ben: So I'm going to yank us back to collaborations. In the past we've talked about how you actually do program management around these collaborations, and I got the impression that mathematics actually has pretty good norms for this. [00:27:29] Semon: What do you mean by program management? [00:27:31] Ben: Like, [00:27:35] how you're basically managing your collaborators; you were talking about how you need to wrangle people. Just how to manage your collaborators. [00:27:51] Semon: So I guess...  [00:27:54] Ben: We were developing a bit of a theory on that.  [00:27:56] Semon: Yeah, I think a little bit. So on one hand, in mathematics... well, in the sciences, there's usually somebody with money, and then they kind of determine what happens.
[00:28:08] Ben: Is this a funder, or is this like a...  [00:28:10] Semon: A PI, usually. So yeah, in the sciences, maybe the model is funding agencies, PIs, and lab members, right? And often the PIs are setting the direction. The grant people are essentially putting constraints on what's possible, so they steer the direction in some much larger way, but they can't really see the ground at all. And [00:28:35] then a bunch of creative work happens at the lowest level, but you're very constrained by what's possible in your lab. In mathematics, there aren't really labs. There are certainly places where people know more than other places about certain parts of mathematics, so it's hard to do certain kinds of mathematics without people around you who know something, because most of the mathematics isn't written down.  [00:28:58] Ben: That statement is shocking in and of itself.  [00:29:01] Semon: It's actually similar in the sciences, right? Most things people know about the natural world aren't really that well documented. That's why it sometimes pays to be lower down the chain: you might find something that isn't known. But because of that, people can work very independently and even misunderstand one another, which is good, because the misunderstanding can then lead to creative developments, where people, through different tastes, might find different aspects of the same problem interesting, and the whole thing is then better that way. [00:29:34] Ben: And then [00:29:35] resolving the confusion in a legible way...  [00:29:40] Semon: sort of pushes the field forward. But also, because everyone can work on their own, coordination involves a certain amount of narrative alignment.
And so you have to understand: oh, this person is naturally suited to this kind of question, and this person is naturally suited to that kind of question. So what are questions where, first of all, you would need both people to make progress? That gives you competitive advantage, which is extremely important in any scientific landscape. And secondly, if you can find a question of overlap, then there's some natural division of labor, or some natural way in which both people can enlighten the other in surprising ways. If you can do everything yourself and you just have some other person write it up, that's sort of not that fun of a collaboration. So that's [00:30:35] one kind, the single-project collaboration. To do larger collaboration, you have to essentially assign social value to questions. Math is small enough that it can just barely survive running its credit-assignment system almost entirely on the basis of the social network of mathematicians. Oh, interesting. Okay. It is certainly important to have papers refereed, because it's important for somebody to read a paper and check the details, so the journals do matter, but a lot happens through the social network. So it doesn't have the same scaling problems that biology or machine learning have, in part because it's small.  [00:31:20] Ben: Do you know roughly how many mathematicians there are?  [00:31:25] Semon: I could look this up. I mean, it depends on who you count as a mathematician, so that's really the question you're asking me. The reason I say [00:31:35] that is because of course there's the American Mathematical Society, and they publish "this is the number of mathematicians," and the thing is, they count quite a lot of people. So that decision actually dramatically changes your answer.
I would say there are on the order of tens of thousands of mathematicians, if you think about the number of attendees of the ICM, the International Congress of Mathematicians. It depends on, like, pure mathematicians, and how pure; the number will go up and down. But that's the right order of magnitude. Okay. Which is very small... [00:32:12] Ben: Compared to most other disciplines, especially compared even to science as a whole, research as a whole. [00:32:20] Semon: Yeah. So I think if you look at, say, Harvard's business school, which has an MBA program that my impression is is serious, [00:32:35] and you also look at all the math PhD graduates from, say, the top 15 US schools, I think the MBA classes are several times larger. [00:32:50] Ben: I was surprised to learn that. That's also a good way to...  [00:32:51] Semon: You can look at the output rate, the flow rate; that's a very easy way to decide. Yeah. So, depending on how social you are: if you want to work with people, you have to find them. You can't really be a PI in mathematics, but if you are good at talking to people, you can encourage people to work on certain questions, so that over time a larger set of questions gets answered. And you can also make public statements, which are in some ways invitations: if you do these [00:33:35] things, then it'll be better for you, because they fit into a larger context and therefore your work is more significant. So you're actually doing people a service by explaining some larger context.
And simultaneously, by pointing out to some people that maybe some problem is easy, or comparatively easy, one that you might not do yourself: that helps you if they then solve the problem, because you made a correct prediction that there was good mathematics there. So this is some complicated social game. Mathematicians are kind of strange socially, but they do play this game, and the way in which they play it depends on their personal preferences and how social they are. [00:34:13] Ben: And actually, speaking of the social nature of mathematics, I get the impression that mathematics as a discipline feels much closer to what one might think of as old academia than many other disciplines do, in the sense that my impression is [00:34:35] that your tenure isn't as much based on how much grant money you're getting in, and it's not quite as much a paper-mill, up-or-out game.  [00:34:46] Semon: Yeah, there's definitely pressure to publish. The expected publishing rate definitely depends on the area. So, you know, probability publishes more; in some ways it's a little bit more like applied mathematics, which has more of a paper-mill quality to it. I don't want to overstate that. But there is space for people to write just a few papers, if they're good, and still get a job. And it's definitely true, as I think in the rest of the sciences, that high quality trumps quantity, modulo the fact that you do have to produce a certain amount of work in order to stay in academia. And in the end, where you end up is very much determined by the significance of your work, and being consistently productive certainly helps; people are kind of not as [00:35:35] worried.
But yeah, it's definitely not determined based on grant money, because essentially there's not that much grant money to go around. So that gives it more of this old-school flavor. And it's genuinely not strange for people to graduate from a PhD program with just their thesis, and they can do very well, so long as during grad school they learn something that other people don't know and that matters. That's helpful. And that allows for, yeah, this weird trick that mathematicians play, where proofs are supposedly a universal language that everyone can read. That's not quite true, but it tries to approximate that ideal. Everyone is sort of allowed to go on their own little journey, and the community does spend a lot of work trying to defend that. [00:36:25] Ben: What does that work  [00:36:27] Semon: actually look like? Well, it is actually true that grad students are not required to publish a paper a year. Yeah, [00:36:35] that's true. And people, I think, do defend that kind of position; they are willing to put their reputation on the line in the larger hiring process to defend that. Separately, it's true that work that is not coming out of one of the top three people or something can still be considered legitimate, because a proof is a proof. No one can disagree with it. So if some random person makes some progress, it's actually very quickly accepted, if people can understand it. And this allows communities to work without quite understanding one another for a while and maybe make progress that way, which can be  [00:37:18] Ben: helpful. Yeah. And most of the funding for math departments actually comes from teaching. 
Is that  [00:37:26] Semon: Yeah, I think a lot of it comes from teaching. A certain chunk of it comes from grants; basically people use grants in order to teach less. That's more or [00:37:35] less how it works. Of course, mathematics has this kind of patronage phenomenon where rich individuals fund a department or something, or they fund a prize. But by and large, it seems to be less dependent on these gigantic institutional handouts from, say, the NSF or the NIH, because the expenses aren't as big. But it does also mean that it is sort of constrained. It can't, you know... big biology has kind of so much money, maybe not enough, not as much as it needs. I mean, those grant acceptance rates are extremely low.  [00:38:13] Ben: If, for some reason, every mathematician magically had, say, an order of magnitude more funding  [00:38:21] Semon: Yeah. So it's not clear that they would know what to do with that. I've thought a lot about the question of to what degree mathematics is some kind of social enterprise, and that's maybe true of every research [00:38:35] program, but it's particularly true in mathematics because it's so dependent on individual creativity. So I've thought a lot about to what degree you could scale the social enterprise, and in what directions it could scale, because it's true that producing mathematicians is essentially an expensive and ad hoc process. But at the same time, it's plausibly true that people might be able to do research of a somewhat different kind, in terms of collaborations or in terms of what they felt free to do research on, if they had access to a different kind of funding. Math itself is cheap, but the kind of freedom to say, okay, these next two years I'm going to do this kind of crazy different thing. 
And that does not have to fit with my existing research program... that, you have to sort of fight for. And that's a more basic structural thing about kind of math academia. I feel like  [00:39:27] Ben: that's structurally baked into almost the entire world, where it's [00:39:35] very hard to do something completely different than the things that you have done. Right? People are both more inclined to help you do things like what you've done in the past, and inclined to push against you doing different things. Yeah,  [00:39:50] Semon: that's true.  [00:39:50] Ben: And, sort of speaking of money, in the past you've also pointed out that math is terrible at capturing the value that it creates.  [00:40:02] Semon: Well, yeah. You know, math is... I mean, it may be hard to estimate kind of human capital value. Like maybe all mathematicians should be doing something else; I don't really know how to reason about that. But it's definitely objectively very cheap, just in the sense that all the funding that goes into mathematics is very little, and arguably the  [00:40:21] Ben: sort of downstream... basically every technical anything we have is to some extent downstream of mathematics.  [00:40:32] Semon: There is an argument to be made of that kind. You know, [00:40:35] I don't think one should over... there are extreme versions of this argument which I think are maybe not helpful for thinking about the world. Like you shouldn't think, ah yes, computer science is downstream of, like, this Turing thing. I don't really know that it's fair to say that. But it is true that whenever mathematicians produce something that's more pragmatically useful for other people, it tends to be easy to replicate and it tends to be very robust. 
So there are lots of other ideas of this kind. And, separately, a bunch of the value of mathematics to the larger world seems to me to not even be about specific mathematical discoveries, but to be about the existence of this larger language and culture. So, you know, neural network people now have all of these equivariant neural networks. Yeah. That's all very old mathematics. But it's very helpful to have that stuff; you need to have those kinds of ideas be completely explored [00:41:35] before a totally different community can really engage with them. And that kind of underlying cultural substrate actually does allow for different kinds of things, because doing that exploration takes a few people a lot of time. So in that sense, it's very hard to... yeah. Most mathematicians do things which will have no relevance to the larger world, although they may be necessary for the progress of the more useful basal things. Like the idea of a manifold came out of studying elliptic functions historically, and manifolds are a very useful idea. And elliptic functions are... or something. I mean, they're also useful, but maybe less well known. Certainly I think a typical scientist does not know about them. Yeah. But it did come out of studying transformation laws for elliptic functions, which is a pretty abstruse-sounding thing. So because of that, it's very hard to find a way for mathematicians to kind of dip into the future. It's not like you can have a startup. You know, it's not going to be industrially useful, but it is [00:42:35] clearly on this sort of path, in a way that it's very hard to imagine removing it completely. Yeah.  
[00:42:42] Ben: So, no, I like it also because it's again sort of this extreme example of some kind of continuum, where everybody knows that math is really important, but then everybody also knows that it's not  [00:43:02] Semon: immediately applicable. Yeah. And there's this question of how you make the navigation of that continuum smoother, and that's a cultural issue and an institutional issue to some degree. You know, it's probably true that mathematicians do know lots of stuff; empirically, they get hired and their lives are fine. So it seems that people recognize that. But, in part because mathematicians try to preserve this sort of space for [00:43:35] people to explore, there is a lot of resistance in the pure mathematics community to people trying random stuff and collaborating with people. And there is probably some niche for interactions between mathematically minded people and things which are more relevant to the contemporary world or near-contemporary world. And that niche is one where the navigation is a little bit obscure. There are some institutions around it, but it doesn't seem to me to be completely systematized. And that's in part because of the resistance of the pure mathematics community. Historically, it's true that statistics departments kind of used to be part of pure mathematics departments, and then they got kicked out; probably they left and they were like, we can make more money than you. No, seriously, I don't know. The Berkeley stats department is famously one of the first ones to have this. I don't know the detailed history, but there was definitely some kind of conflict, and it was a cultural conflict. Yeah. 
So these sorts of cultural [00:44:35] issues are things that I guess anyone has a say in, and I'm kind of very curious how they will evolve in the coming 50 years. Yeah.  [00:44:42] Ben: To change the subject just a bit again... can you dig into... do you call them retreats? Like the thing where you get a bunch of mathematicians and you get them to all live in a place  [00:44:56] Semon: for a while. So there are a couple of interesting things of this kind. There are research programs: that's where some institute flies together postdocs, maybe some grad students, maybe some senior faculty, and they all spend time in one area for a couple of months in order to maybe make progress on some kind of idea or question. So, yeah, that is something there are dedicated institutes for. In some sense, this is one of the places where external [00:45:35] funding has changed the structure of mathematics, because the Institute for Advanced Study is basically one of these things. Yes, this institute at Princeton where basically a few old people... I mean, I'm kind of joking, but there are a few kind of totemic people, people who have gone there because they did something famous, and they sit there. And what the Institute for Advanced Study actually does in mathematics is it has these semester-long or year-long programs, which are just housing and funding for a bunch of people to spend a year there, or half a year there, or to fly in for a few weeks a few times in the year. 
And that gets everyone together in one area, and maybe by interacting they can figure out what's going on in some theoretical question. A different thing that people have done, much more short-term, is there's a kind of interesting conference format, which reminds me a little bit of unconferences or whatnot, but is actually kind of very serious, where people choose, you know, a hot topic in [00:46:35] kind of contemporary research, and then they rent out a giant house, and they have, I don't know, 20 people live in this house and maybe cook together and stuff. And then it's like a week-long learning seminar, where there are some people who are real experts in the area, and a bunch of people who don't know that much but would like to learn, and everyone has to give a talk on subjects that they don't know. And the serious people, the older people, can go and point out if there is a confusion. So there are talks from nine to five, and it's pretty exhausting, and then afterwards everyone goes on a hike or sits in the hot tub and talks about life and mathematics. And that can be extremely productive and very fun. And it's also extremely cheap, because it's much cheaper to rent out a giant house than it is to rent out a bunch of hotel rooms. So, if you're willing to do that, which most mathematicians are. And a story...  [00:47:25] Ben: like, I don't know if I'm misremembering this, but I remember you telling me a story where there were two people who needed [00:47:35] to figure something out together, and they never would have done it except for the fact that they just were sitting at dinner together every night for some number of nights. [00:47:45] Semon: I mean, there are definitely apocryphal stories of that kind, where eventually people realize that they're talking about the same thing. 
I can't think of an example right now. I think I told you... you asked me, is there an example of a research program where it's clear that some major advance happened because two people were in the same area? And I gave an example which is a very contemporary example, far outside of my area of expertise, which is this Peter Scholze and Laurent Fargues kind of local geometric Langlands stuff, where basically, at one of these institutes in Berkeley, they had a program, and these two people were there. And Scholze is a really technically visionary guy, and Fargues had thought very deeply about certain ideas. And then they realized that basically Fargues's dream could actually be made real, and I think before that [00:48:35] people didn't quite realize how far this would go. So I just gave you that as an example, and that happens on a regular basis. That's maybe the reason why people have these programs and conferences, but it's hard to predict. So, you know, I wish I could measure a rate. Yes.  [00:48:50] Ben: You just need that marination. It's actually like, okay... oh, a weird thought that just occurred to me. Yeah. That this sort of just getting people to hang out and talk is unique in mathematics, because you can actually do real work by talking and writing on a whiteboard. And if you wanted to replicate this in some other field, you would actually need that house to be stocked with laboratory equipment or something, so that instead of just talking, people could actually poke at whatever the  [00:49:33] Semon: subject is. That would [00:49:35] be ideal, but that would be hard, because experiments are slow. 
The thing that you could imagine doing, or I could imagine doing, is if people were willing to share very preliminary data, then they could both look at something and figure out, oh, I have something to say about your findings. And I don't know to what degree that really happens at, say, biology conferences, because there is a lot of competitive pressure to be very deliberate in the disclosure of data, since it's sort of your biggest asset. Yeah.  [00:50:05] Ben: And how does mathematics not fall into that trap?  [00:50:11] Semon: That is a great question. In part, there are somewhat strong norms against that, because the community is small enough that everyone finds out, like, oh yeah, this person just scooped someone. There's a very strong norm against scooping. It's okay in certain contexts: if it's clear to everyone that somebody [00:50:35] could do this, and somebody does the thing, then it's sort of not really scooping. Sure. But word gets around about who had which ideas, and when people behave in a way that seems particularly adversarial, that has consequences for them. So that's one way in which mathematics avoids that. Another way is that it's actually true that different people have kind of different skills. It is a little bit less competitive structurally, because it isn't like everyone is working on the same three problems and everyone has all the money to go and just do the thing. And  [00:51:16] Ben: it's small enough that everybody can have a specialization, such that you can always do something that someone else can't.  [00:51:24] Semon: Often. I mean, that might depend on who you are. But yeah. It's more like it's large enough for that to be the case. Right? 
Like you [00:51:35] can develop some intuition about some area where other people might be able to prove what you're proving, but you might be much better at it than them. So people will be like, yeah, why don't you do it? That's helpful. That's useful. I mean, it certainly can happen that there's some area where everyone has the same tools, and then it does get competitive and people do start... sorry. I think in some ways it has to do with a diversity of tools. If every different lab has a tool which the other labs don't have, then there's less reason to compete. But also that has to do with the norms, right? The pressure of being scooped is a very harsh constraint. That's not... I mean, my understanding, I guess, is that that is largely imposed by the norms of the community itself, in the sense that a lot of, like, NIH grants are actually determined by scientific committees, [00:52:35] or committees of scientists. So,  [00:52:38] Ben: I mean, you could argue about that, right? Because  [00:52:41] Semon: don't...  [00:52:42] Ben: I mean, yes, but then those committees are sort of mandated by the structure of the funding agencies, right? And there's of course a feedback loop, and they've been so intertwined for decades that I'm unclear which way that causality runs. [00:53:02] Semon: Yeah. So those are my two guesses for how it works. One, there's just a very strong norm against this. And if you're the person with the idea, and then you put the other person on the paper because they were helpful, you don't lose that much. So you're not that disincentivized from doing it. 
In the end, people will find out who did what work, to some degree, even though officially credit is shared. And that means everyone can kind of get credit. [00:53:35]  [00:53:35] Ben: It seems like a lot of this does depend on  [00:53:38] Semon: scale. Yeah. It's very scale-dependent, because you can actually find out, right? And that's a trade-off, obviously, but maybe not as bad a trade-off in mathematics, because it's not really clear what you would do with a lot more scale. On the other hand, you don't know. If you look at, say, machine learning, this is a subject that's grown tremendously, and they have all these crazy research directions, which I think in the end can only happen because they've had so many different kinds of people look at the same set of ideas. So when you have a lot of people looking at something and they're empowered to try it, it is often true that progress goes faster. I don't really know why that would be false in mathematics.  [00:54:23] Ben: Do you want to say anything about choosing the right level of meta-ness? Hmm.  [00:54:28] Semon: Yeah. I guess this is a [00:54:35] personal question for everyone, almost. I mean, everyone who has some freedom over what they work on, which is actually not that many people. In any problem domain, whether that's science research or career or whatnot, or even in a company, this kind of bird-frog dichotomy is replicated: at what altitude do you work? Yeah. So, for example, in mathematics, you could either be someone who puts together lots of pieces and spends lots of time understanding how things fit together, or you can be someone who looks at a single problem and makes hard progress at it. 
Similarly, maybe in biology. I have a friend who was trying to decide whether she should be an individual contributor at a machine learning research company, or faculty. And that for her is in part a meta versus non-meta choice. She [00:55:35] really likes doing explicit work on something, being down to the ground; as faculty, she would have to do more coordination-based work, but you kind of have more scope. And in many areas, but not in all, doing the meta thing is a higher-status thing, or maybe it's not higher status, but it's better compensated. On a larger scale, obviously we have people who work in finance, who in some ways do kind of the most meta work, and they're compensated extremely well by society. But you need very talented people to work on problems down to the ground, because otherwise nothing will happen. You can't actually make progress by just rearranging incentive flows. And having the incentives on both sides of this be appropriately structured is a very, very challenging balancing act, because you need both kinds of people. But you need a larger system in which they work, and there's just no structural reason why the system would be compensating people appropriately, unless [00:56:35] there are specific people who are really trying to arrange for that to be the case. And that's very hard. Yeah. So everyone kind of struggles with this, and I think it sort of gets resolved based on personal preference. Yeah.  [00:56:54] Ben: I think that's... yeah. I liked that idea that, sort of by default, both status and compensation will flow to the more meta people, but that ultimately will be disastrous if taken to its logical conclusion. 
And so it's like, we need to sort of stand up for the people in the trenches.  [00:57:35] 
Jan 18, 2022 • 1h 3min

Scientific Irrationality with Michael Strevens [Idea Machines #43]

Professor Michael Strevens discusses the line between scientific knowledge and everything else, the contrast between what scientists as people do and the formalized process of science, why Kuhn and Popper are both right and both wrong, and more. Michael is a professor of Philosophy at New York University where he studies the philosophy of science and the philosophical implications of cognitive science. He's the author of the outstanding book "The Knowledge Machine" which is the focus of most of our conversation. Two ideas from the book that we touch on: 1. "The iron rule of science". The iron rule states that `[The Iron Rule] directs scientists to resolve their differences of opinion by conducting empirical tests rather than by shouting or fighting or philosophizing or moralizing or marrying or calling on a higher power`. In the book Michael makes a strong argument that scientists following the iron rule is what makes science work. 2. "The Tychonic principle." Named after the astronomer Tycho Brahe who was one of the first to realize that very sensitive measurements can unlock new knowledge about the world, this is the idea that the secrets of the universe lie in minute details that can discriminate between two competing theories. The classic example here is the amount of change in star positions during an eclipse dictating whether Einstein or Newton was more correct about the nature of gravity. Links Michael's Website The Knowledge Machine on BetterWorldBooks Michael Strevens talks about The Knowledge Machine on The Night Science Podcast  Michael Strevens talks about The Knowledge Machine on The Jim Rutt Show    Automated Transcript [00:00:35] In this conversation, Professor Michael Strevens and I talk about the line between scientific knowledge and everything else, the contrast between what scientists as people do and the formalized process of science, why Kuhn and Popper are both right and both wrong, and more. 
Michael is a professor of philosophy at New York University, where he studies the philosophy of science and the philosophical implications [00:01:35] of cognitive science. He's the author of the outstanding book The Knowledge Machine, which is the focus of most of our conversation. A quick warning: this is a very Tyler Cowen-esque episode. In other words, it's the conversation I wanted to have with Michael, not necessarily the one that you want to hear. That being said, I want to briefly introduce two ideas from the book, which we focus on pretty heavily. First is what Michael calls the iron rule of science. A direct quote from the book: the iron rule directs scientists to resolve their differences of opinion by conducting empirical tests, rather than by shouting or fighting or philosophizing or moralizing or marrying or calling on a higher power. In the book, Michael makes a strong argument that scientists following the iron rule is what makes science work. The other idea from the book is what Michael calls the Tychonic principle, named after the astronomer Tycho Brahe, who was one of the first to realize that very sensitive measurements can unlock new [00:02:35] knowledge about the world. This is the idea that the secrets of the universe lie in minute details that can discriminate between two competing theories. The classic example here is the amount of change in a star's position during an eclipse dictating whether Einstein or Newton was more correct about the nature of gravity. So with that background, here's my conversation with Professor Michael Strevens. [00:02:58] Ben: Where did this idea, this sort of conceptual framework that you came up with, come from? What's almost the story behind the story here? [00:03:10] Michael: Well, there is an interesting origin story, or at least it's interesting in a nerdy kind of way. 
So I was interested in teaching what philosophers call the logic of confirmation, how evidence supports or undermines theories. And I was interested in getting across some ideas from the 1940s and fifties that philosophers of science these days [00:03:35] look back on and think of as being a little bit naive and clueless. At some point, in trying to make this stuff appealing in the right sort of way to my students, so that they would see it's really worth paying attention to and not just completely superseded, I had a bit of a gear shift looking at it, and I realized that in some sense, what this old theory was a theory of wasn't the thing that we were talking about now, but a different thing. So it wasn't so much about how to assess how much a piece of evidence supports a theory or undermines it; it was more a theory of just what counts as evidence in the first place. And that got me thinking that this question alone could be an important one to think about. Now, I ended up, as you know, in my book The Knowledge Machine, putting my finger on that as the most important thing in all of science. And I can't say that at that point I had yet had that idea, but it was [00:04:35] kind of puzzling me why it would be that there would be this very objective standard for something counting as evidence that nevertheless offered you more or less no help in deciding what the evidence was actually telling you. Why would this be so important? At first I thought maybe it was just the sheer objectivity of it that's important, and I still think there's something to that, but the objectivity alone didn't seem to be doing enough. And then I connected it with this idea in Thomas Kuhn's book, The Structure of Scientific Revolutions, that science is a really difficult pursuit, that it is hard. And of course it's wonderful some of the time, but a lot of it
requires just that kind of perseverance in the face of sometimes very discouraging results. I got the idea that this very objective standard for evidence could be playing the same role that Kuhn thought was played by what he called the paradigm: providing a very objective framework, which is also a kind of safe framework, [00:05:35] like a game where everyone agrees on the rules and where people can feel more comfortable about the validity and importance of what they are doing, not necessarily because they would be convinced it would lead to the truth, but just because they felt secure in playing a certain kind of game. So it was a long process that began with this sense that something didn't seem right. It didn't seem right that these ideas from the 1940s and fifties could be so wrong as answers to the question philosophers of my generation were answering. [00:06:11] Ben: I love that. I feel in a way what you did is, step one, sort of synthesize Kuhn and Popper, and then go one step beyond them. It's this thing where... the concept that whenever you have two theories that seem equally right but are [00:06:35] contradictory, that is a place where you need more theory, right? Because you look at Popper and it's like, oh yeah, that seems right. But then you look at Kuhn and you're like, oh, that seems right. And then you're like, wait a minute, because they sort of can't both live in the same room without [00:06:56] Michael: adding something. Although there is actually something, I think, Popperian about Kuhn's ideas. 
And there are lots of things that are very un-Popperian, but Popper's basic idea is that science proceeds through refutation, and Kuhn's picture of science is a little bit like a very large-scale version of that. Where in Popper's story scientists are all desperately trying to undermine theories, in this great negative critical spirit, in Kuhn's they just assume that the prevailing way of doing things, the paradigm, is going to work out okay. But in presuming that, they push it to its breaking point. And [00:07:35] that process, if you take a few steps back, has the look of Popperian science, in the sense that scientists, but now unwittingly rather than with their critical faculties fully engaged, are taking the theory to a point where it just cannot be sustained anymore in the face of the evidence. And progress is made because the theory just becomes untenable; some other theory needs to be adopted. So at the largest scale, there's this process of successive refutation of theories. Now, for Kuhn, refutation is not quite the right word; that sounds too orderly and logical to capture what's happening. But nevertheless, theories are being annihilated by facts, in a way that's actually quite Popperian. I think that's interesting. [00:08:20] Ben: So you could almost phrase Kuhn as, like, systemic Popperianism, right? No individual scientist is trying to do refutation, but then the system eventually [00:08:35] refutes, and that is what the paradigm shift [00:08:37] Michael: is. That's exactly right. Oh, [00:08:39] Ben: that's fascinating. Another thing that I wanted to ask before we dig into the actual meat of the book is... this is almost a very selfish question, but why should people care about this? Like, I really care about it. 
And by "this" I mean theories of how science works, right? But I know many scientists who don't care. I've talked to them about it, and they're just like — you know. [00:09:12] Michael: And in a way that's completely fine. To drive a car, you don't need to know how the engine works, and in fact the best drivers may not have very much mechanical understanding at all. It's fine for scientists to be a part of the system and do what the system requires of them without really grasping how it works, most of the time. One way it becomes important is when people start wanting [00:09:35] to change science, thinking science might be improved in some ways — and there's always a little bit of that going on at the margin. So some string theorists now want to relax the standards for what counts as an acceptable scientific argument, so that the elegance or economy of an explanation can officially count in favor of a theory, as well as the empirical evidence in the old-fashioned sense. Or there's quite a bit of momentum for reform of the publishing system in science, coming out of things like the replicability crisis — we've been talking about science as a game, but science has been gamified to the point where it's being gamed. Yes. And so a certain kind of ambitious individual goes into science — not necessarily one who has no interest in knowledge, but once they see what the rules are, they cannot resist playing those rules to the limit. And what you get is a sequence of what scientists sometimes call the least publishable unit: tiny little [00:10:35] results that are designed more to be published and cited, to advance a scientist's career, than to be the most useful summary of research.
And then, even worse, you get scientists choosing their research direction less out of curiosity, or the sense that they can really do something valuable for the world at large, than because they see a narrower and shorter-term opportunity to make their own name. Now, that's not always a bad thing, but no system of rules is perfect, and as people exploit the rules more and more, the direction of science as a whole can start to veer a little bit away. It's a complicated issue, because if you change the rules, you may lose a lot of what's good about the system. The reforms may all look very noble and so on, but you can still lose some of what's good about the system as well as fixing what's bad. So I think it's really important to understand how the whole thing works before just charging in and making a whole series of reforms. [00:11:34] Ben: [00:11:35] Yeah. Okay. That makes a lot of sense. It's like: what are the actual core pieces that drive the engine? [00:11:42] Michael: So that's the practical side of the answer to your question of why people should care. But I also think it's a fascinating story. I love these kinds of stories, like the Kuhn story, where everything turns out to be working in a completely different way from the way it seems to be working, where the ideology turns out to be not such a great guide to the actual mechanics of the thing. Yeah, [00:12:03] Ben: yeah. I like that there are some people who just think it's fascinating. And my bias is also how it weaves through history, right? You have to really look at all of these fascinating case studies and ask: oh, what's actually going on there?
So actually, to build on two things you just said: could you make the argument that with the replicability crisis and [00:12:35] this idea of p-hacking, you're actually seeing the mechanisms that you described in the book in play? It used to be that having a good p-value was considered sufficient evidence, but we now see that having that p-value isn't actually predictive. And so now everybody is starting to say: well, maybe using the p-value as evidence is no longer sufficient. And because the observations didn't match what is considered evidence, what is considered evidence is evolving. Is that basically a case of this? [00:13:29] Michael: Exactly. That's exactly right. So significance testing is a [00:13:35] particular kind of instantiation of the broader set of rules — this whole rule-based approach to science, where you set things up so that it's very clear what counts as publishable evidence: you have to have a statistically significant result, and p-value testing is the most widespread way of thinking about statistical significance. So it's all very straightforward — you know exactly what you have to do. I think a lot of great scientific research has been done under that banner. Having the rules be so clear and straightforward — rather than just a matter of the referees who referee for journals making their own minds up about whether a result looks like a good one or not — has really helped science move forward, and given scientists the security they need to set up the research programs that they've set up.
It's all been good. But because it sets up this very specific rule, it's possible for the right kind of Machiavellian mind to [00:14:35] look at those rules and say: well, let me see. I see some ways — at least in some domains of research where data is plentiful, or fairly easy to generate — that I can officially follow the rules, so that technically speaking what I'm publishing is statistically significant, and yet, if you take a step back, what happens is you may end up with a false result. You know, John Ioannidis, one of the big commentators on this stuff, has "Why Most Published Research Findings Are False" as the title of one of his most famous papers. So you need to step back and say: okay, the game was working for a while — the game aligned people's behavior with what was good for all of us. Right. But once certain people started taking advantage of it, in certain fields at least, it started not working so well. We want to hang on to the value we get out of having [00:15:35] very clear, objective rules — objective in the sense that anyone can make a fair judgment about whether the rules are being followed or not — but somehow get the alignment back. [00:15:46] Ben: Yeah. So that game went out of whack, but then there's the broader metagame — that's the consistent thing. And then also, you mentioned string theory earlier, and as I was reading the book — I don't think you call this out explicitly, but I feel like there are a number of domains that people would think of as science now, but that by your iron rule would not count. String theory being one of them, where we've sort of reached the limit of observation, at least until we have better equipment.
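[Editor's note: a small illustration of the gaming described above, not from the conversation. This hypothetical simulation shows why "run many tests, report whichever clears p < 0.05" inflates false positives: every effect below is null by construction, so every "significant" study is a false positive.]

```python
import random

def significant_study_rate(n_tests, n=30, trials=2000, z_crit=1.96, seed=1):
    """Fraction of simulated studies reporting 'significance' when a
    researcher runs n_tests independent tests of truly null effects
    and calls the study positive if ANY test clears p < 0.05 (two-sided).
    Each test is a z-test on the mean of n draws from N(0, 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        for _ in range(n_tests):
            sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
            z = (sum(sample) / n) * n ** 0.5  # known sd = 1
            if abs(z) > z_crit:               # two-sided p < 0.05
                hits += 1
                break
    return hits / trials

# One honest test: about 5% of null studies come out 'significant'.
# Ten shots at significance: roughly 1 - 0.95**10, i.e. about 40%.
print(significant_study_rate(1))
print(significant_study_rate(10))
```

The rule "p < 0.05" is perfectly objective, which is exactly what makes it gameable: the researcher followed the letter of the rule on every individual test.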
Another [00:16:35] one that came to mind was a lot of evolutionary arguments, where, because they're based on something that's in the past, there's sort of no way to gather additional evidence. Would you say that you actually have a fairly strict bound on what counts as science? [00:16:59] Michael: It is strict, but it's not my formulation — this is the way science really is. The point of science is to develop theories and models and so on, and then to empirically test them. And a part of that activity is just developing the theories and models. So it's completely fine for scientists to develop models in string theory, and evolutionary models, that run way ahead of the evidence — where it's practically very difficult to come up with evidence. I don't think that in itself is [00:17:35] unscientific. But then the question of course immediately comes up: okay, so now what do we do with these models? And the iron rule says there's only one way to assess them, which is to look for evidence. So what happens when you're in a position — with string theory, or with some models in evolutionary psychology in particular — where there just is no evidence right now? There's a temptation to find other ways to advance those theories. The string theorists would like to argue for string theory on the grounds of its unifying power, for example. The evolutionary psychologists, I think, rely on a kind of intuitive appeal, or just a sense that there's something about the model that sort of feels right — that it really captures the experience of being a human being who is, say, I don't know, sexually jealous or something like that. And that's just not science.
That is not the sort of thing that, in general, gets published in scientific journals. But the [00:18:35] question has come up: maybe we are being too strict. Maybe we would encourage the creation of more useful, interesting, illuminating, explanatorily powerful models and theories if we allowed them to gain prestige and scientific momentum in ways other than the very evidence-focused way. Or maybe it would just open the gates to a bunch of empty, idle speculation that would weigh science down and distract scientists from doing the stuff that has actually resulted in 300 years or so of scientific progress. [00:19:12] Ben: And your argument, for the latter, would be... [00:19:21] Michael: Don't rush in. I would say: think carefully before you do it. [00:19:25] Ben: I find that very reasonable. Another place where I felt like your framework — [00:19:35] I'm not quite sure what the right word is — where there was some friction, is with the Tychonic principle, the need to find very minute differences between what the theory would predict and the reality. In areas you might call complex systems, or emergent behavior, being able to explain how the building blocks of a system work does not actually help you make predictions about that system. Do you have a sense of how you expect that to work out with the iron rule? Because when there are so many parameters, you could argue either "we predicted it" or "we didn't predict it." [00:20:34] Michael: Yeah, [00:20:35] no. Right.
So sometimes the predictions are so important that people will do the work necessary to really crank through the model. The weather forecast is the best example of that: to get a forecast for five days' time, you just spend a lot of money gathering data and running simulations on extremely expensive computers. But for almost all of science, there just isn't the funding for that, and so it's never going to be practically possible to make those kinds of predictions. I think these models are capable of making other kinds of predictions, though. Even in the case of the weather models: without being able to predict ten days in advance, as long as you relax your demands and just want a general sense of, say, whether the climate is going to get warmer, you can make do with many fewer parameters. Actually, in a way that's not the greatest example, because the climate is so complicated that [00:21:35] even to make these much less specific predictions you still need a lot of information and computing power. But I think most science of complex systems hinges on relaxing the demands for specificity of the prediction, while still demanding some kind of prediction or explanation. And sometimes what you do is say: never mind prediction — just give me a retrodiction, and see if we can explain what actually happened. But the explanation has to be anchored in observable values of things. Some economic models, and evolutionary models, are a good example of this.
Once we've built the model, after the fact we can dig up lots of bits and pieces that will show us the course of events. Say we never could have predicted that evolutionary change would move in a certain direction — by getting the right fossil evidence and so on, we can see it actually did [00:22:35] move in that direction and conforms to the model. Though what we're often doing is actually getting the parameters of the model from the observation of what actually happened. So these are all ways that complex-systems science can be tested empirically, one way or [00:22:52] Ben: another. Yeah. The thing that I guess I'm hung up on is: if you relax the specificity of the predictions that you demand, it makes it harder to compare theories, right? Newton and Einstein had drastically different models of the world, but you need very, very specific predictions to compare between them. And so if the whole thing is that in order [00:23:35] to get evidence you need to relax specificity, it makes it harder to compare [00:23:41] Michael: theories. No, that's very true. If all you demand is that theories explain why things fall to the floor when dropped, then Newton looks good, Einstein looks good, Aristotle looks good. Exactly. Yeah. And one reason physics has been able to make so much progress is that the models are simple enough that we can make these very precise predictions that distinguish among theories. The thing is that in the complex-systems sciences, there's often a fair amount of agreement on the underlying processes. With Newton versus Einstein, what you have is a difference in the fundamental picture of space and time and force and so on.
But if you're doing something like economics or population ecology — looking at ecosystems, animals eating one another and so on — [00:24:35] the underlying processes are in some sense fairly uncontroversial. The hard part is finding the right kind of model to put them together: a model much simpler than the way they're actually put together in reality, but that still captures enough of those underlying processes to make good predictions. Because that problem is a little bit different, the situation is less a matter of distinguishing between really different fundamental theories, and more a case of refining models to see what needs to be included, or what can be left out, to make the right kinds of predictions in particular situations. You still need a certain amount of specificity. Obviously, if you really don't care about anything beyond the fact that things fall downwards rather than up, then you're not going to be able to refine your models very far before the evidence [00:25:35] runs out of anything further to tell you. That's very true. But typically the complex-systems kinds of models are rather more specific than that. Usually they're too specific: they say something very precise that doesn't actually happen. Right. And what you're doing is trying to bring that particular prediction closer to what really happens. That gives you something to work towards — bringing the prediction towards the reality — while at the same time not demanding of the model that it already make a completely accurate prediction. [00:26:10] Ben: Yeah. That makes sense. So, on another track: what do you think about theory-free predictions?
So the extreme exam question would be: could a very large neural net do science? If you had no theory at all, but [00:26:35] incredibly accurate predictions, how does that square with the iron rule, [00:26:41] Michael: in your mind? That's a great question. So when I formulate the iron rule, I build the notion of explanation into it. Yeah. And I think that's functioned in an important way in the history of science, especially in fields where explanation is actually much easier than prediction, like evolutionary modeling, as I was just saying. Now, if your model is, in effect, a net that just makes these predictions, it looks like it's not really providing you with an explanatory theory. The model is not in any way articulating, let's say, the causal principles according to which the things it's predicting actually happen. And you might think, for that reason, it's not science — though of course such a thing could always be an aid; almost anything can have a place in science as a tool, as a stepping stone. Right. But could you [00:27:35] say: okay, we've now finished doing the science of economics, because we've found out how to build these neural networks that predict the economy, even though we have no idea how they work? I don't think so. I don't think that's really satisfying, because it's not providing us with the kind of knowledge that science is working towards. But I can imagine someone saying: well, maybe that's all we're ever going to get, and what we need is a broader conception of empirical inquiry that doesn't put so much emphasis on explanation. I mean, what do you want — to be blindsided by the economy every single time, because you insist on an explanatory theory? Yeah.
Or do you want to actually have some ability to predict what's going to happen, to make the world a better place? Of course we want to make the world a better place. So we've focused on building these explanatory theories; we've put a lot of emphasis, I would say, on getting explanations right. [00:28:35] But scientists have always played around with theories that seem to get the right answer for reasons that they don't fully comprehend. Yeah. And one possible future for science — or empirical inquiry more broadly speaking — is that that kind of activity comes to predominate, rather than just being, as I said earlier, a stepping stone on the way to truly explanatory theories. [00:29:00] Ben: I sort of think of it in terms of compression, where the thing that is great about explanatory theories is that they take all the evidence and reduce the dimension drastically. So, thinking through what a world where non-explanatory prediction is fully admissible would look like: it just leads to some exponential [00:29:35] explosion of whatever is doing the explaining, right? Because there's never a compression from the evidence down to a theory. [00:29:47] Michael: Although it may be that with these very complicated systems, even an explanatory model is incredibly uncompressed. Yeah, exactly. I think it's kind of amazing — this is one of my other interests — the degree to which it's possible to build simple models of complicated systems and still get something out of them. Not precise predictions about what's going to happen to particular components in the system — whether this particular rabbit is going to get eaten, say,
tomorrow or the next day — but more general models about how, say, increasing the number of predators will have certain effects on the dynamics of the system. The kinds of things population ecologists do with these models is answer questions — this is a bit of an example of what I was saying earlier [00:30:35] about making predictions that are real predictions, but a bit more qualitative. One of the very first uses of these models was to answer the question of whether generally killing a lot of the animals in an ecosystem will lead the prey populations to increase, relatively speaking, or decrease — it turns out that in general they increase. I think this was in the wake of World War One, in Italy. During the war there was less fishing — the fishermen were off being sailors, and there was also naval warfare, I guess, though maybe not so much in the Mediterranean — but in any case, there was less fishing. So it was sort of the opposite of killing off a lot of animals in the ecosystem. And the idea was to explain why certain patterns — the increases and decreases in the populations of predator and prey — were observed. Some of the first population ecology models were developed to explain that. And these models are tiny: [00:31:35] here you are modeling this ocean that's full of many, many different species of fish, and yet you just have a few differential equations — they look complicated, but the amount of compression is unbelievable. And the fact that you get anything sensible out of it at all is truly amazing. So we've kind of been lucky so far. Maybe we've just been picking the low-hanging fruit, and there's a lot of that fruit still to be had. Eventually, though, maybe we're just going to have to — thankfully there are supercomputers — do science that way. Yeah.
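[Editor's note: the models described here are the Lotka-Volterra predator-prey equations, which Volterra developed to explain exactly this Adriatic-fisheries observation. A rough sketch, with illustrative made-up parameter values: harvesting both species at a proportional rate shifts the time-averaged populations toward prey, so suspending fishing during the war raises the predator share.]

```python
def lotka_volterra_averages(harvest, a=1.0, b=0.5, c=0.5, d=0.25,
                            x0=2.0, y0=1.0, dt=0.001, steps=200_000):
    """Forward-Euler integration of the Lotka-Volterra equations with a
    proportional harvesting term applied to both species:
        dx/dt = a*x - b*x*y - harvest*x    (prey)
        dy/dt = d*x*y - c*y - harvest*y    (predator)
    Returns the time-averaged (prey, predator) populations."""
    x, y = x0, y0
    sum_x = sum_y = 0.0
    for _ in range(steps):
        dx = (a * x - b * x * y - harvest * x) * dt
        dy = (d * x * y - c * y - harvest * y) * dt
        x += dx
        y += dy
        sum_x += x
        sum_y += y
    return sum_x / steps, sum_y / steps

# Volterra's principle: harvesting raises the prey share of the
# ecosystem, so suspending fishing raises the predator share.
prey_war, pred_war = lotka_volterra_averages(harvest=0.0)    # wartime: no fishing
prey_fish, pred_fish = lotka_volterra_averages(harvest=0.3)  # peacetime fishing
print(pred_war / (prey_war + pred_war), pred_fish / (prey_fish + pred_fish))
```

For the exact equations, the time averages over an orbit equal the equilibrium values, (c + h)/d for prey and (a - h)/b for predators, which is how Volterra derived the effect analytically; the simulation just makes it visible with a handful of lines.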
[00:32:06] Ben: Or we develop an entirely different way of attacking those kinds of systems. I feel like our science has been very good at going after compressible systems — I'm not even sure how to describe it — but I feel like we're starting to run into all of these systems that aren't as amenable [00:32:35] to going down to more and more detail. And so I always speculate whether we actually need new philosophical machinery to grapple with that. Yeah. [00:32:51] Michael: I mean, first of all, there might be new modeling machinery, new kinds of mathematics, that make it possible to compress things that were previously incompressible. But it may just be — you look at a complicated system, like an ecosystem or the weather, and you can see that small differences in the way things start out can have big effects down the line. What seems to happen in the cases where we can have a lot of compression is that the various effects of small variations in initial conditions kind of cancel out. Yeah. So maybe you change things [00:33:35] around and it's different fish being eaten, but still the overall number of each species being eaten is about the same — it kind of all evens out in the end, and that's what makes the compression possible. But if that's not the case — if these small changes make differences to the kinds of things we're trying to predict; people of course often associate this with the metaphor of the butterfly effect — then I don't know if compression is even possible.
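[Editor's note: a toy illustration of this point, not from the conversation. The chaotic logistic map is a hypothetical stand-in for any butterfly-effect system: two trajectories from nearly identical starting points diverge completely at the micro level, yet their long-run averages — a "high-level pattern" — agree closely.]

```python
def logistic_trajectory(x0, r=4.0, steps=20_000, burn_in=100):
    """Iterate the chaotic logistic map x -> r*x*(1-x), discarding an
    initial burn-in, and return the remaining trajectory."""
    xs = []
    x = x0
    for i in range(steps):
        x = r * x * (1.0 - x)
        if i >= burn_in:
            xs.append(x)
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-9)  # a butterfly-sized perturbation

# Pointwise, the two runs end up totally different...
max_gap = max(abs(u - v) for u, v in zip(a, b))
# ...but the long-run averages barely move: both settle near the
# invariant-measure mean of the r = 4 map, which is 0.5.
avg_a = sum(a) / len(a)
avg_b = sum(b) / len(b)
print(max_gap, avg_a, avg_b)
```

Whether the system you care about behaves like `max_gap` (inflation hinging on one shopper in Ohio) or like `avg_a` (small effects drowning out in the aggregate) is exactly the question of whether compression is possible.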
If you really want to predict whether there's going to be an increase in inflation in a year's time or a decrease, and that really does hinge on the buying decisions of some single parent somewhere in Ohio, then you just need to figure out what the buying decisions of every single person in the economy are, and build them in. And yet, at the same time — everyone loves the butterfly effect, [00:34:35] and yet the idea that the rate of inflation is going to depend on this one decision by somebody walking down the aisles of a supermarket, that just doesn't seem right. It does seem that things kind of cancel out, that these small effects mostly just get drowned out, or that they shift things around without changing the high-level qualitative patterns. Yeah. Well, [00:34:56] Ben: I mean, this is a digression, but I feel like that touches right on: do you believe in the forces theory of history, or more the great man theory of history? People make arguments both ways, and I think we just haven't figured that out. Actually, speaking of the great man theory of history: an amazing thing about your book is that it's very humanistic, in the sense of — oh, scientists are people, they do lots of things, they're [00:35:35] not just science machines. And you have this beautiful analogy of a coral reef: scientists are like the living polyps, and they build up these artifacts of work, and then they go away, and the new scientists continue to build on that.
And I was wondering: do you see that being at odds with the fact that there's so much tacit knowledge in science — in the sense that, for most fields, you probably could not reconstruct them based only on the papers, right? You have to talk to the people who have done the experiments. Do you see any tension [00:36:23] Michael: there? Well, it's true that the metaphor of the coral reef doesn't capture that aspect of science. So I think, on the one hand, what is captured by the metaphor is the idea that [00:36:35] what science leaves behind in terms of evidence is interpreted anew every generation. Each new generation of scientists comes along and looks at the accumulated facts — this makes it sound a little bit fanciful, but in some sense that's what's going on — looks at the facts and says: well, okay, what are these really telling me? Yeah. And they bring their own human preconceptions or biases — and the preconceptions and biases are not necessarily bad things. They look at it in the light of their own minds and they reinterpret things. And so the scientific literature is always just a kind of starting point for this thought, which really changes from generation to generation. On the other hand, at the same time, as you just pointed out, scientists are being handed certain kinds of knowledge [00:37:35] which are not for them to create anew, but rather just to learn: how to use various instruments, how to use various statistical techniques, and so on. So there's this continuity to the knowledge that is, as I say, not captured at all by the reef metaphor. Both of those things are going on. There's the research culture — well, maybe one way to put it:
the culture both changes and stays the same. And it's important that it stays the same, in the sense that people retain their know-how for using these instruments — until eventually an instrument becomes obsolete, and then that bit of culture is completely lost, which is okay most of the time. But on the other hand, there is always this fresh reinterpretation of the evidence, simply because the interpretation of evidence is a rather subjective business. And what the preceding generations are handing on should be seen more as a kind of [00:38:35] data trove than as a body of established knowledge. But [00:38:43] Ben: then I think the question is: if what counts as evidence changes, and all you are getting is this data trove of things that people previously thought counted as evidence — all the things that were thrown out and not included in the papers — doesn't that make it harder to reinterpret? [00:39:12] Michael: Well, the standards for what counts as evidence I think of as being unchanging, and that's an important part of the story here. So what's being passed on is supposed to be evidence. Now, of course, some of it will turn out to be the result of faulty measurements, or be otherwise suspicious — some of it even outright fraud, perhaps. And [00:39:35] to some extent that's why you wouldn't want to just take it for granted — that side of things is not really captured by the reef metaphor either. Yeah. But I think the important thing that is captured by the metaphor is this idea that what really is the heritage of science, in terms of theory and evidence, is the evidence itself. Yeah.
It's not so much a body of knowledge — although, you know, it's not that everyone has to start from scratch every generation — but this incredibly valuable information, which may be a little bit complicated in some corners, that's true, but which has been generated according to the same rules — or, you know, intended to satisfy the same rules — that we're trying to satisfy today. Yeah. And so it's just as [00:40:35] trustworthy, or untrustworthy, as the evidence we're getting today. And there it is, recorded in the annals of science. [00:40:41] Ben: So the thing that's important is the process and the filtering mechanism, more than the specific artifacts that [00:40:55] Michael: come out. Well, part of what I'm getting at with that metaphor is that scientists produce the evidence, and they have their own interpretation of that evidence, but then they retire, they die. And that interpretation doesn't need to be important anymore, and isn't important anymore. Of course, they may persuade some of their graduate students to go along with their interpretation; they may be very politically powerful, and their interpretation may last for a few generations. But typically, ultimately, that influence wanes, and what really matters is the data trove. Yeah. As you said, it's not perfect — we have to regard it with a [00:41:35] somewhat skeptical eye, but not too skeptical. And that's the real treasure house [00:41:43] Ben: of science. And something that I was wondering: you have a sentence where you say a non-event such as science's non-arrival happens, so to speak, almost everywhere. And I would add, it happens almost everywhere, all the time. And this is wildly speculative —
But do you think there would have been any way to predict that science would happen, or to know that something was missing? So could we, then or now, say "oh, we're missing something crucial"? If that makes sense: could we look at the fact that [00:42:35] science consistently failed to arrive and ask whether there is something else, some other kind of intellectual machinery, that also has not arrived? Do you think it's possible to look for that? [00:42:51] Michael: Oh, you mean [00:42:52] Ben: now? Yeah. Or could someone have predicted science in the past? [00:42:57] Michael: In the past? I mean, okay, clearly there were a lot of highly motivated thinkers in antiquity. Who, I assume, would have loved to settle the question of, say, the configuration of the solar system; you had these various models floating around for thousands of years. I'm not sure everyone knows this, but by the time of the Roman empire, say, the model with the sun at the center was well known. The model with the earth at the center was of course well known. And the model where the earth is at the center, but the [00:43:35] sun rotates around the earth and the inner planets rotate around the sun, was also well known. And in fact, this always surprises me, that was if anything the predominant model in the early middle ages in Western Europe. It had been received from late antiquity, from the writers at the end of the Roman empire, and it was thought to be the going story. Yeah. It's complicated, of course; there are many historical complications. But I take it that someone like Aristotle would have loved to have really settled that question and figured it out for good. He had his own ideas.
Of course, he thought the earth had to be at the center because that fit with his theory of gravity, for example; having the sun at the center just wouldn't have worked, and for various other reasons. So it would have been great to have invented this technique for actually generating evidence that in time would be seen by everyone as deciding decisively in favor of one of these theories over the others. So they must have really wanted it. [00:44:35] Did they themselves think that something was missing, or did they think they had what they needed? I think maybe Aristotle thought he had what was needed. He had the kind of philosophical arguments based on establishing coherence among his many amazing theories of different phenomena: his story about falling bodies, his story about the solar system, as he of course would not have called it, the planets and so on. It all fit together so well, and it was so much better than anything anyone else came up with, that he may have thought: this is how you establish the truth of the geocentric system, with the earth at the center. So now I don't need anything like science, there doesn't need to be anything like science, and I'm not even thinking about the possibility of something like science. Yeah. And to some extent that explains why someone like Aristotle, who seemed capable of having almost any idea that could be had, nevertheless did [00:45:35] not seem to see a gap, to see the need, for example, for precise quantitative experiments, or even the point of doing them. Yeah. That's the best, I think that's the most I can say: looking back in history, I don't see that people felt there was a gap.
And yet at the same time, they were very much aware that these questions were not being settled. [00:46:04] Ben: It just makes me wonder whether at some period in the future, people will look back at us the way we look back at, I don't know, the Mayans, and say: how could you not have figured out such-and-such a method? I just find it thought-provoking to think, you know, how do you see your blind spots? [00:46:32] Michael: Yeah. Well, I'm a philosopher, and in [00:46:35] philosophy it's still much like it was with Aristotle. We have all these conflicting theories of, say, justice: what really makes a society just, what makes an act just, or even what makes one thing the cause of another. And we don't really know how to resolve those disputes in a way that will establish any kind of consensus. We also feel very pleased with ourselves, as I take it Aristotle was: we have these really great arguments for the views we believe in. That's still rather more optimistic, maybe, than we ought to be, thinking we'll be able to convince everyone else we're right. In fact, what we really need, and philosophers do have this thought from time to time, is some new way of adjudicating between philosophical theories. This was one of the great movements of early twentieth-century philosophy: logical positivism can be looked at as an attempt to build a methodology where it would be possible to use, [00:47:35] in effect, scientific techniques to adjudicate among philosophical theories, mainly by throwing away most of the theories as meaningless and insufficiently connected to empirical facts. So it was a brutal method, but it was an idea. The idea was that there was a new method to be had that would do for philosophy
what science did for natural philosophy, for physics and biology and so on. That's an intriguing thought. Maybe that's what I should be spending my time thinking about. [00:48:12] Ben: I do want to be respectful of your time. One last thing I'd love to ask about, and you talked about this a bit in the book: do you think the way we communicate science has become almost too sterile? One of my going concerns [00:48:35] is the way in which everybody has become super, super specialized. Once a debate is settled, creating these very sterile artifacts is useful and powerful. But, as you pointed out, as a mechanism for actually communicating knowledge they're not necessarily the best. And because we've held up these sterile papers as the most important thing, it's made it hard for people in one specialization to actually understand what's going on in another. So do you think we've over-sterilized it? You know, we talked earlier about people who want to change the rules, and I'm very much with you that we should be skeptical about that, but at the same time you see this going [00:49:35] on. [00:49:35] Michael: Yeah. Well, I think there's a real problem here regardless, whatever the rules: the problem of communicating something as complicated as scientific knowledge, or really, I should say, the state of scientific play, because often what needs to be communicated is not just something that's now been established beyond any doubt, but here's what people are doing right now.
Here's the kind of research they're doing, here are the kinds of obstacles they're running into. To put that in a form where somebody can just come along and digest it all easily is, I think, incredibly difficult, no matter what the rules are. Yeah. And it's probably not the best use of most scientists' time to try to present their work in that way; it's better for them just to go to the rock face and start chipping away at their own little local area. So what you need is either for scientists to take time out from time to time, and there do exist these review [00:50:35] publications which try to do this job, so that people in related fields can see what's going on. Typically "related fields" means a PhD in the same subject; they're usually for the nearest neighbors, but often they're written in ways that are pretty accessible, I find. So you create a publication that simply has a different set of rules: the point is not in any way to evaluate the evidence, but simply to give a sense of the state of play. To reach further afield, you have science journalists, though what's going on with newspapers and magazines right now is not very good for serious science journalism. And then you have scientists, and people like me, who for whatever reason take some time out from what they usually do for a self-standing project to explain what's going on. Those activities all, to some extent, take place outside the narrow view of the [00:51:35] iron rule. And I think it's going okay, given the difficulty of the task. It seems to me the knowledge, the information, is being communicated in a somewhat effective, accessible way. If anything, the real barriers to
some kinds of fruitful interdisciplinary thinking are not about communication: it's just hard for one mind to take on all the stuff that needs to be taken on. No matter how effectively, even brilliantly, it's communicated, the world is just this very complicated place. Yeah. You know, one thing I'm interested in historically, I just find it fascinating, is the fruitfulness of certain kinds of research programs that came out of fighting serious wars, in particular the Second World War. You threw a bunch of people together and they had to solve some problem, like [00:52:35] building an atomic bomb, usually something horrendous, or a device for the guns on bombers. Rather than the gunner having to lead the target very skillfully, you know, putting your aim ahead of where the enemy fighter is so that by the time your bullets get there the plane arrives at the same moment, they built these really sophisticated analog computers that basically did the job, so the gunner, some 19-year-old, just pointed at the plane. Yeah. And a lot of problems to do with logistics and weather forecasting and so on. The need to have that done threw together people from very different areas in engineering and science, and it resulted in this amazing explosion [00:53:35] of knowledge. It's a very attractive period in the history of human thought. When you go back and look at some of the things people were writing in the late forties and fifties, about computers, how the mind works, and so on, I think some of that came out of this almost scrambling process that happened when these very specific military engineering problems were solved by throwing together people who never normally would have talked to one another. Maybe we need a little bit of that. Not the war. Yeah.
[00:54:08] Ben: I have a friend who described this as a "serious context of use." And, I mean, I'm incredibly biased towards looking at that period. [00:54:20] Michael: I guess it's connected to what you're doing. [00:54:23] Ben: Absolutely. Do you know who... yeah. He actually wrote a series of memoirs, and they're reprinting it; I wrote the foreword to it. [00:54:35] So I'm like, I agree with you very strongly. And I always find that fascinating, because there's this paradigm that got implemented after World War II where you think, oh, theory leads to applied science, which leads to technology. But you actually see all these places where trying to do a thing makes you realize a new theory. Right. And you see a similar thing with the steam engine: that's how we get thermodynamics; it's a great piece of work. That's right. Yeah. So that absolutely plays to my biases: not doing interdisciplinary things for their own sake, just saying, let's get these people in a room, but having very serious contexts of use that can drive people, having [00:55:32] Michael: a problem to solve. It's not just a case [00:55:35] of enjoying chatting about what you each do and then going back to the thing you were doing before, feeling enriched but otherwise unchanged. [00:55:46] Ben: It's interesting, though, because the incentives in that situation fall outside of the iron rule, right? It's like, you don't care about... I guess to some extent you could argue the thing needs to work.
And so if it works, that is evidence that your theory is [00:56:09] Michael: correct. That's true. But, you know, I think, as you're about to say, engineering is not science, and the iron rule is not overseeing engineering. Engineering is about making things that work; producing evidence for or against various ideas is just a kind of side effect. [00:56:27] Ben: But then it can spark those ideas that people then take up. I [00:56:35] mean, in my head, I think of what I call phenomena-based cycles, where there's this big cyclical movement: you discover a phenomenon, then you theorize it, and you use that theory to, I don't know, build better microscopes, which let you make new observations, which let you discover new phenomena. [00:57:00] Michael: It's really difficult to tell where things are going. Yeah. I think the discovery of plate tectonics is another good example of this. You see all of these scientists doing things, certainly not looking into possible mechanisms for continental drift, but instead getting interested, for their own personal reasons, in things that don't sound very exciting, like measuring the ways the orientation of the magnetic field has changed over past history, by basically digging up bits of rock and looking at the orientations of the [00:57:35] iron molecules, or whatever, locked in them. I mean, it's not completely uninteresting, but in itself it sounds like a respectable, and probably fairly dull, sideline in geology. And then things like developing the ability to make very precise measurements of the gravitational field. Those things turn out to be
key to understanding this amazing fact about the way the whole planet works. Yeah. But nobody could have understood in advance that they would play that role. What you needed was for a whole bunch of, well, it's not exactly chaos, but a kind of diversity that might look rather wasteful from a very practical perspective, to blossom. Yeah. [00:58:29] Ben: I truly do think that moving knowledge forward involves being almost [00:58:35] irresponsible, right? If you had to make a decision, should we fund these people who are going around measuring magnetic fields just for funsies? From a purely rational standpoint, it's like, no. Yeah. [00:58:51] Michael: The reason that sort of thing happens is that a bunch of people decide they're interested in it, and persuade their students to do it too, whether or not they can explain it to the rest of the world. Actually, there was also a military angle on that, I don't know if you know it. Some of the mapping of the ocean floors that was also crucial to the discovery of plate tectonics in the fifties and sixties was done by people during the war with the first sonar systems, who were supposed to be finding submarines or whatever, but decided, hey, it would be kind of interesting just to turn the thing on, leave it on, and see what's down there. Yeah. And that's what they did, and that's how some of those first maps started being put together. [00:59:35] [00:59:36] Ben: That's actually one of my concerns about trying to do science with neural networks: how many times do you see someone just go, huh, that's funny? So far, computers can't do that.
They can find what they're setting out to find, or they have a very narrow window of what is considered evidence. And perhaps, in your framework, the thought "huh, that's funny" is someone's brain suddenly taking something as evidence that wasn't supposed to be evidence. Right? You're doing one set of experiments and you notice this completely different thing, and you go, oh, maybe that's actually a piece of evidence for something completely different. And then it opens up a rabbit hole. [01:00:31] Michael: Yeah. This is another one of those cases with [01:00:35] a kind of creative tension. I do think it's incredibly important that scientists not get distracted by things like this; on the other hand, it would be terrible if scientists never got distracted by things like this. And I guess one thing I see in the iron rule is that it's a kind of social device for making scientists less distracted, without putting on the kind of mental fetters that would make it impossible for them ever to become distracted. [01:01:05] Ben: And maybe the distraction, the "oh, that's funny," is the natural state of human affairs. [01:01:12] Michael: Well, I think so. I think otherwise we would all be like Aristotle, and it turns out it was better for science that we're actually a little bit less curious and variable than he was. [01:01:24] Ben: So one could almost say that the iron rule, would you say it's accurate that the iron rule is absolutely necessary,
but so [01:01:35] is breaking it, in the sense that if you could somehow enforce that every single person obeyed it all the time, science... well, we actually make serendipitous discoveries, and in order to make those you need to break the rule. But you can't have everybody running around breaking the rule all the [01:01:57] Michael: time. Let me put it a little bit differently, because I see the rule as not so much a rule for life and for thinking as a rule for publishing activity. So you're not technically breaking the rule when you think "huh, that's funny" and go off and start thinking your thoughts. You may not be moving towards the kind of scientific publication that satisfies the rule, but nor are you breaking it. But if all scientists, as it were, lived by the iron rule, not just when they took themselves to be playing a game, but in every way they thought about [01:02:35] the point of their lives as investigators of nature... well, people are just not like that; it's hard to imagine that could ever really happen. Although, to some extent, I think our science education system does encourage it. Yeah. But if that really happened, it would probably be disastrous. It's like the pinch of salt: you only want a pinch, but without it, it's not good. Yeah. [01:03:06] Ben: That seems like an excellent place to end. Thank you so much for being part of Idea Machines. [01:03:35]
undefined
Jan 2, 2022 • 1h 14min

Distributing Innovation with The VitaDAO Core Team [Idea Machines #42]

A conversation with the VitaDAO core team. VitaDAO is a decentralized autonomous organization, or DAO, that focuses on enabling and funding longevity research. The sketch of how a DAO works is that people buy voting tokens that live on top of the Ethereum blockchain and then use those tokens to vote on various action proposals for VitaDAO to take. This voting-based system contrasts with the more traditional model of a company, which is a creation of law or contract, raises capital by selling equity or acquiring debt, and is run by an executive team responsible to a board of directors. Since technically nobody runs VitaDAO the way a CEO runs a company, I wanted to try to embrace the distributed nature and talk to many of the core team at once. This was definitely an experiment! The members of the core team in the conversation, in no particular order: Tyler Golato, Paul Kohlhaas, Vincent Weisser, Tim Peterson, Niklas Rindtorff, Laurence Ion. Links: VitaDAO Home Page, An explanation of what a DAO is, Molecule. Automated Transcript [00:00:35] In this conversation I talked to a big chunk of the VitaDAO core team. VitaDAO is a decentralized autonomous organization, or DAO, that focuses on enabling and funding longevity research. We get into the details in the podcast, but a sketch of how a DAO works is that people buy voting tokens that live on top of the Ethereum blockchain [00:01:35] and then use those tokens to vote on various action proposals for VitaDAO to take. This voting-based system contrasts with more traditional models of the company, which is a creation of law or contract, raises capital by selling equity or acquiring debt, and is run by an executive team responsible to a board of directors. Since technically nobody runs VitaDAO the way a CEO runs a company, I wanted to try to embrace the distributed nature and talk to many of the core team at once. This was definitely an experiment.
I realize it can be hard to tell voices apart on a podcast, so I'll put a link to a video version in the show notes. Without further ado, here's my conversation with VitaDAO. What I want to do, so that listeners can put a voice to a name, is go around and have everybody say their name and then how they pronounce the word V-I-T-A-D-A-O. Tim, would you say your name and then pronounce the word? That's kind of how I've done it. Yeah, [00:02:35] and I'm the longevity steward; we help figure out deal flow. Awesome. All right, Tyler, you're next. It is definitively "VitaDAO." I also help out with the longevity steward group, I helped start the longevity group, and I'm the chief scientific officer and co-founder at Molecule as well. And then Niklas, you're next on my screen. It's definitely "VitaDAO," and I'm also a member of the longevity working group and the science communication group, and I'm currently initiating LabDAO. Great. And then Vincent. Yeah, it's the same pronunciation, "VitaDAO," but I'm helping on the design side and also on special projects, like the one we ran recently. And then Laurence. Laurence Ion, "VitaDAO," and I [00:03:35] also steward the deal-flow group within the longevity working group. And I think we should all now say it as a hive mind: Paul. (Everyone says "Paul" at the same time.) Hi everyone, my name is Paul Kohlhaas. I would say "VitaDAO." I actually wonder what the demographics are of who says "Vita" versus "Veeta"; we should look into that, it's an interesting community metric. I'm the CEO and co-founder of Molecule and one of the co-authors of the VitaDAO whitepaper. I also work very deeply on the economic side and help finalize deal structures,
essentially the funding deals that we carry through into Molecule. And yeah, very excited to be here today. And maybe we can jump back to what Laurence suggested. Well, [00:04:35] also, the thing that's confusing to me is that I always assumed the "Vita" came from the word vitality, right? That's where the idea of calling it "VitaDAO" comes from, but I don't say "vee-tality," I say "vie-tality." In German it's actually "Vitalität." Yeah, so that's just my Anglocentrism; it's from the Latin, from the word for life. Yeah. Cool. So to really jump right in, to be very direct: can we walk through the mechanics of how everything actually works? I think listeners are probably familiar with the high-level abstract concept: there's a bunch of people, they have tokens, they vote on deals, you give researchers money to do work. But very, [00:05:35] very mechanically, how does the DAO work? Could you walk us through a core loop of what you do? Yeah. So the core goal of the DAO is really to try to democratize access to the decision-making, funding, and governance of longevity therapeutics. Mechanically, there are a few different things going on, and anyone should feel free to interrupt me or jump in. I would start from the base layer, which is this broad community of decentralized token holders, who ultimately provide governance functions to the community. And the community's goal is to deploy the funding it has raised into early-stage, preclinical proof-of-concept longevity therapeutics projects. These basically fall between two points where some tension exists when it comes to translating academic science.
So you have this robust early-stage basic research funding mechanism through things like NIH [00:06:35] grant funding, and that gets you to the point of being able to do very early-stage drug discovery. And there's also a downstream ecosystem, consisting of venture capital, company builders, and pharmaceutical companies, that does late-stage funding and incubation of ideas that are more well-vetted. But in between, there's this problem where a lot of innovation gets lost; it's known as the translational valley of death. What we try to do is identify, as a community, academics who are working on, or let's say have stumbled onto, a potentially promising drug, but aren't really at the point yet where they can create a startup company. What we want to do is, by working together as a community, provide them the funding, the resources, and in some cases even the incubation functions to be able to do a series of killer experiments, really de-risk the project, and then file intellectual property, which the DAO holds in exchange for the funding. This is mechanically enabled by a legal primitive we've been developing at Molecule called the IP-NFT [00:07:35] framework, which consists on one side of a legal contract, typically in the form of a sponsored research agreement between a funder and the party receiving the funding, the laboratory, and on the other side of a federated data storage layer. The way this works is that VitaDAO receives applications; some of these projects could, for example, be listed on Molecule's marketplace and have an IP-NFT created; VitaDAO sends funds via the system to the university, and in exchange it holds, in essence, a license to the IP that results from the project. And then within the community, we have domain experts.
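The two-sided IP-NFT primitive described here, a legal contract on one side and a data-storage pointer on the other, with the DAO holding the resulting license, can be sketched as a toy data structure. All field names and the `transfer` helper are illustrative assumptions for this sketch, not Molecule's actual schema or smart-contract interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IPNFT:
    """Toy model of an IP-NFT: it pairs a legal agreement (typically a
    sponsored research agreement between funder and lab) with a pointer
    to federated data storage, and records who holds the license."""
    agreement_uri: str  # legal side: the sponsored research agreement
    data_uri: str       # data side: federated research-data storage
    holder: str         # current license holder (e.g. the DAO)

def transfer(nft: IPNFT, new_holder: str) -> IPNFT:
    """Out-licensing or sale: the license moves to a new holder while the
    underlying agreement and data references stay fixed. Proceeds (not
    modeled here) would flow back into the DAO treasury."""
    return IPNFT(nft.agreement_uri, nft.data_uri, new_holder)
```

In the flow described above, the DAO would hold the `IPNFT` after funding a project, and a commercialization event would amount to a `transfer` plus a payment back into the treasury.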
For example, we have a longevity working group which consists of MDs, postdocs, PhDs, basically anyone who has deep domain experience in the longevity space. They work to evaluate projects, do diligence, and ultimately serve as a quality-control filter for the community, which consists of non-experts as well, maybe just people who are enthusiastic about longevity. Beyond that, there's also additional domain expertise in the [00:08:35] form of people who have worked at biotech VCs, for example, and people with entrepreneurial experience. Through this community you form a broad range of expertise that can coach the researcher, work with them, and really help the academic move the IP and the research project towards the stage where it can be commercialized. With VitaDAO stewarding this process, it has ownership of the IP, and if that research is out-licensed, co-developed, or sold on to another party, made productive, in essence, and VitaDAO is successful in commercializing those efforts and receives funds from the commercialization of the asset, that goes back into the treasury and is continuously deployed into longevity research. So the long-term goal is to create a self-sustaining, circular funding mechanism to fund longevity research over time. Within that there are a bunch of specific mechanics [00:09:35] we could rabbit-hole on. I think Vincent, yes? And on the very simple technical layer: very initially, we started off just having this idea and putting it out there, and then having a kind of genesis auction where everyone could contribute funds. Some people contributed 200 bucks and others contributed millions. And in exchange for that,
as an example, for every dollar they gave, they got one vote in the organization. So this initial group of people who came together to pool their resources to fund longevity research got votes in exchange. And with these votes, they can then do what Tyler described: on the proposals that are vetted through the longevity working group, they can vote on whether a project should get funding. That's of course the traditional [00:10:35] model of a DAO and of token-based governance and voting. What we did was a very easy mechanism that got things started, but the token can of course also be useful for other purposes and can incentivize people working on specific projects; researchers also get tokens, and so get governance rights in the organization in exchange for contributing good work. Niklas, did I see your hand? Yes. Maybe one thing to add here that takes a bit of a step back, addressing the question: why does all of this matter, why does the DAO framework add value at all? When you look at the way academic research currently works, the incentives for the scientists end the moment that something is published in a peer-reviewed journal; the system is optimized for peer-reviewed publication. On the other hand, on the translational side, when something is, you know, [00:11:35] turning into a medicine, investors are looking for a return on investment, and they're basically calculating a risk-adjusted net present value of the project. Now, the problem with a lot of biomedical research is that the science is done and the paper is published, but the risk-adjusted net present value of the project is still approaching zero, because some key experiments are still missing, or funding is needed to get those experiments off the ground.
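The mechanism described in this stretch of the conversation, a genesis auction granting one vote per dollar contributed, followed by token-weighted voting on vetted proposals, can be sketched as a toy model. The class and method names are my own illustrative assumptions; real DAO governance runs as smart contracts on Ethereum, not Python.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    description: str
    votes_for: float = 0.0
    votes_against: float = 0.0

class ToyDAO:
    """Token-weighted governance: contribute() mimics the genesis auction
    (one vote per dollar), vote() spends that weight on proposals."""

    def __init__(self):
        self.votes = {}        # member -> voting weight
        self.proposals = []

    def contribute(self, member: str, dollars: float) -> None:
        # Genesis auction: every dollar contributed grants one vote.
        self.votes[member] = self.votes.get(member, 0.0) + dollars

    def submit(self, description: str) -> int:
        # A proposal vetted by the working group enters a community vote.
        self.proposals.append(Proposal(description))
        return len(self.proposals) - 1

    def vote(self, member: str, proposal_id: int, support: bool) -> None:
        weight = self.votes.get(member, 0.0)
        p = self.proposals[proposal_id]
        if support:
            p.votes_for += weight
        else:
            p.votes_against += weight

    def passes(self, proposal_id: int) -> bool:
        p = self.proposals[proposal_id]
        return p.votes_for > p.votes_against
```

Note the property the speakers allude to: a member who contributed millions outweighs one who contributed 200 dollars, which is why the token is also distributed for contributed work, not only for capital.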
And actually this is where the DAO can come in, using new technologies to basically financialize the IP and make it more liquid. And maybe more specifically, the asset isn't created yet: a lot of research, you know, the NIH is not focused on therapies, on the creation of new therapies where value is actually created. They'll do clinical trials on existing therapies, but the real value inflection points are not reached through basic research. So that's the gap we hope to solve. Got it. So, [00:12:35] I think in my mind, the thing that's really interesting about VitaDAO, as opposed to other DAOs, is the interface with the world of atoms; that's a pretty unique and exciting thing. So there are a lot of mechanics there that I'm interested in digging into. One thing is, in order to give money to a researcher, at some point they need to turn it into dollars or euros in order to buy the equipment they need to do the research. So are they taking the VitaDAO token and then converting that into currency? How does that work? Yeah, I can speak to this, or Paul, if you want to speak to it. So, I can maybe kick it off. One of the things that's really important, and that we've been really focused on at Molecule, is ensuring that the process of working with researchers, which goes [00:13:35] well beyond just working with the researcher, right? You need to work with the university, with the tech transfer office, you need to negotiate a licensing agreement, and all of this has to happen in a way that is somewhat seamless, and doesn't require them to do all of their interactions with, let's say, this sort of ephemeral entity that exists on the Ethereum blockchain.
So we've basically created rails via Molecule for things like fiat forwarding, negotiations with the TTO, and a lot of the legal structures, to ensure that it's as smooth as possible. The VITA tokens themselves don't actually play into it: we can give those to researchers as an incentive, and to people who perform work for the community, but that is not what is given to researchers. When a proposal is passed within the community, we have a certain treasury, in ether for example, that we've raised over a period of time; that is liquidated and sold for USDC, and then that USDC travels via off-ramps that Molecule has created, to ensure that the university [00:14:35] can just receive fiat currency. So, a big part of this: DeFi in a lot of ways has an advantage in that it never has to really interact with real-world banking systems. This is a challenge in the DeSci space: we still have to interface with tech transfer offices, we still have to speak to general counsel at universities and make sure that people are comfortable working this sort of way. I would say this is probably one of the most significant challenges, and the reason that a lot of legal engineering and a lot of thinking went into how to create the base-layer infrastructure that allows us to actually operate in this space. So it's a challenge; it's something that we're always trying to iterate on. We imagine a future where universities do have wallets, where researchers do have wallets, but it's going to take some time for that future to be realized. And in the midterm, I think it's really important to show the world that DAOs can work effectively, especially these types of DAOs that have a core mission and vision of funding research; that they can do that productively, even given the constraints of the current system.
And [00:15:35] so, negotiating with tech transfer offices: I assume they need to sign a sort of analog legal agreement with an analog legal entity. Is that correct? And is Molecule that legal entity, or how does that work? Yeah, so to reiterate what Tyler said, there's actually nothing stopping, let's say, a university from directly engaging with a DAO. I think it's more that those systems don't exist yet, and there's not enough precedent to enable them. There's also a much larger question of, for example, to what extent a DAO could litigate to enforce a patent and actually provide this protection. So VitaDAO operates through a set of different agents; these are analog, real-world legal partners, and Molecule is one of those legal partners, in essence. So we can ensure that we are the licensing party, for example, with a tech transfer office, and then we enter into a sub-licensing agreement, for [00:16:35] example, with VitaDAO. And in the same sense as what Tyler just explained, we also then ensure that all of the payment flows and the agreements are compliant with existing systems, something that we've realized is really important to bridge this emerging web3 world with the real world, to really make it as seamless as possible, and not, for example, force the university to go through the process of opening a Coinbase account and figuring out what USDC actually is. But fundamentally, I like to use this analogy: if you can make an international wire transfer with an account number and a SWIFT number, then actually crypto is much easier than that by now; it's just a much less adopted system. Even from an accounting perspective, accounting for funding flows in this decentralized system is very simple.
The proof of funds is very easy to provide, because you can visually see where every single transaction came from; everything can be traced back. So the way that we've tried to design the flow of funding within [00:17:35] VitaDAO and within Molecule is to make it as seamless and interoperable with the real world of today as possible, and also to ensure that we have the highest degree of legal standards and legal integrity. So we work with specialized IP counsel and IP law firms across the world, in different jurisdictions, to really ensure that any IP the DAO funds, and that is encapsulated within these IP-NFT frameworks, is future-proof. That's something that became very apparent for us: when you work with IP, you can't really make mistakes in terms of how you protect the intellectual property. And you also have a responsibility to the therapeutics that are being developed, because if anything were to invalidate the IP, that could fundamentally influence whether a potential therapeutic can ever actually reach patients. Yeah. And so I think the one [00:18:35] question is, there has to be a lot of trust between the DAO itself and the organization or people doing the negotiation, holding the IP, and enforcing the IP. Because at that DAO-to-analog interface, my impression is that there's no enforceable legal contract, right? Is that correct? I'm just trying to wrap my head around the actual mechanics. It is an enforceable legal contract, actually. The initial agreement between, let's say, Molecule and the university is a typical, stock-standard sponsored research agreement that you would see between two parties, like a pharmaceutical company and a university, for example. These are the same agreements that the universities use.
In many cases, we plug into their pre-existing templates. Those typically have within them an assignment agreement, or an ability to sub-license, where the company or whomever is doing this initial licensing then has [00:19:35] the right to license exclusively the resulting intellectual property, or in some cases even the full rights of the agreement. Molecule then engages in a fully contractual, fully enforceable sub-licensing agreement with the DAO, typically in the context of Switzerland, where the company is based, via the election of this agent process. Now, I would say the weakest part of that, if you want to think about where the core, let's say, breaking points are within that process, would be the fact that a large amount of trust in the agents is required. But really what the agent is doing is actually putting themselves at risk; they're taking on legal liability, in some cases on behalf of the DAO. And so, if that agent, let's say, went back on something or wasn't able to honor their agreement, there is full legal recourse that [00:20:35] could be taken. But again, when you look at patent enforceability and the intellectual property landscape, most of these things, you know, you find out what works through litigation, and these things have not been litigated yet. There's not really precedent for enforcement here. But this is also what it takes to innovate in the intellectual property landscape. So there is a tension between these things, but yeah, to your original question, there is certainly a lot of trust involved. What I'm thinking is that when we get to this stuff, there are no first principles for it; you just sort of poke it and see what happens.
Yeah, maybe as an interesting note: there will be case studies before this becomes relevant to us, because in the space, some of the core protocols, like Uniswap and Curve, are actually governed by DAOs now, and they are now enforcing their IP in the courts. So even before it becomes necessary for us, there will be cases and case studies of very big organizations like Uniswap or [00:21:35] Curve enforcing their IP and going through the courts; there are cases coming up even this year or next year already. So it will be really interesting to see what the legal precedents are when a DAO enforces its IP through agents, basically. And I think there will be precedent before we have to enforce our IP. Yeah. One thing to add there, to reiterate what Vincent said as well: DAOs can very quickly become powerful economic agents, and enforcing, let's say, processes in our legal system is often a function of capital. So if VitaDAO, for example, were ever to get to a point where it had to enforce one of its IP cases, it would definitely have the financial backing to do so, and it can operate through agents to enforce the validity of its IP. And then the remaining processes, the relationships between agents, are really [00:22:35] subject to the same legal processes that we have today when two companies enter an agreement. If a biotech company enters a sponsored research agreement with a university, the trust arrangements that are set up there are not different, and the underlying legal contracts that we are using are also the same. And, back to Vincent's point, there are actually first cases where DAOs are enforcing their IP.
This is in the context of open-source software development, where a DAO, let's say, has developed a certain protocol, and that protocol is open source but running under a specific software license, and the DAO is now choosing to actively enforce its IP against someone who infringed that license. One additional aspect here, when we think through where trust and power are concentrated within the DAO, is to note that although there are these agents available for the DAO to interact with the real world, the capital is [00:23:35] concentrated within the network of token holders. On a technical level, there's a multi-signature wallet that holds all the funds, and that's controlled by members of the community, all in a token-gated way. And that network structure, that social network, which is basically the DAO, can be compared to a kind of association where you have people all across the world collaborating, all aligned by a token incentive to pursue one shared mission. And then the DAO, the network, enters agreements with various agents, so it's not really relying on any one particular agent to fulfill its mission. If there were a situation in which trust in, or the agreement with, one individual real-world agent broke down, then most of the capital still lies with the DAO, and the DAO would have the ability to engage in an agreement with a different entity. It's not like there's one entity or one vulnerability. When you think [00:24:35] through the contact zone between the digital DAO and the physical company, and speaking of agents: at what level does the entire membership of the DAO vote? Are they voting on every decision, like "we want this person as our lawyer, we want this person..."? Yeah.
Now, basically, to make it concrete: there's of course a core team and stewards who are actively working, for example on the longevity side, helping to source deal flow and doing all of these activities. And then it's mostly the bigger funding decisions, for example "should we fund this project with this amount of money," that the community votes on. It won't be "should we hire this designer"; that would be the autonomy of, for example, the design team, to hire a designer within budgets that are voted through. So it's not micromanaging in a deep sense; it's [00:25:35] more the key, overall, big decisions that the community votes on. So, early in the community's formation and in the DAO's formation, there was a governance framework that basically laid out a series of decisions as to how governance actually functions in the DAO. In VitaDAO there's this sort of three-tier governance system: moving from conversation that is quite stream-of-consciousness oriented, in Discord; to semi-formalized proposals for community input in a governance forum called Discourse; and then, ultimately, things that make it past that stage move on to a software platform for a token-based vote. Part of that governance framework that was initially created also invested a certain amount of decision-making power within working groups, and set thresholds on what those working groups were able to spend, what sort of budgets they had, and where they needed permission from the community to make decisions. So there might be, for decisions greater than $2,500, a requirement of a [00:26:35] soft vote, and for things more than $50,000, a token-based vote.
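The three-tier routing just described can be sketched as a simple function. The dollar thresholds come from the conversation; the function and the tier names are an illustrative assumption, not actual VitaDAO tooling.

```python
def decision_route(amount_usd: float) -> str:
    """Route a spending decision to the lightest governance tier that covers it."""
    if amount_usd <= 2_500:
        return "working group"  # within a working group's delegated budget
    if amount_usd <= 50_000:
        return "soft vote"      # semi-formalized community poll (e.g. on Discourse)
    return "token vote"         # full token-weighted vote on the voting platform


# decision_route(1_000)   -> "working group"
# decision_route(10_000)  -> "soft vote"
# decision_route(100_000) -> "token vote"
```

The design point is the one Tyler makes next: escalating only large decisions keeps day-to-day operations from requiring a laborious community-wide vote.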
And this is really important, because as you can imagine, early on, an organization can be super chaotic and really unproductive if every single decision that is made needs this sort of laborious, community-wide vote. But this is also a really interesting, iterative experiment that I think many DAOs are participating in at the moment: trying to figure out to what extent you can involve the community in a productive way in the day-to-day operations. What differentiates a token holder from a contributor, from a core team member, from a working group member? How do people move along that funnel and traverse those worlds in a way where you get the most productive organization? And this is something that is, I would say, being iterated on and improved constantly, based on the dialogue happening between the team and the community. And actually, on that note, I have one vaguely silly question, which is: why are all DAOs run on Discord? [00:27:35] This is my biggest complaint: I cannot pay attention to streaming walls of text. So how did that emerge? Has anybody done a DAO just run on, like, a forum, or by email, or something? Yeah, Discord is actually the biggest bag holder in most DAOs that operate. I'm just kidding. Actually, it's of course almost memetic: that's how a lot of crypto projects, even three or four years ago, began to organize. And I think it's ultimately just the tooling. There were Slack and Discord to coordinate with, and Discord was much better at enabling people to participate in a lot of different channels very easily. And it's a lot about things like file sharing, all of these things you need, which go beyond chat.
But ultimately, there are some leading DAOs that emerged just as a Telegram chat between [00:28:35] five friends. I know the leading art-collector DAO, PleasrDAO, started as just five friends on Telegram or something. So of course you can envision every possible tool and model; ultimately, I think it just became a pattern that a lot of projects organize like this. Yeah. And I think there's also this feedback loop that occurs: the more people organize via Discord, the more people create token integrations and token gating and things like Snapshot, and because of that, there's now a bunch of tooling, from an integration perspective, that makes it easier to operate in a community like that than it would be to have a Slack channel, for example. Yeah, at this point there is a serious lock-in effect: if you start your new DAO, the best choice is to go with Discord, because that's where all the folks who are already active are, plus you can leverage a lot of bots that allow you to token-gate access or [00:29:35] send notifications, and similar things. Another question: how did you all become the core team? Did you just show up? Tyler and Paul could probably start. I think maybe one interesting thing is that ultimately every journey is individual, but most people either saw it very early on or had a similar idea themselves. It's almost, I think, like a Schelling point: I literally tried to register the longevity DAO domain two years ago, before I had even met anyone who was in a DAO.
And so I think it's a similar story even for Tim. Ultimately, of course, there's some mechanism of discovering it, or hearing about the idea, or meeting people; for me, it was meeting Tyler and Paul because of Molecule. And for a lot of people, they just saw an interview, they saw [00:30:35] an article about it, jumped into the Discord, introduced themselves, and said, "yeah, I would love to help on the website, I would love to help on the deal flow," and then started helping. Through that mechanism, people bubbled up: they started writing an article, or doing a little work, and became more and more integrated, working themselves into it. And of course, a lot of people have never met each other in person, but this trust, I think, emerges and builds up from just engaging and helping progress the DAO as a whole. I think it's actually really interesting and exciting to see this global coordination emerging out of a shared purpose or mission, with a lot of people just stepping up. Initially we didn't have a token, we had $0, and there were people who spent weeks building a website pro bono without [00:31:35] expecting anything, really good researchers joining in, before we even had one dollar of funding to give towards research. So that's the inspiring part, I think, about a lot of DAOs: it just naturally emerged, everyone can do a bit of everything, no boundaries, but then it's self-selected, almost. I saw Nicholas raising his hand, so I was going to give him a chance to say something. Right? Yeah.
So there's a saying I read a couple of days ago, that some ideas occur in multiple different brains at the same time, and I think that's really what happened with VitaDAO. Vincent had been thinking about this for some time; Laurence had basically stopped developing mobile applications to really focus on aging research; Paul and Tyler had thought about this topic of a marketplace for ideas and intellectual property; Tim had been, I think, thinking about this idea of basically crowdfunding academic, or just fundamental, research as a community for some [00:32:35] time. And I had been sufficiently frustrated with the way academia currently works, and had also been thinking about, okay, can there be some kind of mechanism where a community bootstraps itself into existence and funds scientists and entrepreneurs within its community? Everybody pays a little, and then you can actually allocate a lot to the really good ideas. In some ways, I think we all had some kind of predecessor to this idea, and then when we each, at these individual time points, heard about it, it was a very intuitive decision to join. I think it's a certain amount of serendipity, a certain amount of Twitter network effects, a weird variety of things. You know, we started out with just a white paper and an idea, and then, through that, got in touch with a couple of different people, and then people just started showing up. The most interesting thing for me about the DAO experiment is that early on, we had this [00:33:35] sort of, okay, people want to be working group members; this is pre-DAO, not even, as Vincent was saying, a token yet, nothing, trying to figure out: how do you organize this community, how do you do something meaningful? We were trying to collect applications or something.
And then some people would apply, and we're like, how can we know who's going to be good? One person who's now the lead of the tech working group, this guy, Audi Sheridan, applied and was rejected, but then just made himself super valuable. He started doing things that no one else could do and became an invaluable member of the community. And then we sort of realized: why are we doing this application thing? People just show up; there are things that need to be done. Sometimes we don't even see what those things are; people have good ideas, they make proposals. All of a sudden, you know, it's not like a company where there's a hiring process; there's very little of that. Anyone can show up on the Discord tomorrow, identify some pain points, make a proposal, and just demonstrate to all these other people [00:34:35] that they have value to add to the community. And then there's a sort of process there, but that process is still very loose. So most people who are here, even on this call, showed up through something like what Nicholas and Vincent were saying: they had been thinking about this before, were attracted to this magnet that is now a Schelling point for crypto in longevity, and just had really great ideas about how to improve the community, and elevated it. And that's, for me, the magic. I mean, the DAO is roughly six months old now, and the community is around 3,500 people, with hundreds of researchers and dozens of people contributing pretty often, some people full time at this point. And a growth cycle that goes from a white paper and nothing to a bunch of money to fund R&D, a bunch of intellectual capital, and a pretty strong political force in that amount of time would be [00:35:35] unprecedented.
I think, for a company, especially something that's bootstrapping from a community, not raising money from VCs or anything like that, just having had an auction for a token. To me this is really interesting, and it sort of proves that, in terms of organizing intellectual capital and monetary capital, this is a really, really powerful mechanism. And so, sort of related to the company point: are you worried about the SEC? I mean, a huge amount of thought has gone into the legal structuring and legal engineering of the DAO. The way it basically works is that the intellectual property the DAO holds, in the form of these IP-NFTs, is not owned by the token holders. The token holders can sort of govern it by proxy through this governance token, and dividends are not paid out either. So the idea is, you know, it's not a nonprofit organization, and the DAO [00:36:35] as an organization is trying to make profit to further fund longevity research, but those profits don't flow to token holders as dividends. So there are several prongs of the Howey test that are essentially not met here, whether it's things like making profits from the efforts of others, or the fact that no one in the organization is directly profiting from the commercialization efforts the DAO is doing. But yeah, thinking about the interaction between the DAO and the SEC, or, you know, securities concerns, played a pretty big role in the design thinking around the entire organization and its structure. Because you can also go different routes, like some security-token route, but if you go those sorts of routes, you really end up just excluding huge numbers of people from participating.
So the goal here was: how do you maximize participation in a way that is still ultimately creating value, not necessarily for individual token holders, but for the field of longevity as a whole, to move the needle on research? Got it. [00:37:35] So maybe to add a couple of points here. The VITA token is fundamentally designed as a governance and utility token, and at its highest level, you can think of it as something that is actively used by all members to curate the IP and the projects that they want to fund. Something Tyler touched on earlier is that this contrasts strongly with typical, let's say, security-like assets, where you have a direct flow of dividends and a very clear expectation of profits. In this case, first of all, you need to actively do something to be a member of VitaDAO, and then actively help to curate the IP. And regarding the rights that come with the VITA token: there's no way you could say, "okay guys, I'm out, and I want to take my share of the IP that I helped create with me," which is a typical thing that you might have as a shareholder, or in a more limited-liability-partnership type setting. In this case, the DAO owns the IP, and there's also [00:38:35] no expectation of profits, because first of all, the goal here is to fund research and really open up that research, and then to try and make it accessible for the world, which could actually mean open-sourcing the research, or open-sourcing the IP, thus killing its commercial value. So say VitaDAO discovered something, and it deemed that discovery to be so important that it had to be open-sourced and made accessible, and thus it could never become a patented therapeutic down the line: token holders have full rights to do that.
Whereas if you had a typical setting where you had a company owned by shareholders, and those shareholders had a very clear expectation of profits, that would never fly in most normal companies. So because there is no direct expectation of any potential returns, there's not even the potential for a return per se; and then there's the full governance option to essentially not commercialize anything. Yeah, that's really cool. And actually, sort of not quite related: I would say that therapeutics are [00:39:35] a very special case, in the sense of being very IP-based. There's very much a one-to-one correlation between IP and product, and those products can be very lucrative; that's sort of why therapeutics work as an industry. Do you think the VitaDAO approach could work for research and development outside of the therapeutic world? Maybe let me rephrase the question: the idea that you can create incredibly valuable IP on its own is fairly unique to the world of therapeutics, and in many other technological domains the value really comes from building the company around some IP; the IP itself is [00:40:35] not that important. So, who wants to go for it? Go for it, Tyler. I was just going to say quickly: I think absolutely, because a DAO doesn't need to be IP-centric. For example, VitaDAO could end up holding data that was produced by something, and that data could have intrinsic value. Similarly, VitaDAO could try to get involved in manufacturing, or create products. There are many different design flavors for these DAOs.
And I think the governance framework around this, and, let's say, the organizational capacity and the coordination capacity, can be applied to many different problems in many different industries. I think even the intellectual-property model holds true well beyond therapeutics. With therapeutics, you're right, they're very, very expensive to develop, which is why you tend to get this enforceable monopoly, to try and incentivize people to develop them. But in textiles, or engineering, or [00:41:35] any field where IP plays a role, you could apply almost a one-to-one version of this model. Beyond that, there are many different flavors of assets a DAO could hold. The ones I'm probably most excited by are things like data, which I think can be really, really powerful, or software, which could be similarly powerful, and which a lot of DAOs are already doing. Maybe one point to add: in addition to activities like funding IP directly and having a self-sustaining funding cycle there, we also, for example, had efforts that are completely philanthropic. We used our community to, for example, put together a donation round on longevity, exploring quadratic donations, basically. I liked this idea even before [00:42:35] VitaDAO existed, and I was like, okay, now there are enough people and enough attention to do this. The DAO itself donated $65,000.
But then, in total, we collectively donated about $400,000, and we helped curate the projects, which are all purely philanthropic: open source projects, even NGOs doing different kinds of work. We basically helped get our community together to donate to these different projects. For me it's one example of why this is really powerful, because you have this shining point of crypto people interested in funding longevity, and they're not just interested in funding IP-NFTs in a sustainable loop, but also in exploring other funding experiments. Another one we were discussing is a longevity prize, or grants and fellowships for young people entering the field. All of that is actually advancing the whole cause and the whole community [00:43:35] alongside the core activity of funding IP, because it grows our community and the whole field. So I think that's an interesting point: we're not limited to funding IP, even though it's one of the core mechanisms we're engaging in. Yeah, and I would add that there's also value in the community itself. Imagine Bitcoin, right? Anyone can fork it. Instagram is a simple app; anyone could have made a copy, but most of the value is in the network that gets built. So here we have a team, a stellar team, and the DAO itself is ultimately an R&D organization. It was born in a genesis event; it's a smart contract. So it's unique in that way. Of course, someone had to interact with the smart contract, and it can be someone anonymous, but it issued 10% of its tokens, which by the way number [00:44:35] 64 million, which is about the lifespan in minutes of the longest-lived person, Jeanne Calment. And that's kind of special, right?
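The "lifespan in minutes" figure is easy to sanity-check. Using Jeanne Calment's documented dates, the count does land at roughly 64 million; the exact token supply may round this differently:

```python
from datetime import date

# Jeanne Calment's documented dates: born 21 Feb 1875, died 4 Aug 1997.
born = date(1875, 2, 21)
died = date(1997, 8, 4)

# Days lived, converted to minutes.
minutes = (died - born).days * 24 * 60
print(minutes)  # 64402560, i.e. roughly 64 million minutes
```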
We can only extend that cap if someone lives longer than that. But anyone could buy those tokens in a fair auction, including us, including random people. And then there was a vote to empower a core team like us. Yes, most of us here got involved before, but the cool part is that anyone can start showing up and contributing a lot of value, and ultimately the community can decide to make them a core contributor, or a steward of some other effort, even something we haven't thought about. There's always room; it's permissionless. That's something special, definitely a meta experiment. And it's an experiment in organizing people towards a common goal in a different way, to run scientific experiments and figure out how to advance therapeutics. We need to extend our healthy lifespan. I would actually be [00:45:35] curious to ask you a question, Ben: how does something like this fit into your thinking on new institutions for funding science? Because, as you mentioned, it could also be a model for different areas, and we're potentially exploring that. For me, if there's a big enough community that's interested in funding something, say a very public example like climate change, or something exciting like space, there would probably at some point be a community that would pool resources to fund those research areas. I'd be curious how it lands in the frameworks you've been outlining. Yeah, frankly, one of the reasons I wanted to have this conversation was to form those thoughts.
So I [00:46:35] will be able to answer that much better after thinking it through, right? Some of the tricky pieces: outside the domain of longevity, well, longevity is very exciting to a lot of people with money, both in the crypto community and outside of it. There are lots of people who are excited about space, but in my experience space geeks tend not to be that wealthy. So you can have a very excited community, but the real question is how much those people are willing to put their money where their excitement is. That's a big question. Another question for me is coordination around research. Another great thing about therapeutics is that there's this nice one-to-one-to-one structure, where you can have one [00:47:35] lab develop one therapeutic, which corresponds to one piece of IP, which corresponds to one product. Obviously it doesn't always work that way, but it's a pretty strong paradigm. Whereas with a lot of other technology, that attribution chain is very hard to trace; it involves lots of different groups contributing different things, and you need someone coordinating them. So, this is all to say: I think there's very much something here. That's why I'm interested in it, why I want a lot more people to learn about it, and why we're talking about this. But it needs a lot of thought. I don't think you could literally take what you all have done and copy-paste it for other domains. But that isn't to say you couldn't modify it and do something, because I think it's actually really, really cool. Yeah.
Maybe I can speak to that [00:48:35] quickly. I think DAOs will be highly use-case specific. It's actually been an interesting ride; I started writing about DAOs in mid-2016. There was an article I wrote on what would happen if we combined, let's say, conscious AI systems with a DAO, essentially having a DAO operated by autonomous agents. And then The DAO launched, which was one of the first DAOs on Ethereum. It was a big, complex, autonomous setup where the DAO was almost entirely controlled by its token holders. But that also enabled an attack vector that allowed someone to hack the core contracts. The DAO space then went into a long period of considering whether something like this should ever be attempted again, and people began to very cautiously build out these systems. There are a couple of projects that, over really five years already, have tried to build generalizable DAO frameworks, and many of those projects have [00:49:35] failed at actually providing frameworks that got to mass adoption. When you start building a DAO, it's like saying, I want to build a company: there are many ways to build companies, and the difficult thing is not incorporating in Delaware or getting the bank account set up. That's what people sometimes think today when they set up a DAO: oh, okay, it's a multisig, it's a Discord. But you obviously need the entire ecosystem you're building. You need to think about: what is the value creation model for this DAO? What's its unique value proposition? Based on that value proposition, what type of community do I want to build?
What type of culture do I need to implement that value proposition, and that will attract the community to help me? We've been very conscious, for example, about the type of open community we wanted to build. And this goes into all sorts of follow-on questions, like: where do you actually get funding from to do what you do? Based on where that funding comes from, the culture of the community will differ. For example, a DAO that's funded by [00:50:35] several large VCs will be very different, from a cultural perspective and in its goals, than a DAO that's funded by an open auction, where the individual members are much more engaged because they put their own funding in, so they want a say in how it's governed and what it gets used for. It's going to be very interesting to see in the coming years whether generalizable frameworks emerge where you just press a button and spin up a DAO. You can already do that; there are systems that do it, but I keep being surprised that they're actually not being very actively used. What I think is really important, for example, is to build base infrastructure that can serve whole industries. This is something we've been very focused on at Molecule: drug development isn't that different whether you're developing longevity therapeutics or cancer therapeutics; the base infrastructure, and how you interact with the real world, for example through IP, is the same. We realized that decentralized drug development through DAOs could only really work once there was a way to own IP on chain. But now I [00:51:35] think a community like VitaDAO will be very different from, say, a DAO focused on rare diseases, where you're working with several patient advocacy groups.
And there isn't, unfortunately, huge general excitement about diseases that only affect small patient populations, whereas aging affects all of us. The DAO that we're currently building out at Molecule, for example, is called PsyDAO, which will be focused on exploring and essentially democratizing access to psychedelics and mental health, again because we feel this is a topic with very broad appeal, where you can very effectively scale culture and also apply some of the same frameworks. Yeah, maybe one other thing I think is important to highlight in terms of how we think about this. The reason DAOs are interesting, even for me, the reason crypto is interesting, is that it's effectively a sandbox environment for experiments that create behavioral outcomes. Token engineering and token economics are [00:52:35] simply a way to motivate certain outcomes and certain behaviors in real time: building in production, testing in production. In academia, if I said I want to change drug development, I want to change the way pharmaceutical companies behave, I could probably write a paper in Nature Reviews Drug Discovery and maybe kick off a policy discussion that ultimately isn't really going to move the needle, at least not on a tangible timeline. But what's interesting about DAOs is that you can basically say: I have this idea, here are the stakeholders I want to incentivize to behave a certain way and achieve a certain outcome, and you can just deploy this with software and start doing it. It's really crazy. One of the most interesting comments Vitalik made when we hosted him, the comment that resonated, was that he felt the biggest gift to humanity that crypto provided was this sandbox environment for experiments.
And I think, [00:53:35] as a scientist, this is one of the things that really strongly resonates: move beyond the theoretical, go directly to the applied, and start testing things in production, seeing what works. I don't think we can say confidently that biotech DAOs are better than biotech companies at achieving goals in drug development. But in a couple of years we'll have a bunch of data points suggesting the things DAOs are really good at, at least with this design implementation, and we'll know what they aren't good at. And because these organizations are so flexible, and because they operate through this very iterative governance model, you have the ability to always be tweaking and always be improving. That, for me, is what's really exciting. It's this crazy experiment, pulling in people from all over the world, independent of geography. If there was another toolkit to do it that wasn't crypto, we probably would have built it using that. But really, the point here is that I haven't seen a better way to [00:54:35] scale incentives to a large group of people than web3 and crypto. And it comes back to the point made right before, that ultimately it's about a community. Even with PsyDAO, there's no token, there's nothing; we literally just set up a Telegram chat, invited some interesting people, they self-selected their way in, and now it's like 500 people. We hosted meetups, and ideas are emerging out of all these people. Ultimately it doesn't really matter how it's implemented, or whether there's a token; what matters is that the community shares the values, the culture, and a shared mission.
So I think that's really, for me, the takeaway. Looking at the most successful projects in crypto, they've probably been Bitcoin and Ethereum, and I think a big part of their success was the community and culture persevering through thick and thin, building and improving the protocol [00:55:35] together, building on it, being incentivized to build on it. That's a major takeaway: it's all about communities and shared missions. One thing I'm curious about is how tech transfer offices have responded to this. I assume there have been many conversations with them. Cards on the table: I don't have the highest opinion of the innovativeness of tech transfer offices, so I'm wondering how those interactions have gone. They are surprisingly technophobic organizations for groups that are supposed to be [00:56:35] focused on innovation, supposed to be helping professors and researchers bring innovation into the real world. But on the whole, not necessarily through any fault of their own, tech transfer is largely a failed business model. Institutionally it's not operated well; it's a couple of general counsels sitting in an office who are not domain experts in any one field and typically have grossly inflated ideas of what innovation is worth. It's challenging. That said, we've been super lucky to engage some amazing people at tech transfer offices, and this is self-selecting, right? If you're interacting with us, you're probably among the most forward-thinking tech transfer people. So keep a list of them, right? So that you can get some kind of feedback loop where you can say, okay, these are the best tech transfer offices to work with.
And then people start working with them, and all the other tech transfer offices start seeing it. Totally. This is what happens: [00:57:35] the first one does it, and then they've de-risked it for the others. That's what we see happening with every subsequent one that goes for it; it's easier to have the next conversation. We also learn more about how to work with them and how to structure these deals. The main thing here is that tech transfer is largely not profitable. There are very, very few tech transfer offices in the world that are cash-flow positive. Their business model is in danger, their existence is in danger, and they desperately need new ways of operating. Outside of Harvard, MIT, Stanford, Oxford, Cambridge, there are not that many that are really doing big things. And what we see is that there are people, even in smaller tech transfer offices around the world, who recognize this and are actually really, really hungry for a different way of doing things. Those are the people we hope to work with. But you're right, it's not the easiest stakeholder group to engage. Yeah, sorry, go ahead. Having said that, though, [00:58:35] this is also, for example, a core role we see for Molecule: working with tech transfer can be standardized. It doesn't matter whether you're out-licensing a longevity asset or something else. What we've actually done is develop processes that are as close as possible to what they're used to today, which makes life massively easier. The kind of thing to avoid is creating the wrong impression. Even with VitaDAO, in terms of negotiating contracts and next steps around the IP, it's important for them to realize that there aren't a thousand people in a Discord who will then contact the university, try to get involved in the research, or make decisions.
It's also important to convey that these funds are not coming from anonymous accounts in some weird ether that is the cryptocurrency space, and to [00:59:35] give those stakeholders the assurance that we use the same processes they're used to, that we've developed sophisticated legal standards, and that all of this can run through the existing banking system once it's bridged into it. Once you provide those assurances, it's surprisingly easy to work with them, in some cases, not all of them. As an organization, I think we can be much easier for them to work with than, say, a venture capital firm that wants to out-license the IP, sets up a company, and then engages in three-to-six-month-long negotiations. The tech transfer offices we've engaged have been pleasantly surprised by how quick and easy it can actually be to work with a DAO or a decent-sized community, if the right structures and processes are in place. And like one out of every twenty is just some person who's like, oh my god, this is so cool, I also love playing around in DeFi. It happens rarely, but when it happens, you're like, okay, this is going to work. [01:00:35] We also work with, sorry, go ahead. We also work with companies that have themselves negotiated with the TTOs, and they can sub-license a stake. They can also work with Molecule, and the TTO doesn't even need to know about VitaDAO initially. Molecule can have a sponsored research agreement with that startup or with the TTO. TTOs might prefer to work directly with a company, right? Or even a revenue share: we can have royalty agreements as an NFT as well, with a company or a startup.
And if the deals are too slow, we can work directly with startups initially, and as things open up and this gets more popular, they'll see there's a better place to go. So you have VitaDAO as a bidder, and maybe other people in the crypto community can become bidders [01:01:35] for these IP-NFTs. It can be a much better way to decide, as a market, what the value of these assets is. If you have an asset that this more liquid market would value higher, why would you go with the traditional players when you can get much better terms? I think they'll be convinced once they see that. Yeah. And just today we funded a new project, and the researcher said he was pleasantly surprised by how quickly it went from application to funding. I think it was within four weeks, which is not common from application to funding. A lot of researchers are also really excited to have a community behind them that's excited to follow their progress, to publish the process, to do interviews and videos about their research, and to connect with the other research we're funding. That's [01:02:35] also a huge value proposition for researchers. And speaking of applications, here's a question from Twitter: all of your proposals seem to have passed with resounding consensus?
Not necessarily, no. I think there were one or two that were almost 50-50. On some there was a resounding, almost 100% vote in favor; on two or three, only about 60% voted in favor. What I found interesting, the pattern I observed, is that on the ones people voted against, it was mostly working-group members voting against, while the community was often voting in favor. My feeling is that the community wants to fund a lot of things and thinks everything that gets listed for funding should be funded. But among the people who [01:03:35] looked at it and helped diligence it, some might be really excited and some might vote against it. You can even see this in the voting: you can see the names of the people who voted, and if the person leading the longevity working group voted against something, that's of course a signal for others to vote against it too. There are also evaluation write-ups, so imagine four people looking at a proposal: some are really excited by it, and some say we shouldn't fund it. That gets reflected in the evaluations and in the voting, because the people excited by it vote yes and the others vote no, and that in turn flows into the votes of [01:04:35] ordinary voters. There might also be a selection bias, because we only put things that make sense up for a vote. We didn't put up any crazy thing, like head transplant research, or, okay, maybe that might be exciting to some, but some crazy thing the community would reject, like research into how to make Lamborghinis live longer. That would obviously be voted down by the community, right? Yeah. So there are selection criteria: it has to fulfill certain quality criteria, and it needs to have, potentially, at some point, some value that could be captured.
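The dynamic described here, token-weighted votes with visible voter identities, where working-group votes act as a signal that can diverge from community sentiment, can be sketched roughly as follows. The voter names, balances, and votes are invented for illustration; a real tally would read them from the chain:

```python
# Hypothetical sketch of a token-weighted DAO proposal tally.
# Voters, balances, and votes are illustrative only.
votes = [
    # (voter, token_balance, in_working_group, choice)
    ("alice", 50_000, True,  "no"),   # working-group evaluator
    ("bob",   20_000, True,  "yes"),
    ("carol", 15_000, False, "yes"),  # community members
    ("dave",  10_000, False, "yes"),
    ("erin",   5_000, False, "yes"),
]

def yes_share(vs):
    """Fraction of voting token weight cast in favor."""
    yes = sum(bal for _, bal, _, c in vs if c == "yes")
    total = sum(bal for _, bal, _, c in vs)
    return yes / total

# Compare the overall result with the working-group signal alone.
overall = yes_share(votes)
wg_only = yes_share([v for v in votes if v[2]])
print(f"overall: {overall:.0%} yes, working group: {wg_only:.0%} yes")
```

With these made-up numbers, the community pushes the proposal to a split result even though the working group leans against it, which mirrors the pattern described above.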
And I think a lot of the things that someone in the community was excited by got put on chain, and in the future there will actually be many more proposals that are close calls. So just to clarify: funding proposals go through a working group before [01:05:35] going up to a community vote? Right. That's basically where the main experts, people who are active in this research, also investors in the field, deep domain experts, look at the criteria. On our main page we also have the requirements and an FAQ for applications, and some applications didn't fit the criteria, so we couldn't put them forward as proposals. But when the science makes sense and people are excited by it, it gets put up as a proposal. I think what will happen over time, and there are a lot of proposals being worked on, almost in the funnel, is that you'll see a lot more diversity in the proposals as well. I personally think it would be cool to get crazier ideas in there, because something we've realized is that only once it goes on chain does the [01:06:35] final debate happen about whether we're doing it or not. There are also lots of almost housekeeping proposals, but it's important that they're actually put up to all token holders to sign off on, because at the end of the day token holders really are the executive of the organization. So it's important even when no one would disagree: if we say, hey, we're going to change our governance process because we realized it's better to do 1, 2, 3, everyone agrees with that.
There's no big disagreement on this kind of housekeeping, but the way the governance framework is designed, we have to put everything on chain, and then it gets voted through. We've also realized, for example, and this is the thing: VitaDAO is not just this community or this funding vehicle, it's also a whole set of smart contracts that the DAO actually operates through. You need to put things to a vote and then formally execute them through those smart contracts, which I think is also really important for trust in the system. Something we've also realized, though, is that you learn as you go: [01:07:35] you launch a product and an architecture, and then you refine it as you go along. We've realized, for example, that it can be cumbersome to constantly have people pay gas. Depending on the congestion of the Ethereum network and gas prices, a vote can cost anywhere between $10 and $20, which can be a lot. Let's say you hold $1,000 worth of tokens and you're a smaller but committed community member: that can be a high cost to actually interact and participate in the system. For larger holders it's less of an issue, but that wouldn't serve the democratic vision of the organization. So something we're doing, for example, is moving to a gasless voting system, where essentially you just check the balance that everyone has, and people vote with their balance, as opposed to actually moving tokens on chain.
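The gasless scheme described, weighting off-chain votes by token balances captured at a snapshot block instead of requiring on-chain transactions, might look roughly like this. The balances and block number are invented, and a real system (such as Snapshot-style voting) would read balances from the token contract and verify a signature on each vote, which is omitted here:

```python
# Sketch of snapshot-weighted, off-chain ("gasless") voting.
# Balances and block number are illustrative placeholders.
snapshot_block = 14_000_000
balances_at_snapshot = {"alice": 1_000, "bob": 250, "carol": 4_000}

# Votes collected off-chain, so casting one costs no gas.
off_chain_votes = {"alice": "yes", "bob": "yes", "carol": "no"}

def tally(balances, votes):
    weight = {"yes": 0, "no": 0}
    for voter, choice in votes.items():
        # Weight each vote by the voter's balance at the snapshot
        # block, so transfers made after the snapshot cannot
        # change the outcome.
        weight[choice] += balances.get(voter, 0)
    return weight

result = tally(balances_at_snapshot, off_chain_votes)
print(result)  # {'yes': 1250, 'no': 4000}
```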
So those are continuous improvements we're doing, and it would actually mean that even more proposals might go live, though it could also mean there's [01:08:35] more discussion, or smaller discussions, around each of those proposals. Yeah. In theory you could just do this with a spreadsheet; that's the meme with crypto, that a blockchain is just a spreadsheet in its simplest form. But I think here's the key thing: web2, our old world of banking, is a spreadsheet that's controlled by the bank or the state. If you're a researcher and your state doesn't like your research, they can just block the spreadsheet with your money in it. The power here is that it's trustless and not owned by your state, your university, or your bank; it's [01:09:35] permissionless and trustless. On the funding side you could also do it with a spreadsheet, but then you need to trust someone: someone owns the spreadsheet, or has access rights to the Google spreadsheet. That's ultimately where it breaks down. You couldn't really do DAOs in a web2 way. And I think that's interesting. Yeah, cool. Well, I think we all need to jump. Are there any last thoughts you want to leave in people's heads that we didn't touch? Maybe one key one: everyone should take a look at the website, read it, comment, and feel free to jump into the Discord and introduce yourself if you want to join, because we're really always looking for more researchers and more enthusiasts to join us.
And since we were kind of the first ones to pull it off, with some funding and some first projects, I think there will be more and more interesting research projects and research DAOs, this whole space of decentralized science projects, emerging. We can put some interesting resources in the show notes [01:10:35] beyond our own DAOs, lists of decentralized science efforts in general that we're excited by, like a project funded by the Coinbase founder for decentralized publishing, and it doesn't cease to surprise us what other teams are working on, a bunch of different projects we can leave in the show notes for those who want to rabbit-hole into decentralized science, because I think it's a really interesting new field emerging. Maybe as a last thing from my side as well: I think we're beginning to see that all of this is possible, and if you dream big enough, we can actually build these things out and make them happen. If any of your listeners have a cool idea about trying this approach in another therapeutic area they're passionate about, or even just have ideas about accelerating systems that could be built to support this, we're already seeing lots of other builders come into this ecosystem, and we're really excited to build [01:11:35] together. The great thing about web3 is that it's highly composable and interoperable, really in the way of open APIs, similar to how open source software is open and interoperable. So we're keen, if any of your listeners want to get involved in VitaDAO, have ideas about building other DAOs, or maybe just want to explore the IP-NFT framework.
Also: if you have a cool research project that you want to get funded, you can already get it funded through an NFT; that entire infrastructure is built and exists. Something I'm really looking forward to is opening up scientific funding and making it much more democratic and accessible for anyone to come in and fund this. It doesn't have to be a DAO. If you want to finance a specific project, or maybe you and a couple of friends start a small group that identifies early-stage assets in universities and essentially brings them on chain, now you can own them, you can transact in [01:12:35] them, and at a later stage you could decide to set up a DAO. Some of the founders approaching us say, I want to start a DAO because I have this research project, and it's like, wait, do you really need a DAO for that? If you want to create an ecosystem, it's really good to center the DAO around the use case, but something important we've realized is that not everything needs a DAO. And to put out a call-out: we give out referral fees if you refer research projects to us, whether academic or non-academic researchers, a team, or even a startup we could do a deal with. If we end up funding it, we give out a percentage for bringing it in. So we're excited to find all the unheard-of and undervalued research into aging, of [01:13:35] course, and longevity, from anywhere in the world. Excellent. Well, I really appreciate all of you taking the time, more than the time, and keep up the good work.
