Speaker 1
is exactly the same place where they want to go; sometimes we just disagree on how we're going to get there, right? And I did not ask Mark or Kate whether or not they would be cool with me laying this out, but I actually think they would, because I already checked with them for the blog post and the episode that we did, and we've collaborated on a number of documents before. So let me try to represent what happened in that conversation. We were sitting down to discuss whether or not there was a link between the technical systems that we were developing and the philosophical beliefs that we had. Mark said, "Well, let me try to lay out what I think," and started laying out, as a first approximation, rule utilitarianism. I thought: hold up, this is pretty interesting. So we ended up drilling down, and also asked, well, why is rule utilitarianism unsatisfactory? I don't remember exactly what it was, but one of the things that came up was this: what Mark, Kate and I all do is work on a form of security called object capability security. An object capability system is basically a whole system where you're trying to design a way for agents to cooperate in a world that may be hostile, right? Networks are hostile; you and I are on the Internet, and there's a lot of bad out there. And a lot of the traditional responses people come up with amount to "don't trust anybody": build a system of zero trust, and then you're in this really paranoid world.
I'm not interested in that world at all. I'm interested in high amounts of collaboration, and so are Kate and Mark. So the types of systems we're building assume mutual suspicion: you start off with the assumption that the network is hostile, that the world can be hostile, and then you let people build consent and trust. There's a document I was working on, kind of a successor to some of the work I did on ActivityPub, and it used this phrase "networks of consent." Consent is a big word there: how do you have a system hold up consent as much as it can, as much as a system can be designed to hold it up? You can't do it all the way, right, because there are portions of consent that actually have to be worked out between two human beings deciding what they want to do together, or maybe two non-human beings. But trust is something that's built, and consent is something that's built. So the types of systems we work on basically have distributed objects with these things called capabilities, which you can think of as consent mechanisms. Just like any reasonable system of consent, they are explicitly granted: you hand a capability to another entity in the system, and it has access to that thing. But they're also revocable, so we have patterns by which you can decide, "it's time for me to take away consent, I don't like where this has gone," et cetera. It's also possible to have accountability, and capabilities are composable. There's also no assumption of a global entity watching over the whole thing, and no concept of global identity in the sense of the identity of the person. There are some identity-like mechanisms in the system, but they aren't
tied to what you can do. There's no default check in the system that says, "Is it Matt doing this? Well, I trust Matt," because the kinds of relationships that you and I have are going to be more fine-grained than that. So that's partly where this mechanisms-of-consent stuff ended up coming out of. In a very real sense it's connected to the technology we're doing with object capability security, which, by the way, is what Spritely is doing: taking the lessons of object capability security, and this mountain of other computer science research that I read into, which has been sitting there on the shelf. It's all stuff that anybody could pick up and use, but it's been sitting on the shelves. It's not the hot thing, because everybody's trying to solve things from the position of the big global players, and even most of the decentralized players are still trying to copy the assumptions of the global players: let's just make a decentralized YouTube, let's just make a decentralized Twitter, et cetera. But that didn't really give me the power to build the kinds of systems I wanted. So instead I was searching for what we can do that throws all those assumptions away, that starts with the vision we had of the Internet as this completely decentralized thing from the beginning, and then builds up assumptions from there, as opposed to backporting the bad assumptions that came in when we started from the assumption of centralization.
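To make the ideas above concrete, here is a minimal sketch in Python. It assumes nothing about Spritely's actual API; every name in it is illustrative only. It shows two of the properties described: a capability is just a reference you explicitly hand out, with no global "who is this?" identity check anywhere, and consent is revocable via a forwarding wrapper (the classic "caretaker" pattern from the object-capability literature).

```python
class Revoked(Exception):
    """Raised when a revoked capability is used."""
    pass

def make_diary():
    # Authority is possession of the reference: whoever holds `read`
    # can read. There is no ACL asking "is it Matt doing this?".
    contents = "diary contents"
    def read():
        return contents
    return read

def make_revocable(target):
    # Caretaker pattern: a forwarder you grant to others, plus a
    # revoker you keep, so consent can later be withdrawn.
    revoked = False
    def forwarder(*args, **kwargs):
        if revoked:
            raise Revoked("consent withdrawn")
        return target(*args, **kwargs)
    def revoker():
        nonlocal revoked
        revoked = True
    return forwarder, revoker

# Grant: hand the wrapped reference to another party.
read = make_diary()
read_for_matt, revoke_matt = make_revocable(read)
assert read_for_matt() == "diary contents"

# Revoke: decide "I don't like where this has gone" and withdraw it.
revoke_matt()
try:
    read_for_matt()
except Revoked:
    pass  # Matt's access is gone; the diary itself is unaffected
assert read() == "diary contents"
```

Because a forwarder is itself an ordinary capability, caretakers compose: you can wrap a forwarder in another caretaker, or in an attenuating wrapper that narrows what the holder can do, which is one way the fine-grained, non-identity-based relationships described above can be expressed.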