Speaker 2
Otherwise, it wouldn't be part of that program. And all these things can be modeled. Before we get too far from it, you said something really important that I'd be interested in hearing more about. You mentioned context in the language, and that's something I've seen in practice, along with the atomicity of these statements. That was one of the big things we got into when working in avionics: there is a pattern to how you write your requirements. Each requirement has essentially one "shall," so you're only stating one requirement in each labeled requirement. You're not saying "and this, and this, and this," because those should be individual atomic statements. And there's definitely a pattern to the grammar of how you state these requirements. I've noticed that different companies and different industries have different patterns. They'll usually be consistent within the company, or sometimes even vaguely consistent across the industry. So one company that produces avionics may have a certain pattern that they use, and you definitely aim for that, because you want your requirements to not look like they were written by any one person. That was actually one of our goals, both when writing code and writing requirements: when you look at a requirement or a chunk of code, you should not be able to tell who wrote it. People in marketing or advertising talk about this as the voice of the company: it should look consistent, as if one entity wrote all of it. And I wonder how you handle that between different industries. Because I imagine, for example, the healthcare industry, the automotive industry, and the avionics industry each have different patterns or contexts or grammars. They use them consistently, but they're not equivalent to each other's grammars or specifications.
Speaker 1
Okay. Two responses to that, a short one and a long one. The short one is that in the Constellation Query Language, the sentence structures that are possible are more open. It will accept language, but it doesn't actually understand English, right? It understands conjunctions like "and," and you can't use the word "and" inside a noun. For example, if you want the noun "command and control" in a military system, you can't use the word "and" in there, because "and" is a conjunction, and that breaks clauses apart. "Command and control" is one unit. So when I see "and," I say: that's the end of the previous clause and the start of a new clause, and I'll parse those two clauses separately, or match those two clauses separately. So in that respect, it's not natural language, because we can figure that out when we listen to someone speaking, but the computer can't. So I've got some restrictions, some limitations, on what natural language will be accepted. Those rules apply in CQL. But those are not the rules of, say, insurance. If I'm writing a model for the insurance industry, those are the rules of writing statements in CQL, not the rules of insurance. As far as insurance is concerned, you can write whatever you like in whatever language you like, and as long as you use it consistently, it'll match up. And there are certain phrases that are key. For example: each vehicle is identified by its VIN. "Is identified by" is a key phrase in CQL. If I say "each blah is identified by," that means I'm now asserting the existence of a new type of object called blah, and here is the identification pattern for one of those things. So there are certain phrases that are mandatory. Okay, that's the short answer. The longer answer is that I've been interacting for over a decade now with the European Space Agency, and they obviously have extensive standards, because their procurement contracts spread across dozens of top-level companies and hundreds of smaller ones. There's a big tree of back-to-back contracts, which are defined in interface control documents. And these interface control documents have to be written in a standardized language, so that people know what they're providing; when the eventual module is delivered, whether it be software or hardware, it's got to be able to be bolted in and just work. So these interface control documents define the interfaces between hardware or software, and there's a whole multi-hundred-page document describing the language to be used in writing interface control document specifications. Well, in the fact-based modeling world, we say: if we can have such a regularized set of rules, and given that these things are only constructing a model of what the delivered system will look like, why can't we use a modeling language to do that? Then we can verify it and generate from it in many different ways, including generating interface control documents. And a few years ago, my contact there actually achieved exactly that: defining the content of messages to be sent across computerized information exchange interfaces, whether they be radio or CAN bus or whatever. Any message to be sent across any interface boundary is defined according to an ICD, which is subordinate to an overriding ICD. In other words, there is a standard for ICDs. That overriding standard for ICDs is a 650-page document of which over 500 pages were machine-generated from a model.
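To make the conjunction rule concrete, here is a toy sketch in Python of a naive clause splitter in the spirit of what's described. This is illustrative only, not the actual CQL parser, and the example statements are made up:

```python
# Illustrative sketch only: a naive clause splitter in the spirit of the
# conjunction rule described above. This is NOT the actual CQL parser.

def split_clauses(statement: str) -> list[str]:
    """Split a statement into clauses on the conjunction 'and'."""
    return [clause.strip() for clause in statement.split(" and ")]

# A compound statement splits cleanly into two atomic clauses:
print(split_clauses("each Vehicle has one VIN and each VIN is of one Vehicle"))
# ['each Vehicle has one VIN', 'each VIN is of one Vehicle']

# But a noun containing 'and' is torn apart, which is why such a
# language must forbid 'and' inside a term like 'Command and Control':
print(split_clauses("each Command and Control system has one Operator"))
# ['each Command', 'Control system has one Operator']  -- wrong!
```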
So there's a set of 30 diagrams in Object-Role Modeling, in a tool called NORMA, which is a Visual Studio plugin; it's open source. And from this model, which defines the structure of messaging and the structure of an information base about which you can send messages, those diagrams were generated into 500 pages of high-quality PDF text, written, of course, according to the standards of the ICD. So the text gets literally copy-pasted directly into the overall ICD, which contains the front matter and back matter, and it forms the body of the interface control document, which governs all of the telecommand and all of the... I'm trying to think of the title of the document right now, sorry. No worries. Essentially, all of the up and down control messages must be defined according to this telecommand process, and the overarching document for that is machine-generated. So we don't need to use English as our primary input form, but it's our way into thinking about things. When I start to talk to you about, hey, let's build a gadget to do this, I'm going to use English to do it, because I can't point to any model that we already agree on. As we talk more, we'll agree on what the terms mean, what the related items are, and what the facts of the matter are, if you like. And out of that, we come to a shared understanding which enables us to go away and build software together. Well, all I'm really saying is: let's turn that communication into a model in a suitable modeling language, and then we can generate most of the code. We don't need to go away and have misinterpretations as to what the language actually meant, because we're using the same generated code. And it's formally verifiable; this actually goes into formal verification in the mathematical sense. Tools like TLA+ are doing amazing things with verifying algorithms and temporal logic in parallel systems. And even earlier tools like Alloy... I mean, Alloy is basically the same thing as a fact-based model. CQL is similar in power, but it's expressed in natural language rather than mathematical language. These systems allow you to find contradictions where your model is not self-consistent, which is marvelous. So there's a continuity here between the very soft, fluffy initial discussions, the words we use, and the really hard mathematical analysis of the model to find errors in algorithms and errors in structures.
Speaker 2
Do you see that as being more useful for generation or verification? Because I know there are usually two approaches... well, there are a lot of different approaches, but listening to you talk just now, you mentioned two major things, one of which was generation of code. One of the trite internet phrases is that the best way we've found of unambiguously specifying the behavior a computer should follow is called code: we're trying to specify, unambiguously, what the possible behavior of the system should be, and many people see code as the most unambiguous way of doing that. But when you have the ability to specify models using these tools, whether it's TLA+ or fact-based modeling or anything like that, do you see the primary usefulness of these systems being generation of that code? So rather than having a human trying to encode the contextual logic they understand into the working systems they're building, focus on generating it. Or the opposite side: still allow humans to specify the system using code in the way they'd like, but be able to verify, at the end of the day, that the different trade-offs they've made in building the system, whether they're focusing more on throughput or latency or whatever trade-offs they're considering when writing code, trade-offs that might be harder to encode into a model-based system, still leave the system compliant with the initial model, or with the working understanding of the people designing it.
Speaker 1
Yeah, sure. So firstly, TLA+ and Alloy and tools like that focus on verification. I don't know of any work to generate code from them, and that's a problem. We need to integrate the tools so that the same language does all the things, and that is what I'm pushing towards with the fact-based modeling approach. In fact, one of the things I'd like to do with CQL is to be able to encode the things you can say in TLA as an integral part of a CQL model, so you can generate code from it, but you can also run a verifier over it. Now, that said, verification is only possible over problems of limited scope, because of combinatoric expansion: a model expresses a world of possibilities, and those worlds get big really quickly. We don't have infinite compute time to run verification, and we don't have quantum computers that can explore all possible paths in parallel. So verification is typically applied over a limited subset of an overall system. Okay. That said, people jump to... and I don't like the way people use the word "code." I actually like code, but when we say we need to teach the kids to code, we're talking about introducing 10-year-olds to the idea of algorithms, of step-by-step things, of achieving a result by thinking about how a thing is done step by step. You know what? Most of the world is not algorithmic. It's actually structural. And if you can understand the structure of the world, the algorithms are obvious. So Alloy and CQL are focused primarily on the structure of the world, as in: what is the limit of the possible scenarios this software will ever encounter? If we can describe, as a static state, every possible scenario, then all the transient stuff is just changing from one possible scenario to the next possible scenario. That is to say, it's a set of transactions. When it comes to verification, what I prefer to do is to focus first of all on: here is the static description of all possible worlds. In other words, here is a description of any situation we'll ever have to deal with. And then, out of that: here are the atomic steps, the transactions, by which our knowledge of that world changes. Then you start looking at things like temporal analyzers. You can say: we've got some rules in this world; is any sequence of transactions sufficiently unchecked that, if you perform it, you reach a point where you violate one of the rules you laid down at the start? And that's really all TLA is doing. It's asking: can you reach a dead-end state? That's the liveness checking in TLA. Or can you reach a contradictory state, where one of the invariants of the world is violated? And that's a wonderful thing to be able to do. It's very powerful, particularly when you've got distributed algorithms, because you've got transactions stepping past each other with partial work. But if you actually have a transactional system, then you never have partial work. You see what I'm saying? You can never see a part-complete transaction. And this is why Microsoft blew a few billion dollars a couple of years back trying to produce software transactional memory. Essentially, when I compile a fact-based model to an in-memory database, what I call a constellation, that actually supports transactional programming at the memory level, at the RAM level, on individual facts. So it's like an OO database that sits in memory, and everything is guaranteed self-consistent.
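Here is a toy Python sketch of the kind of checking described: enumerate every reachable state and flag invariant violations and dead ends. This is only an illustration of the idea, not TLA+ itself, and the two-account transfer model is hypothetical:

```python
# A toy state-space explorer: breadth-first search over reachable
# states, reporting invariant violations and deadlocks (states with no
# outgoing transitions). A sketch of the idea described above, not TLA+.

from collections import deque

def explore(initial, transitions, invariant):
    """BFS over reachable states; report violations and dead ends."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            print("invariant violated in state:", state)
        successors = transitions(state)
        if not successors:
            print("deadlock (no further transitions):", state)
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

# Hypothetical model: two accounts, transfers of 10 from a to b.
def transitions(state):
    a, b = state
    moves = []
    if a > 0:                           # insufficient guard: checks a > 0,
        moves.append((a - 10, b + 10))  # but moves a full 10
    return moves

# A sequence of individually "checked" transactions still reaches a
# state where the invariant (no negative balances) is violated.
explore((25, 0), transitions, invariant=lambda s: s[0] >= 0)
```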
It doesn't do transaction locking, so multi-threaded use is still outside the bounds of what it does. But it guarantees that if I add something to this thing over here, then it automatically appears in that collection over there, and you can never see a state where one of those things is true and not the other, because they happen together. It's built into the model.
Speaker 2
Yeah, it reduces the scope. Instead of everyone kind of building their own version of something that aims to achieve that, it's provided at the hardware or the operating system or the runtime level, correct?
Speaker 1
Right. So you've really got semantic memory. If I've got a person in memory and a list of cars in memory, and I say this person owns this car, and now this person buys a new car, the owner of that car changes at the same time as the list of cars that person owns changes. Those are not separable things. And the result is that your need for the kind of parallel, overlapping analysis that TLA does is much less. Now, actually making those transactions atomic... software transactional memory in that sense is still a difficult problem, because what do you do when you get deadlocks? The software can't go forwards, it can't go back, and you may have no way of recovering. So the implementations are still difficult. But the analysis problem becomes enormously simpler if you don't think about things in terms of "I changed this variable to that, I changed that variable to this," which is what we're teaching kids when we're teaching them to code. We're saying: do this step, then this step, then this step, and the end result will be this behavior. Well, how do you know it's going to get that behavior? Particularly when you get to parallel algorithms, you just can't use intuition anymore.
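A minimal Python sketch of the person/car example: ownership is one fact, and both views of it (the car's owner and the person's list of cars) change in a single operation, so no in-between state is ever visible. The classes here are hypothetical, not the actual constellation runtime:

```python
# Hypothetical sketch of "semantic memory": one fact, two views,
# always updated together. Not the actual constellation implementation.

class Person:
    def __init__(self, name):
        self.name = name
        self.cars = set()     # one view of the ownership facts

class Car:
    def __init__(self, vin):
        self.vin = vin
        self.owner = None     # the other view of the same fact

def buy(person, car):
    """Change ownership as one fact-level update: both views at once."""
    if car.owner is not None:
        car.owner.cars.discard(car)   # retract the old fact everywhere
    car.owner = person
    person.cars.add(car)

alice, bob = Person("Alice"), Person("Bob")
car = Car("1HGCM82633A004352")
buy(alice, car)
buy(bob, car)   # Alice's list and car.owner change in the same step
print(car.owner.name, [c.vin for c in bob.cars], [c.vin for c in alice.cars])
# Bob ['1HGCM82633A004352'] []
```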
Speaker 2
Do you see that as being useful for sequential learning? For example, teaching kids to program, at least to begin with, is partly about teaching them the domain and the tools they will use: syntax, how to think about these items, and things like that. Do you see that as a useful step that's missing the follow-through? You know what I mean? Yes, we start with more procedural code and thinking about that, but we're missing the follow-through of finishing the job. Because I definitely agree with you. When I think of problem solving, there are usually two things I think of. I think of my data and my data models, so usually thinking with types, which matches the schemas and limitations of models: what data can I have, what is valid and what isn't, and how do I handle that? And then state machines: how do I have transitions of data, or transitions of state? Usually when I'm solving problems, I'm thinking in one or both of those two models. Am I thinking about my data, where it's going and how it's flowing? Or am I thinking about state machines, how things are transacting and how things can progress, either in parallel or in a linear path, through these combinations of state? Is that just missing the follow-through? Okay, we've taught you how to crawl and then walk, but we never really taught you how to run; we just said crawling and walking is enough, go be as fast as you can figure out how to be. Or do you think it's a fundamental misapproach to how we're teaching the concepts and fundamentals to students or children? Should we be teaching them about schemas and modeling instead? When you say we're mis-teaching students, is that a lack of follow-through, or a lack in what we consider the fundamental building blocks?
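As a small illustration of the two habits of thought just described, here is a sketch in Python combining a typed data model (what data is valid at all) with an explicit state machine (which transitions are allowed). The order-processing domain is made up, purely for illustration:

```python
# A sketch of the two models of problem solving described above:
# a schema (types) plus a state machine (legal transitions).
# The order-processing example is hypothetical.

from dataclasses import dataclass
from enum import Enum

class OrderState(Enum):
    CREATED = "created"
    PAID = "paid"
    SHIPPED = "shipped"
    CANCELLED = "cancelled"

# The schema side: only this shape of data can exist.
@dataclass
class Order:
    order_id: str
    state: OrderState

# The state-machine side: only these transitions can happen.
TRANSITIONS = {
    OrderState.CREATED:   {OrderState.PAID, OrderState.CANCELLED},
    OrderState.PAID:      {OrderState.SHIPPED, OrderState.CANCELLED},
    OrderState.SHIPPED:   set(),
    OrderState.CANCELLED: set(),
}

def advance(order: Order, new_state: OrderState) -> None:
    """Apply a transition only if the state machine allows it."""
    if new_state not in TRANSITIONS[order.state]:
        raise ValueError(f"illegal transition {order.state} -> {new_state}")
    order.state = new_state
```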
Speaker 1
Well, yeah. Nicholas once humorously said that anyone taught first in BASIC can never be taught to program. And there's a bit of truth in that: when your first approach to making a computer do something is to start thinking about procedures, you're already off on the wrong foot. But it's the only foot we know where to start on, because the problem is actually deeper than that; even the adults trying to teach this stuff typically don't have the right perspectives to start with. And the problem goes back into human language. We're very good at talking about concrete situations. I can say: slow down, I can't keep up; I've sprained my ankle; I can't keep up with the group; please slow down. That's a concrete situation I'm talking about. I can say: don't touch the stove; the stove is hot; it'll burn you. That's a concrete situation. Not an actual situation, you haven't actually touched the stove yet, but a concrete one. What we're not very good at is communicating non-concrete, non-actual situations. Human language has never had a need to do that, and it hasn't evolved good structures for talking about non-concrete, non-actual, but possible situations. That is to say...
Speaker 2
Like, don't harm yourself, or something?
Speaker 1
The idea of something being hot, and the idea that heat can cause harm, that a burn hurts and will stop you from doing the other things you want to do: those are general ideas, not concrete situations. And so human language is very poor at saying "for some car and for some person, it's possible that that person owns that car." Now, that's a very mathematically precise statement, but my God, how ugly is that sentence? Right? And it's ugly because it's not the kind of thing we need to say in normal life. It's not an evolved feature of language to be able to talk in those sorts of general terms. And yet that's precisely what we're doing when we're designing software. That is all we're doing when we're designing software. We're saying: what are the possible situations? How are we going to name those possible situations? And when our software later encounters those situations, how is it going to respond? The response is where we come to code; that's where we come to process. But you can't think about code until you first understand the realm of possibility, and that is the realm of modeling. Until you have a shared model... Now, we think the idea of a person owning a car is a natural idea, and I don't need to describe it to you, so we gloss over it, because our day-to-day life doesn't call for the need to communicate things like that. Conceptual-framework ideas we pick up incidentally as children when we're learning a language, but we don't necessarily get a precise definition that matches between two people. Every person's set of connotations for anything is different. If I go to 10 people and say, tell me five things about the sea: one person's a fisherman, they'll tell you there are fish in the sea. One person's a surfer, they'll tell you there are waves on the sea. One person's a sailor, they'll tell you the wind blows across the sea. Everyone will tell you the sea is wet. Most of them will tell you it's salty. But the point is, you're not going to get 50 different things, and you're not going to get 10 identical things. You're going to get things that differ with each person's perspective. And depending on what you've been doing recently, you'll have a different response to that question. If you went surfing yesterday and I asked you about the sea, you'd tell me how the waves were. If you went sailing yesterday, you'd tell me how the wind was blowing over the sea, which direction it was coming from. So context is everything in interpreting language. But how do we create context? When we're designing software, we're designing in an abstract context with non-actual scenarios. They're not concrete: there may be concrete scenarios, but those are the individual things; we go to the abstraction over all the possible concrete scenarios. And they're not actual, because we don't have examples. And this is why things like paper prototyping help a lot, because I can suddenly put myself in the situation of using this piece of software.
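For readers who want the quoted sentence in symbols, here is one possible formalization (my own rendering, using an existential quantifier and a possibility operator over an assumed ownership predicate):

```latex
% One way to write "for some car and for some person, it's possible
% that that person owns that car":
\exists c \in \mathit{Car},\ \exists p \in \mathit{Person} :\ \Diamond\, \mathit{owns}(p, c)
```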
I can see the screen. I say, I'm going to push that button first. I know what I'm going to do, because the software, if it communicates to me, gives me an idea where to start. But when we're designing new software, we've got nowhere to start, and starting with procedures is the wrong place to start. Teaching kids to start thinking procedurally is the wrong place to start. We start thinking structurally.
Speaker 2
What is the structure of the problem, and things like that? Kids implicitly pick these things up... well, sometimes not even implicitly, like the fact that things are hot and that touching hot things will burn them. Some kids learn that actively, and I'm sure I learned that actively at some point personally. But is teaching procedural logic to kids, or even to new students who may not be children, people who are just new to programming, simply the easiest thing we can put in their hands, in the hope that they start picking up these contextual items over time? Is it just one of those things where we don't know how to teach them that the stove is hot, so all we can do is give them hands and let them wander around until they burn themselves, or until they see someone else burn their hands, or see other people avoid the stove? Because I know children are really good at that: once they see things, they contextualize them and follow along, even if they're never the ones who burn their own hand. If they see someone else burn their hands, or see other people avoiding the stove when it's hot, they learn these things indirectly. Is this one of those things where we need to turn around at some point and say, okay, here is the reason you've been taught to avoid these things, or to do certain things? Or, more simply: if we shouldn't be teaching them procedural logic, what should we be teaching them as fundamental concepts?
Speaker 1
Well, firstly, the people teaching them are trying to give them an idea of what a computer is capable of doing. So the first introduction to a computer is: well, what do computers do? They process things. In other words, they execute procedures. And if you're going to get a computer to do something, you've got to know what it's capable of doing first. So we give them exposure to procedures, because that gives them some insight into what the computer is capable of doing and how it goes about doing things. But we're not teaching them about how people want to think about and interact with software. We're teaching them about what the computer does in response, underneath, to create those experiences. We should teach first about the experience of using software. In other words, we should be trying to teach user experience. What does a person want to see? What's in their head when they walk up to the computer for the first time wanting to achieve a certain task? If we say, let's talk about what's in the person's head: how do they imagine the computer is going to deal with solving this problem? What are they going to expect to see? How are they going to know that what they see reflects a path to a solution for getting the job done? This is structural thinking, not procedural thinking. It's: where do you start looking at the thing? But teachers are saying the kids need to learn to code, and what they think that means is teaching computers to do things. So the first thing to do is work out what a computer is capable of doing, and computers do things in terms of process. So they start by teaching process. And that's the wrong approach. Software is not soft, because we don't build it soft. We build it hard. In other words, we build in sets of processes that are very difficult to rearrange without introducing bugs. And most of the last few decades of work in the software world has actually made software harder and harder. I don't mean more difficult; I mean less soft. We're making software less and less soft, because we've got infinite layers. If you look at the way HTML was when it was first born and the way web development works now, the number of things you have to know to build a web application now is just astronomical. And when you build all those things, everything works together in synergy with everything else, and you get software, and it's a miracle, because you've got 47 different layers and all those different technologies involved, and if you change any one of them, the whole thing stops working. And this is what I mean by software not being soft.
Speaker 2
What do you think is the total complexity of the whole system? Because there's a GitHub repo, I forget its name, but it's called something like "what happens when you load a web page," and it goes increasingly deep, all the way down to the physics layer, into how loading that web page works: how the request goes out, how the network layer works, how the response from the server looks, how the parsing of the HTML and the CSS and the JavaScript works. If we were able to formally specify all of those components, the network layer, the physics layer, the communication layer, all seven layers of the OSI model, the browser's rendering model, and how the DOM works in a browser... I admit those are sufficiently complex things, because loading that one page is only lighting up a very small subset of the total capability of all the systems you're traversing. But if you were to specify the model for those, would they necessarily be more reasonable for one human to understand? If I had generated models for everything down to the physics layer, how the electrons move across the fiber optics or the electrical lines, where I traverse from analog to digital, would that make it any easier for one person to understand all of those things? Or is this just a necessary split, now that we've reached a level of complexity which is useful to us because we're only lighting up one small part of each of the domains we're traversing? It's still probably going to require specialization: different people specialized at the network layer or the web page layer or the browser level. Would some way of modeling those layers actually make it easier, do you think? When you have to traverse those nine layers, or thousands of layers, would it make it easier and more consistent to traverse them?
Speaker 1
I'm not even sure how to begin answering that question. It's a big one. Well, the thing is, we've created this mountain of technology on top of bits and bytes. In other words, the underlying data has no type. And we've created languages around these things, and expectations around the algorithms expected to process those languages. Some of the languages are more structurally oriented. HTML originally was a tag-oriented thing. Now it's more strictly nested than it was, but the parsers always parsed it as if it was nested, hierarchical; you didn't actually have to use a closing list item for an opening list item in HTML in the early years, you know, when the internet was in short trousers, as I like to say. HTML started as a byte stream, and in the byte stream you put some tag markers. Those tag markers are instructions to some processing engine, which you're supposed to understand, to do something and to reconstruct some sort of hierarchy. In other words, HTML was not intrinsically hierarchical. It was intrinsically a byte stream, interpreted into a hierarchy, or interpreted into a two-dimensional representation on a screen. If we had a structural approach, we wouldn't have started with bits and bytes; or rather, we would have used those bits and bytes to encode high-level structures. And the structural approach, as opposed to the algorithmic approach where the algorithm sees this tag and does this thing, would have yielded an entirely different technology stack. I'd like to see a world where we don't have file systems anymore; we have object systems. The very idea of a file system is broken at its heart, because it means you need some algorithm to interpret the contents of any file, and the meaning of that series of bytes depends on the software you apply to it. And yeah, it's a great underlying thing, because information starts in bits and bytes. But the structure of the world is not bits and bytes. The structure of the world is things. And we need to begin again by re-encoding our expectations for data to reflect the structure of things. When we have structures, then we can have software that says: when you see this kind of structure, you can do that kind of manipulation on it. And now software is soft, because it doesn't think in terms of processing a series of bytes; it thinks in terms of matching and transforming a structure. That's an entirely different approach to software, and it would actually produce software that is really soft, in the sense of being adaptable to anything that matches the structure. There's been some wonderful technology down this path, but most of it has stayed in the realm of academia. Languages like TXL, which do this for languages described by a formal grammar, allow you to transform from an input source to an output source and guarantee that at every stage in the transformation you have a tree structure that can be re-emitted in correct grammatical form according to the rules of the grammar you're transforming within. Fascinating technology; difficult to use, because I want to transform from one grammar to another grammar, and to do that I've got to unify the two grammars. But my point is that these tools are tools for structural thinking, not so much for procedural thinking. And I'd like to see some thought given to going right back to the beginning of data.
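Here is a toy Python sketch of structure-driven software in the spirit of what's described: the program dispatches on the shape of a document tree and transforms it, rather than scanning a byte stream for tag markers. The node shapes are made up; this is not TXL or any real tool:

```python
# A toy sketch of "matching and transforming a structure" rather than
# interpreting a byte stream. The document node shapes are hypothetical.

def render(node) -> str:
    """Transform a nested document structure into plain text."""
    match node:
        case {"heading": str(text)}:
            return text.upper() + "\n"
        case {"list": [*items]}:
            return "".join(f" - {render(i)}" for i in items)
        case {"para": str(text)}:
            return text + "\n"
        case str(text):                 # bare text is a structure too
            return text + "\n"
        case _:
            raise ValueError(f"unrecognized structure: {node!r}")

doc = [
    {"heading": "Structures, not byte streams"},
    {"para": "Software that matches shapes adapts to any data"},
    {"list": ["that", "fits", "the shape"]},
]
print("".join(render(n) for n in doc))
```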
Speaker 2
Yeah. I was going to say, I'm reflecting on this, because I was recently thinking about how these systems are non-static and certainly need to evolve over time. One of the things I've seen is that there's usually pushback, whether because people don't know how to approach this or because they have a need for iteration speed, if you will. You look at systems that were designed to be strongly typed or strongly hierarchical, and there's usually some kind of pushback eventually if these systems live long enough. The one that comes to mind right now is Bluetooth, where there's this idea of characteristics: essentially endpoints that have a schema associated with them. The Bluetooth spec spends a lot of time talking about how you define a heart rate monitor or a temperature sensor. But then you have the backdoor that everyone actually uses. When everyone ships an actual Bluetooth device, they use an opaque binary endpoint, which is just a byte sink and source. Their embedded device sends some opaque byte stream that only their proprietary application knows how to parse, and their proprietary application just sends it. They either use something like Nordic's serial port endpoint, where they're just using a virtual serial port interface, or they use an opaque set of proprietary endpoints, which makes it difficult to interwork with. I've definitely dealt with this in IoT, where everyone loves the idea of a schema; that's how you get portability between devices, because everyone has a schema and you can have interoperability. One company I worked at was working very hard to bolt that on afterwards, providing a schema and normalization you could use so that you could have interconnection of devices: so you're not thinking about talking to company X's temperature sensor or company Y's or company Z's, but rather abstracting over that. But I've always seen pushback, because everyone says: well, I want that, but I also want my special thing. Is that just one of those pendulums swinging back and forth in the industry, where everyone goes from very weakly typed languages with no schemas on their data, to much more strongly typed languages and strongly schema'd data? Is this one of those industry tick-tock, back-and-forth things, or is it something you think really needs to be addressed in some way?
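A Python sketch of the contrast being described: a standardized GATT characteristic has a published schema anyone can parse (shown here in simplified form for the Heart Rate Measurement characteristic), while the vendor backdoor is an opaque byte stream only the vendor's own app understands. The UUIDs and framing below are illustrative, not a complete implementation:

```python
# The schema'd path vs. the opaque backdoor, sketched. UUIDs quoted as
# commonly published; the parsing here is deliberately simplified.

HEART_RATE_MEASUREMENT = "00002a37-0000-1000-8000-00805f9b34fb"  # standard
NORDIC_UART_TX = "6e400003-b5a3-f393-e0a9-e50e24dcca9e"          # opaque stream

def parse_heart_rate(payload: bytes) -> int:
    """Simplified parse of the standard Heart Rate Measurement value:
    flags bit 0 selects uint8 vs uint16 little-endian heart rate."""
    flags = payload[0]
    if flags & 0x01:
        return int.from_bytes(payload[1:3], "little")
    return payload[1]

def parse_vendor_stream(payload: bytes):
    # No public schema: without the vendor's proprietary app, this is
    # just bytes. Interoperability ends here.
    raise NotImplementedError("proprietary framing, undocumented")

print(parse_heart_rate(bytes([0x00, 72])))          # 72 bpm, schema'd
print(parse_heart_rate(bytes([0x01, 0x2c, 0x01])))  # 300 bpm, uint16 form
```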
Speaker 1
Well, this is absolutely right. And the same thing happens with USB, right? I've got a NanoVNA here. If I plug it into my USB socket, it comes up as a serial device, and I've now got to know the serial protocol to talk to it, to command it to do things. I can't ask it: what are you? What commands do you have? Tell me the structure of how you think about the world. What things do you have available to you, and what control settings do they have? What are the parameters on those things? And by the way, if something's connected on one of your ports, what is that? Can you get it to answer the same questions? So what we need, and this is what I was talking about with radio blocks, is a thing where I can actually plug these blocks together. Or the same thing with software-defined radio: when you have large signal processing networks of computers and hardware devices all plugged together, what I want to be able to do is say: hello, who's out there? And the first guy out there answers me, and I can say: okay, what are your ports? Now ask the same question on your ports. Tell me who's out there on the other side of that second port, and I want to discover what capabilities they've got, next. All the way through to an antenna system: I want to know that I've got an alt-azimuth rotator on a three-meter dish on the other side of a chain of signal processing that's five nodes away. So this kind of network discovery, enumeration, capability discovery, in other words, what are all the configurable parameters? Now let's configure them a certain way. Okay: I want you to point at Sirius, and I want you to make a certain capture on a certain frequency band, and I want to analyze it with an FFT. And I don't want to do that myself; I want you to do it, because you've got the hardware to do a good FFT. In other words, all the signal-processing-type stuff, these networks, should be discoverable. And the software I'm using to do that shouldn't even need to know anything about radio astronomy.
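A toy Python sketch of the recursive discovery just described: ask a node who it is and what its ports are, then repeat the question through each port. The Node class and capability strings are entirely hypothetical, purely to show the shape of such a protocol:

```python
# A toy sketch of recursive "hello, who's out there?" discovery over a
# network of signal-processing nodes. Node and its capabilities are
# hypothetical; assumes an acyclic chain, so no cycle detection.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capabilities: list[str]
    ports: dict[str, "Node"] = field(default_factory=dict)

def discover(node: Node, depth: int = 0) -> None:
    """Walk the network, printing each node and its capabilities."""
    print("  " * depth + f"{node.name}: {', '.join(node.capabilities)}")
    for port, neighbour in node.ports.items():
        print("  " * depth + f"  port {port} ->")
        discover(neighbour, depth + 2)

# Five-node chain ending at the dish, as in the example above.
dish = Node("3m dish", ["alt-azimuth rotator", "point(target)"])
sdr = Node("SDR front end", ["capture(freq_band)"], {"rf": dish})
dsp = Node("DSP node", ["fft(samples)"], {"in": sdr})
discover(Node("workstation", ["orchestrate"], {"usb0": dsp}))
```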