On this week’s show we are joined by Equal Expert’s Lewis Crawford to talk about his experiments in building autonomous teams of AI agents.
AI-generated transcript follows…
Matt: [00:00:00] Hello and welcome to episode 281 of WB40, the weekly podcast with Matt Ballantine, Chris Weston and Lewis Crawford.
Chris: Well hello everybody, welcome back. We’re here for another episode as we helter-skelter towards Christmas. We’re nearly there, aren’t we Matt? But here we are, we’re going to finish off the year, I think, tonight with this episode. After last week’s, just you and me, Matt, we did an [00:01:00] Ask WB40. That was good fun last week.
But this week we have a guest, do we not?
Matt: We do have a guest and we’ll be talking to him very shortly. It’s very exciting. 281. Just, you know, passing reference: the first of the two bus routes that go past my house. And we’ve got another one in the new year as well. Very exciting. I, uh, yeah, these are the sorts of things that increasingly intrigue me as I get older, which is basically a sign that I’m not long for this world, quite frankly. Have you been having an enjoyable week in this run-up to the festive festivities?
Chris: Very busy. Work, really. I think we talked maybe last week about the fact that this is the last week where you can get anything done, and then everybody goes away for essentially until January.
So yeah, it’s been pretty busy. I can’t say I’ve been anywhere exciting, at least nowhere I can remember. So yeah, not too bad. What about yourself?
Matt: What have we been doing? Wedding anniversary last week. Very exciting. [00:02:00] Managed to get to 15 years.
Chris: Congratulations, Matt.
Matt: Thank you very much. We went for an enormous meal at an Italian restaurant in Richmond. It went on for nine whole courses, and then I didn’t need to eat for the next day. And to be fair, I didn’t really sleep much that night because of the fact I’d eaten nine courses. Quite small ones, but still, you know, cumulatively, a lot of food.
Took the boys to see their first ever 1-1 victory at, uh, Vicarage Road at the weekend. Which, if you understand football, you’ll know what I mean, and if you don’t, it makes no sense at all. But that was very exciting. And then the start of the youngest’s 13th birthday celebration. So, as of Wednesday, we will have two teenagers in the house.
I can barely contain my excitement at that prospect. I took him and some of his friends to an escape room on Sunday. It’s the first time I’ve been to an escape room, and I guess it tries to be something a bit like the Crystal Maze, but it ends up being like, for those of you of a certain vintage, you’ll get this, the last bit of Ted Rogers’ 3-2-1, where quite frankly none of it [00:03:00] made any sense whatsoever, but at the end you were just glad it was over. And there was no Dusty Bin, sadly. So that was the weekend.
Chris: And are they still there? I mean, that would be the reason to take the children to an escape room, wouldn’t it?
Matt: So, sadly, sadly not. It was an interesting place though, because there’s a university in Kingston, and I presume that many of the staff are drama students, because they had to put on very hammy performances when they call into the various rooms over the tannoy system, to give clues or just to tell people to stop bashing that thing because it’s not supposed to be bashed in that way.
We got all of the children out successfully, and then made them eat pizza. It amazes me, though, the propensity of 13-year-olds to talk endlessly about computer games and nothing else when you get a group of them together. Whoever invented Roblox has pretty much systematically programmed an entire generation of people, as far as I can work out. It’s quite terrifying.
There we go. [00:04:00] Anyway. Lewis, welcome to the show. How’s your last week been?
Lewis: Oh, mixed, to be honest. I mean, you’ve been talking about escape rooms there. It was about three weeks ago we went to a virtual reality escape room. So, how about that? It’s literally where everyone has their headsets on and there are various different puzzles, but in virtual reality, basically.
Matt: Wow. Were you in the same physical space to be in the virtual reality there?
Lewis: Yes, yes, we were. There’s a kind of demarcated area, and I guess it’s more augmented reality, because you’re aware of the walls around you and you’re aware of where the people are around you. And yeah, it was very exciting.
I think it was called a bank heist, and we had to escape out of a vault and stop a robbery that was in progress as well.
Matt: Oh, very, very good. How did you find the experience of wearing the headset?
Lewis: Better than I thought it was going to be, I have to say. I mean, I’ve played computer games since ZX Spectrum days and things, [00:05:00] and I’ve been aware that virtual reality headsets exist. But, um, it was a lot better than I was expecting.
Matt: It’s pretty immersive, isn’t it?
Lewis: Yeah. Well, you know, not to give too much away, but they start off by giving you a little training session where you go up in a lift, and the lift doors open and you’re literally on top of a building with a plank in front of you, and you have to walk along the plank. And it is remarkably difficult.
You know exactly what it is, but, you know, I didn’t want to jump.
Matt: No, I can understand that. What’s at the end of the plank? Yeah, absolutely. I think the first time I ever tried one was with the sort of first generation of the Oculus and the HTC headsets, about seven or eight years ago, when they were firmly tethered to a fairly powerful PC.
And one of the first things I did was go into Minecraft. And one of the first things I managed to do, completely unwittingly, in Minecraft was dig a hole underneath myself. And that immediate [00:06:00] vertigo was terrifying. I’m amazed I’ve ever put one on since. But we were talking about it a wee bit last week, and the magic of the experience is really quite something.
Whether that actually translates into anything useful or not, I think, is going to be the interesting thing, and it seems to be another one of those “well, maybe the year after next it will hit the big time and people will find a use for it” things. But we will see; we continue to watch it with interest. Anyway, 2023 has been a year, in the world around us, very much focused on AI, so we thought we’d finish the year with a bit of a conversation about some experiments that you’ve been doing, Lewis, with some AI.
So let’s crack on. [00:07:00]
As I mentioned, this year has very much been a year of artificial intelligence. And there’s been a lot of hype. There’s been a lot of people using generative tools like ChatGPT. There’s been a lot of waffle, I think is the polite way of putting it. There’s been an awful lot of things generated that may or may not have been true, but, you know, who needs truth in 2023, let alone in 2024?
But in conversations over the [00:08:00] last few weeks, I picked up on something that you’ve been doing, Lewis, which felt really interesting in a way that kind of cuts through some of the ridiculous hype there has been this year around, particularly, generative AI: you have been building an experiment, and it is, from the outset, very much an experiment, to get generative stuff to produce code of one sort or another.
But the way that you’ve been doing it is not by just sitting in front of ChatGPT and saying, “please could you write me a BBC BASIC program to play Doom”. I might try that later. Instead you’ve been creating virtual teams. You’ve been setting things up in a way that’s been about having relatively autonomous agents acting in roles to perform tasks.
So not just a single prompt, but something more complicated. Can we start with a bit of [00:09:00] background as to how this came about? What was the original genesis of this idea?
Lewis: So, ironically enough, Minecraft figures in it. When ChatGPT first came on the scene, last year, one of the early papers that was using it was a thing called Voyager, which was, I think, some folks from NVIDIA and some folks from a university in the US. Essentially they were using agents in Minecraft, but using GPT, so originally GPT-3.5 and then GPT-4, to control the players, if you like, in Minecraft. But what was unique is that these agents could have skills associated with them. So they would have like a daily rota of activities that they would have to cycle around, and they’d be able to plan tasks: I want to cut down a tree because that’s on my rota of activities, but I don’t have the skills to cut down a tree.
[00:10:00] So it would use the generative AI, GPT, to essentially grab some code and put that back into its own code base. And again, the way they were doing that was quite innovative as well: using a thing called vector databases, where you can essentially have a text description of an activity and then store it in a vector database.
So rather than it being like a normal database that you would just search by keywords or terms, this is based on similarities. So again, embeddings is the phrase: they’re used to calculate these vectors.
Matt: So the vectors being the distance between different things in the database?
Lewis: The words, essentially. So all sentences and things, like, as the words are hung together, you can calculate the relationships between the words, and then store those relationships in the database. So those are the vectors. So that if you then use completely different words, you can find out what the closest sentence is that would match it.
And this underpins yet another one of the big hypes of this year, which is this whole idea of [00:11:00] RAG. Now, we don’t need to go down that, but RAG is Retrieval-Augmented Generation. It’s this idea that you can do a search in a vector database, get various different results back, and then generate text and context around it, so that it appears to be like a human sentence that’s returned.
But rather than get too embedded in RAG search terminology, back to the Minecraft thing: this idea of having different agents which were able to use GPT to essentially enhance their own code base. So pull down the Python script required to be able to build an axe to cut down a tree, and then store that in the database as a tool for cutting, say.
Now then, another agent also has access to exactly the same database, and it may need to kill a spider. So it’s going to look through its database of all of the skills it’s got, and it could say, well, that’s the closest match, because I don’t have many skills, so I’m going to try and see if I can use this axe to kill the spider.
And [00:12:00] it would either be successful or it would fail. And if it was successful, then it would put that back into the database to say that the skill is now applicable for that activity as well. Intelligent agents or autonomous agents have been a dream for many, many years, but this is the year that I’ve definitely seen that these things are real and can individually empower themselves. But really the main power is when they collectively work together:
for task planning, for execution, and, as a way of building in guardrails towards the end of a process, for checking. So you would have someone who plans, someone who actually does, and then someone who checks the work of those previous stages, to provide guardrails for all kinds of things, but bias particularly, and to ensure that the result actually matches what the original target was.
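As a toy illustration of the skill store Lewis describes, here is a sketch of similarity-based lookup. The bag-of-words “embedding” and the skill texts are stand-ins invented for this example; a real system such as Voyager uses a learned embedding model and a proper vector database, but the retrieval idea is the same:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector. Real systems use
    # learned embedding models, but similarity search works the same way.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SkillLibrary:
    """A minimal vector store: skill descriptions mapped to code snippets."""
    def __init__(self):
        self.skills = []  # (description, embedding, code)

    def add(self, description, code):
        self.skills.append((description, embed(description), code))

    def closest(self, query):
        # Return the description of the most similar stored skill.
        qv = embed(query)
        return max(self.skills, key=lambda s: cosine(qv, s[1]))[0]

lib = SkillLibrary()
lib.add("use an axe to cut down a tree", "def chop(): ...")
lib.add("craft a wooden pickaxe to mine stone", "def mine(): ...")

# A different wording still finds the nearest stored skill.
print(lib.closest("cut a tree down with the axe"))
```

The point is that the agent never needs exact keywords: a new task phrased differently still lands on the nearest existing skill, which it can then try, and record as applicable if it succeeds.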
Matt: Interesting. So essentially these are code things that are learning as they go. They’re able to spot what they might be able to do, and then they can work out whether they can do it or not by testing. And if they [00:13:00] can’t, potentially they can go away and find a code library to fill the gap of the skill that they don’t have.
Lewis: Exactly, yes. So, moving on from the Minecraft example, there have been various frameworks. LangChain kind of came on the scene as a way of being able to have sort of sequences of activity, using a large language model to then query a database, or maybe pull down a website, or all kinds of other tasks.
But then, superseding that, although it does actually use LangChain as well, there’s a particular framework from Microsoft called AutoGen, and that is currently the best framework for these autonomous agents. Basically, you can create an agent and give it a specific persona, if you like: you’re an intelligent agent that can understand French, or it could be anything that you can use large language models for. And you can build a number of these agents, which can then have a group chat facility and interact with each other, not through some API, but through [00:14:00] English (or any language, for that matter) as the actual medium: literally text-based communication between different agents.
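The “agents talking in English” pattern can be sketched without any framework at all. The agent names, personas, and canned replies below are invented for illustration; in a framework like AutoGen the reply function would be an LLM call, but the interaction is still just text passed back and forth:

```python
class Agent:
    """A minimal stand-in for a framework agent: a persona plus a reply rule.
    In a real framework the reply would come from a large language model."""
    def __init__(self, name, persona, reply_fn):
        self.name = name
        self.persona = persona
        self.reply_fn = reply_fn

    def reply(self, message):
        return self.reply_fn(message)

def chat(a, b, opening, turns=3):
    """Alternate messages between two agents; the 'protocol' is just English."""
    transcript = [(a.name, opening)]
    speaker, listener, msg = b, a, opening
    for _ in range(turns):
        msg = speaker.reply(msg)
        transcript.append((speaker.name, msg))
        speaker, listener = listener, speaker
    return transcript

# Hypothetical personas for the sketch.
planner = Agent("Planner", "You break work into tasks.",
                lambda m: "Task list: 1) write code 2) test it")
coder = Agent("Coder", "You write Python.",
              lambda m: "Done: " + m.lower())

for name, text in chat(planner, coder, "Please build the report generator."):
    print(f"{name}: {text}")
```

The transcript that falls out of this loop is exactly what Lewis later describes reading: plain English messages, one per turn, interpretable by a human looking over the agents’ shoulders.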
Matt: So you then had a piece of work you were doing for a client, and you had a particular challenge around generating synthetic data for the client?
Lewis: Yeah. I mean, there’s a little bit more context. The client has had cutbacks, as many organizations have had in recent months, and so the team that I was working with was let go. Not long after that, we also had to have a celebration of all of the work that we’d been doing. So, a little bit tongue in cheek, I built a bunch of agents, but I named them individually as members of the team that had departed.
So we had a concept of a delivery lead, a concept of the BA, a concept of a Python engineer and a concept of a DevOps engineer, and I’d labelled them as the team that had gone. One of the things the team had been working on was synthetic data. So the task I gave all of these [00:15:00] autonomous agents was basically to create a couple of tables, a customers table and a transactions table, and populate them with realistic synthetic data, but also link the tables, so that if you’ve got a customer ID in the transactions table, it had to link to a customer in the customers table.
So it was kind of non-trivial. I mean, it wasn’t just, you know, generating random data sets or anything like that. And to my surprise and slight horror, it worked incredibly well. The planner agent, which I’d labelled as a business analyst, essentially set out each of the tasks required to do it, and also which agent was going to be responsible for it.
So it would say that engineer Joe had to write the Python code using a particular library, Faker in this case, to generate the tables. We then had a persona of a data scientist who was going to analyze the output of the synthetic data and ensure that it fit the business rules that we’d established, which is that, you know, in [00:16:00] the transactions table, they had to marry back up to the customers table and things.
And then, you know, what’s really clever as well is the execution environment. It can actually run Docker containers in the background and populate these Docker containers with all of the libraries necessary to execute the code. And it went ahead and generated two tables; the data scientist read through those different tables and ensured that actual referential integrity was there.
And then essentially sign-off was given that these tables had generated exactly what was required. So, by trying to celebrate the fact that this team had gone, I actually proved that we didn’t need them. Which was not the intent, but…
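A minimal sketch of the task the agents were set, using only the standard library (the real run used a data-generation library): build a customers table, build a transactions table whose customer ID always points at a real customer, and then check that rule, as the “data scientist” persona did. All field names and sample values here are invented for illustration:

```python
import random

random.seed(42)  # reproducible for the example

FIRST = ["Ada", "Grace", "Alan", "Edsger", "Barbara"]
LAST = ["Lovelace", "Hopper", "Turing", "Dijkstra", "Liskov"]

def make_customers(n):
    # One row per customer, with a sequential primary key.
    return [{"customer_id": i,
             "name": f"{random.choice(FIRST)} {random.choice(LAST)}"}
            for i in range(1, n + 1)]

def make_transactions(customers, n):
    # Draw each foreign key from the real customer IDs, which
    # guarantees the link back to the customers table.
    ids = [c["customer_id"] for c in customers]
    return [{"transaction_id": i,
             "customer_id": random.choice(ids),
             "amount": round(random.uniform(1, 500), 2)}
            for i in range(1, n + 1)]

customers = make_customers(5)
transactions = make_transactions(customers, 20)

# The business rule the 'data scientist' agent had to verify:
valid_ids = {c["customer_id"] for c in customers}
assert all(t["customer_id"] in valid_ids for t in transactions)
print("referential integrity holds for", len(transactions), "transactions")
```

Realistic synthetic data needs far more than this (believable distributions, correlations across columns), but the referential-integrity constraint is exactly the non-trivial part Lewis highlights.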
Matt: I have to say, just as an aside, that the line “it was a little bit tongue in cheek” sounds like the perfect start of some sort of sci-fi disaster movie: that’s how it all started. But creating data like that, I’m presuming this is used for testing, or to be able to…?
Lewis: Yeah. I mean, I think [00:17:00] synthetic data would be a podcast on its own, in terms of the number of use cases for it and how incredibly useful synthetic data can be. Because if you think of data sets which have been anonymized, there are many, many cases where it’s actually quite easy to de-anonymize them, either just because they haven’t been thoroughly anonymized, or because you can use extra data sets from outside of that to de-anonymize them.
But synthetic data, by its actual nature, is fully made up. However, there are very complex rules in terms of how the data is produced, so that, statistically, it is almost impossible to distinguish it from a real data set. And this opens up huge amounts of potential for what they call “code low, deploy high” in terms of security.
So you can code in relatively low-security environments using huge amounts of synthetic, highly realistic data, and then trust your deployment pipelines, so that your code goes through to a highly secure environment [00:18:00] which very few people have access to, and that’s the environment where the code will execute against the real data. But you have full confidence that the code is going to function, because you’ve been able to test it with highly accurate synthetic data.
Going back to the team, they did various different sorts of statistical techniques for generating this data, by analyzing the source systems and calculating statistics. But then another approach is to actually use generative techniques for generating an entire row at a time, by basically giving it samples of what a data set could look like and then using generative AI to just continue that pattern down.
And then bring in the statistical analysis of the data sets, to ensure that it actually conforms to the original data set in the first place.
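The statistical approach Lewis describes can be sketched like this: summarize the real data down to statistics, generate synthetic rows from those statistics in the low-security environment, then verify the synthetic set is statistically close. This is a deliberately crude single-column example with invented numbers; real synthetic-data tools model joint distributions across columns, not just a mean and standard deviation:

```python
import random
import statistics

random.seed(0)

# Pretend 'real' transaction amounts that only the secure environment sees.
real = [random.gauss(mu=120, sigma=30) for _ in range(5000)]

# Fit simple summary statistics in the secure environment...
mu, sigma = statistics.mean(real), statistics.stdev(real)

# ...and generate synthetic rows from those statistics elsewhere.
synthetic = [random.gauss(mu, sigma) for _ in range(5000)]

# The check: the synthetic column should be statistically hard to tell apart.
def close(a, b, tol):
    return abs(a - b) <= tol

print(close(statistics.mean(synthetic), mu, tol=3.0))
print(close(statistics.stdev(synthetic), sigma, tol=3.0))
```

The same pattern scales up: whatever statistics (or generative model) you fit on the real data, the validation step replays them against the synthetic set before anyone trusts it for testing.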
Chris: This is all extremely interesting. Obviously this year we’ve talked about AI and generative AI a lot, and I do remember thinking to myself, in the early days of the tools that were coming out, that these are all kind of first-order things, and we’d see more interesting applications once we got to those second-order [00:19:00] processes, where you’re using the AI to then deal with the output of previous AI.
Are we doing this so that we can make it more like our own interaction, so we can understand it more easily? Because the thing about a Gen AI tool is that it hasn’t got any particular skill, but you can ask it to mimic a certain thing. Absolutely, you can say to a prompt, act like a business analyst and go through this set of requirements, or think like an estate agent, how would you market this building, or whatever it might be. But when you do that, what you’re really doing is constraining it down to a certain set of ideas.
Are we actually just saying to these different tools: right, you do this thing, so forget everything else you know and just do that; and you do this thing, so forget everything else you know and just do that? We’re not lifting them up, training them up to a standard. Actually what we’re doing is carving away all the other parts and leaving [00:20:00] what’s left, just so that we can see the interactions.
Lewis: I think it’s the interactions that I’d like to focus on here, because that’s the really crucial thing about large language models: the innate knowledge is the language.
They happen to have consumed the entirety of Wikipedia, but we don’t trust them to give us honest answers, because of the hallucination problem and things. But what they do have is innate knowledge of the language, and that allows you to build interactions using simply language. Previously, if you wanted two systems to talk together, you’d have to have a very watertight contract through an API or some mechanism like that. But I can see, in the not-too-distant future, you’re going to have large language models which are using things like the RAG that I mentioned previously. So its knowledge base is an actual knowledge store; it’s not built into the model itself. So there is a capability of understanding what I’m asking for, converting that into an actual request, whether that’s [00:21:00] SQL, whether it’s Cypher in terms of a graph network, or any other retrieval technique, pulling back real information, and then processing that, again using language, as a result.
So, for instance, many organizations invested heavily in APIs, going back 20 years, so that you could have an enterprise service bus, in terms of how different departments could communicate effectively with each other. Rather than having information spread around like that, I can see a future whereby you simply say: don’t talk to my API, talk to my large language model, and whatever you’re requesting, it will be able to deal with, and then process, and then respond accordingly. Which means that the different kinds of interactions will become almost infinite.
I mean, anything you can ask for, it is able to respond to, and then find the appropriate information and return it back. And this doesn’t need a human in the middle, but the advantage is that a human can actually see what is going on with these interactions. But, you know, the interactions will [00:22:00] be frighteningly fast, so far, far quicker than we could possibly speak or read or anything like that.
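The “talk to my language model, not my API” idea, in miniature. Here a toy keyword matcher stands in for the LLM that would translate a free-text request into SQL; the table, column names, and the request itself are all invented for the example:

```python
import sqlite3

# A tiny in-memory database standing in for the enterprise system.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, "north", 120.0), (2, "south", 80.0), (3, "north", 50.0)])

def handle_request(text):
    """Toy 'language model' front door: map a free-text request to a
    retrieval query, run it, and phrase the answer back in English.
    A real system would have an LLM generate the SQL (or Cypher) itself."""
    text = text.lower()
    if "north" in text and "total" in text:
        (value,) = db.execute(
            "SELECT SUM(total) FROM orders WHERE region = 'north'").fetchone()
        return f"The total for the north region is {value:.2f}."
    return "Sorry, I don't know how to answer that yet."

print(handle_request("What's the total spend in the north region?"))
```

The contract between caller and system is no longer a fixed API schema: it is a natural-language request in, real data retrieved behind the scenes, and a natural-language answer out.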
Chris: Though maybe not as fast as an API designed to do one thing.
Lewis: Uh, yeah, maybe. I mean, they’re getting faster all the time. The APIs run on normal hardware, whereas LLMs are having dedicated hardware built for them. So: chips designed specifically for this use case.
Chris: Yes, indeed they are. Yeah, I’d say I agree. It’s a very interesting set of experiments, and I think it’s all about where language fits into those things, isn’t it? Because, you know, as you say, the LLM doesn’t know anything. It doesn’t have any knowledge. It doesn’t have anything at all other than a statistical likelihood that this particular word will fit in this particular place.
So if we throw enough computing power at it, we are definitely getting some very smart-seeming answers out of it. The [00:23:00] question is where we use it best. Maybe we’re using it to learn about the actual tasks that we’re setting it, you know what I mean? Because we can see the interactions, we can understand the interactions. It gives us a better chance of actually understanding what question we’ve actually asked, rather than the question we think we’ve asked, of a task or a process.
Matt: I’m interested in the way that language between bits of technology works, because there’s an ambiguity in human language which is part of its wonder. You know, humour is often based on the fact that words are ambiguous. There’s inherent wonder in the fact that the way in which we interact isn’t absolutely logically tied down at all points. Lawyers wouldn’t have a job if it were, let’s be honest. But there’s a difficulty there. I was doing a bit of writing this morning about how humans might have some barriers to adopting the use of generative [00:24:00] AI, if we are to think of these things as assistants to us, because humans have barriers to adopting collaboration with other people. And there’s lots of work that’s been done around collaboration in teams. There’s a particular thing from a guy called Morten Hansen, who’s a business professor in the US, originally from Norway, I think.
He talks about there being four systemic barriers to collaboration in organizations, and the one of those four that I find the most interesting is a thing called the transfer barrier. The transfer barrier is where people find it difficult, or often actually impossible, to collaborate with one another because they have different language. And that’s not that one speaks French and one speaks German, although obviously that would be a big inhibitor to people being able to work together.
It’s more that we have different lexicons, we have different vocabularies; different professions have different sets of language that we use. And the example, I’ve probably used this on the show before: [00:25:00] when I went to work in the housing industry a few years ago, I had about three months where I was utterly confused, because I kept hearing the word developer, and I thought people cutting code, and everybody around me was thinking people with bricks.
Because developer in housing means people who build physical things. Very, very confused. I had to rewire my brain to stop subconsciously hearing that. I’m interested, though: if we create agents on the basis of large language models and say you are a, I don’t know, a product owner, and you are a project manager, and you are a developer, might we start to see some of those confusions coming in? Where the different agents, because they’re working within the constraints of their individual professions, will start to have some of those barriers to interacting effectively, because they’ll interpret words in the wrong way?
Lewis: These are very nascent technologies right at this point in time. But with the size of the context windows that are able to be [00:26:00] processed on every iteration, you can throw an awful lot of context into every interaction.
By that I mean: what is it, Claude 2.1 has got a 200,000-token context window, which means you can put an entire PhD thesis into it every single time you ask it a question. So that’s a lot of context to be throwing in every single time. So I do believe that prompt engineering will disappear as quickly as it came about.
But right now, prompt engineering is essentially, well, before it was called prompt engineering it was few-shot examples and things, where you give it a few examples and then it would be able to continue. But it’s not really “few” anymore, because you’re giving it up to 200,000 tokens, every single time, to give it that context, so that it knows it’s a developer that knows Java, as opposed to a developer that looks for building sites and other opportunities.
But another kind of frightening thought is that language models could develop their own language. I mean, English [00:27:00] isn’t exactly the best or the finest communication technique, despite the fact that we’ve got Shakespeare and Griswold and all the rest of that kind of thing. The language models themselves will develop, over time, far more efficient ways of being able to explain exactly what they mean.
And I’m not talking binary streams, but there will be an evolution of language models into a language that we cannot comprehend, because we don’t have 200,000-token context windows or 2-million-token context windows, and we won’t have an ability to understand, you know, 64-bit encoded tokens as they get thrown around.
But the amount of precision that will be in future versions of language models… I mean, I think that’s another interesting philosophical thing to think about: the evolution of language is going to be taken away from us.
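Few-shot prompting under a context budget can be sketched as simple string assembly. The word-count “tokenizer”, the budget, and the example snippets below are all stand-ins for illustration; real systems use the model’s actual tokenizer and vastly larger budgets:

```python
def approx_tokens(text):
    # Crude stand-in for a real tokenizer: count whitespace-separated words.
    return len(text.split())

def build_prompt(instruction, examples, question, budget=50):
    """Pack few-shot examples into the prompt, in order, until the
    (approximate) token budget is exhausted."""
    parts = [instruction]
    used = approx_tokens(instruction) + approx_tokens(question)
    for ex in examples:
        cost = approx_tokens(ex)
        if used + cost > budget:
            break  # no room left for this example
        parts.append(ex)
        used += cost
    parts.append(question)
    return "\n".join(parts)

examples = [
    "Q: caught or court? A: depends whether a ball or a magistrate is involved.",
    "Q: developer? A: in housing, someone who builds; in software, someone who codes.",
]
prompt = build_prompt("Disambiguate the word using context.", examples,
                      "Q: what does 'developer' mean on a building site?")
print(prompt)
```

With a 200,000-token window the budget effectively stops binding, which is Lewis’s point: “few-shot” stops being few, because you can ship an entire corpus of context on every call.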
Matt: And increasingly the vocabulary that will be developed to be more and more precise, I guess, will be interesting within that.
Lewis: German-like words.
Matt: Long… [00:28:00]
Chris: You realize this context problem that you describe has been around for a long time, right?
All the old speech recognition programs would always have a problem if you said “caught”: it wouldn’t know whether you meant caught, as in caught a ball, or court, as in where you go to face the beak about your speeding fine, and all that kind of thing. But the context issue has been attacked very, very well, as Lewis has said.
And it was kind of one of those things that said, you know, how can we make computers more intelligent? The more intelligent they are, the more able they are to understand context. Whereas you spent three months not understanding that a developer could be more than one thing.
Matt: Yeah, no, absolutely.
Chris: This is my failing, I think, this is a, you know…
Matt: No, no, it’s not the computer’s problem. But it was the subconscious. It’s the heuristics that we use to be able to be the most super of supercomputers, which is people. To describe them as bugs? They’re not. They’re heuristic models for the way [00:29:00] in which we’re able to process far greater amounts of information than otherwise we’d be able to do.
And it just takes a bit of reprogramming, and unfortunately it took me quite some time, because my brain’s less plastic than it used to be, and that’s the way it goes when you get older. So, Lewis, if you look at the communications that are going on between these agents at the moment, within this experimental thing that you’ve created, by looking at them, do they make sense?
Lewis: Yeah. I mean, it literally looks like a transcript of a Teams, as in a Microsoft Teams, conversation, or Slack. Essentially what you’re looking at is literally the agents spewing out English, very interpretable as to, you know, what they’re processing, what they’re doing, and then the next agent in line picking that up and continuing with it.
Matt: And in terms of the language that’s used, and this isn’t a sarcastic question, I’m just fascinated by it: are they polite to each other, or is it all very matter-of-fact?
Lewis: Oh, very [00:30:00] polite. And in fact, in certain circumstances, if you’re not using GPT-4 but GPT-3.5, because it’s slightly cheaper to use, it gets into loops where they thank each other continuously.
And you have to, like, actually break the program, because they are literally caught in that loop.
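The politeness loop Lewis mentions is easy to simulate, along with the kind of termination check a framework might apply to break it. This is an illustrative sketch, not any particular framework’s actual termination API; the messages and the courtesy phrases are invented:

```python
def is_courtesy(message):
    # Treat a message as pure courtesy if it is dominated by thanks.
    text = message.lower()
    return any(p in text for p in ("thank you", "thanks", "you're welcome"))

def run_chat(replies, max_courtesy_run=3):
    """Consume agent replies, stopping early if the last few messages are
    nothing but mutual thanks (the GPT-3.5 politeness loop)."""
    transcript, run = [], 0
    for msg in replies:
        transcript.append(msg)
        run = run + 1 if is_courtesy(msg) else 0
        if run >= max_courtesy_run:
            transcript.append("[terminated: politeness loop detected]")
            break
    return transcript

looping = ["Here is the code.", "Thank you!", "No, thank YOU!",
           "Thanks again!", "You're welcome!", "Thank you!"]
print(run_chat(looping)[-1])
```

Real frameworks offer similar hooks, such as a maximum number of consecutive auto-replies or a custom end-of-conversation predicate, precisely because text-based agents have no natural stopping point.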
Matt: Oh, that’s brilliant. I love this idea that we will be safe from the coming supercomputer menace because they’re all being too polite to each other. It’ll be like the road system in Guernsey, where there’s a thing called filter in turn, and you can have people reaching what would in any other place be a roundabout, and they sit there for days waiting for each other to go first.
It’s brilliant. And that sort of interpretability, at the moment, that’s something where you think language will be used, but you could see how it would just devolve away from stuff that we understand.
Lewis: Yeah. I mean, even talking about language models themselves is dated now, you know, as of last week, with the release of Gemini as a multimodal model.
So, LLMs are out, MMMs are in. [00:31:00] It’s not just language anymore: it’s actual visual representations, it’s audio, all kinds of different forms of communication being bundled together into these multimodal models. And to be honest, I have no idea what’s next. I’d like to claim some kind of insight into it, but the pace of change is absolutely astonishing at the moment.
Matt: Yeah, absolutely. And then it’s sort of left to their own devices, because presumably you kind of trigger this, and then they start working away — it starts working away, let’s not anthropomorphize it, it starts working away. And the sort of approach that it takes: is it recognizable as, you know, an iterative, agile-type approach that it takes?
Is it a command-and-control approach that it takes?
Lewis: Again, there are various different constructs that you can put these agents into. So you can simply have two agents, where one is demanding and one is responding, continuously. The more [00:32:00] interesting ones, from my perspective, are literally called group chat, where you have different personas that you set up, and the central message bus, if you like, is accessible to all. They will decide if they need to respond at this point in time, given their own role and the context of what has previously gone, and they will literally either respond or not in those group chat scenarios.
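Lewis doesn’t name a particular framework here, but the group-chat construct he describes — personas on a shared message bus, each deciding whether to respond given its role and the conversation so far — can be sketched in a few lines of plain Python. The agent names and the “should I respond?” rules below are purely illustrative assumptions, not from the episode:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Message:
    sender: str
    text: str


@dataclass
class Agent:
    name: str
    # Predicate over the shared history: does this persona want to speak now?
    wants_to_respond: Callable[[List[Message]], bool]
    # Produce a reply given the shared history.
    respond: Callable[[List[Message]], str]


def group_chat(agents: List[Agent], opening: str, max_turns: int = 6) -> List[Message]:
    """Shared message bus: each turn, the first willing agent speaks."""
    history = [Message("user", opening)]
    for _ in range(max_turns):
        speaker = next((a for a in agents if a.wants_to_respond(history)), None)
        if speaker is None:  # nobody wants to respond: conversation is over
            break
        history.append(Message(speaker.name, speaker.respond(history)))
    return history


# Illustrative personas (hypothetical roles, keyword-free rules for clarity):
analyst = Agent(
    "analyst",
    wants_to_respond=lambda h: h[-1].sender == "user",
    respond=lambda h: f"Clarified requirement: {h[-1].text}",
)
architect = Agent(
    "architect",
    wants_to_respond=lambda h: h[-1].sender == "analyst",
    respond=lambda h: "Proposed a candidate architecture.",
)

chat = group_chat([analyst, architect], "We need a reporting dashboard.")
```

In a real system each lambda would be a call to a language model; the point of the sketch is the control flow — a common bus that every persona can read, with each one opting in or out per turn.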
Matt: That’s amazing. And having done this so far, has any of it left you concerned or worried at all?
Lewis: No, not concerned, because, again, I obviously see not just the one time it works but the 200 times it didn’t. I’m still seeing that at this stage. But I’m really interested in the implications moving above just coding, if you like.
So, as I say, I’ve taken on this idea of business analysts and solution architects and a whole [00:33:00] chain of knowledge workers that can interpret a requirement and can determine that things like NFRs (non-functional requirements) are missing. So it could go back to users and interactively request more information until it believes a requirement is fully known.
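The “ask until the requirement is fully known” step Lewis describes amounts to a completeness check that drives clarifying questions back to the user. A minimal sketch, assuming a requirement is just a dictionary — the field and NFR names here are hypothetical, not from any real tool:

```python
# Fields and NFRs an agent might insist on before calling a requirement complete.
# These lists are illustrative assumptions.
REQUIRED_FIELDS = ["description", "acceptance_criteria"]
REQUIRED_NFRS = ["availability", "latency", "security"]


def missing_items(requirement: dict) -> list:
    """List the gaps an agent would interactively ask the user to fill."""
    gaps = [f for f in REQUIRED_FIELDS if f not in requirement]
    nfrs = requirement.get("nfrs", {})
    gaps += ["nfr:" + n for n in REQUIRED_NFRS if n not in nfrs]
    return gaps


def is_fully_known(requirement: dict) -> bool:
    # The agent stops asking questions once nothing is missing.
    return not missing_items(requirement)


req = {"description": "Reporting dashboard", "nfrs": {"latency": "p99 < 200 ms"}}
gaps = missing_items(req)  # drives the next round of questions to the user
```

In practice the model itself would judge completeness rather than a fixed checklist, but the loop shape is the same: check, ask, incorporate the answer, repeat.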
It can then pass that on to the next item in the chain, which will then break down all of the tasks that are required and propose a potential solution architecture. Where it gets really interesting is if you have encoded your solution architecture in such a way that it is machine readable.
So there are various different YAML tools that can be used to generate solution architecture documents. These multimodal models can actually interpret the images of the solution architecture, but I’m still at the stage before that, which is that we will have a library of solution architectures in some kind of machine-readable format such as YAML. And you’ll be able to take that and modify it and check it back in, and so you can do an as-is and a to-be architecture that is auto-generated based on all the requirements which have been gathered up [00:34:00] front. And so I’m literally — I mean, some of my background is, I guess, sort of sleuthing out to chill, and then distributed computing before that.
So I’m desperately trying to work myself out of a job.
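Lewis doesn’t specify a schema, but a machine-readable as-is/to-be architecture of the kind he describes might look something like the following YAML. Every component name and field here is a hypothetical illustration, not from any particular tool:

```yaml
# Hypothetical machine-readable solution architecture (illustrative only).
architecture:
  name: order-processing
  as_is:
    components:
      - id: web-frontend
        type: web-app
        depends_on: [order-api]
      - id: order-api
        type: rest-service
        depends_on: [orders-db]
      - id: orders-db
        type: relational-database
  to_be:
    components:
      - id: web-frontend
        type: web-app
        depends_on: [order-api]
      - id: order-api
        type: rest-service
        depends_on: [order-queue]
      - id: order-queue          # added to satisfy a new NFR: decouple writes
        type: message-queue
        depends_on: [orders-db]
      - id: orders-db
        type: relational-database
```

Because both states are structured data, an agent (or a plain diff tool) can compare `as_is` against `to_be` and enumerate exactly which components the gathered requirements would add or change.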
Matt: I’m also fascinated by the idea of being able to use it from a management theory kind of perspective: being able to say, well, what if you make that person a bit passive-aggressive, or what if you make that actor somewhat unsure about themselves because they’ve had a bad week at home, and then what happens?
You know, you could do some really interesting things there with kind of attitudinal modeling within it, just to be able to see the impact, as if you were thinking about making changes to team structures and whatever.
Lewis: I hadn’t considered that — from a kind of psychology perspective, almost, isn’t it?
And trying to understand how these things would interact. I’m not sure that the different types of psychologies that people have would be encoded within the model itself to necessarily come out with something really unique. So, by that I mean: the fact that it understands what a requirement is [00:35:00] and that it can produce tasks to provide a solution to that.
I don’t think there’s necessarily enough in there that, because this particular robot’s having a bad day or whatever, it’s going to produce a particularly bad design, or that kind of thing. But having some kind of element of randomness in there — I mean, this leads to the big philosophical question: is it actually an AI in the general sense? Is it really intelligent?
And my firm belief is no, absolutely not, you know, and we’re nowhere near AGI, despite — I think it’s Elon Musk — saying it’s three years away. I still, despite the fact that every week I am constantly surprised, I still don’t believe we’re anywhere near that. I do think where we are, though, is absolutely being able to automate the mundane.
So every aspect of human life that we think of as, you know, mundane and maybe a bit boring, that is going to be wiped away by the ability to interpret, task, plan, and then execute.
Matt: [00:36:00] Fantastic. Thank you. What a way to end this year’s cavalcade of podcast marvels. I was going to try to alliterate and failed miserably. That’s where you need a GPT, [00:37:00] isn’t it? It can come up with alliteration like it’s coming out of its bottom. Lewis, thank you very much for joining us this week.
Have you got an exciting few weeks ahead?
Lewis: I think, as Chris said at the very beginning, this is the last week of actually getting work done, and then next week I think there may be quite a few lunches.
Matt: Yes, we’re into lunch season. How about you, Mr Weston?
Chris: Well, I’ve got to go to that London on Wednesday, which will be interesting, so I’ve got one or two interesting conversations to be had, a bit of a catch-up with some people I’ve not spoken to for a while, including, you know, some people who’ve been on the podcast in the past.
So yeah, that’ll be nice. We’ve got a little Juma festive gathering, you know, in the office on Thursday, which will also be nice. And yes, then a whole bunch of things to close down and finish off and submit and get done before we get to next week, when there will be [00:38:00] things going on, but hopefully it’ll be a little bit quieter.
Matt: Well, actually it’s the Equal Experts London party this Wednesday, but sadly I can’t make it because, happily, it’s my son’s 13th birthday, and priorities. But I will be seeing some people in town on Wednesday. Tuesday — tomorrow evening, although probably in the past by the time you listen to this, if you listen to it when it comes out — is TBD, Paul Armstrong’s mini conference, taking place over in North Greenwich, so we’ll be going over for that.
The last one was fabulous, and it included the author of the book Wasteland, which is probably the best book I’ve read this year, all about the way in which our rubbish gets dealt with — a fascinating, terrifying book. So we’ll have to see who Paul has in store for us. And then next Monday it’s the WB40 Signal group annual Christmas shindig, [00:39:00] which Cy Cornwall has been marvellous in doing all the organization for. So there will be some of us meeting up. If you’re not a member of the Signal group, you are welcome to join: drop us a line on LinkedIn, or we’ve still got an account on that X thing, though we don’t really use it anymore, or if you go to the website you’ll be able to get details on how to get in touch with us, and you can be added to the Signal group. And then about one in five people survive the initial fire hydrant and stay on, I think, is about the batting average there. So that’s good fun.
And then it’s into the Christmas festivities season. So, various things. And then we’re busy planning what is happening with the show in 2024. I’ve got the entire month of January already sorted, which, incredibly, doesn’t happen very often, does it? I know, it’s great. So we’ve got some fantastic guests, and more to come.
So, with that, we wish you all a very happy end-of-year break, depending on how you choose to celebrate it, and we will be back on the 8th of January — into our eighth year, my goodness, of this [00:40:00] mad, crazy thing that is WB40. So until then, have a good break, and we’ll see you in 2024, which is technically the future.