

WB-40
Matt Ballantine & Chris Weston
Conversations on how technology is changing how we work, with guests most weeks helping us to navigate.
Episodes

Feb 24, 2024 • 40min
(288.1) Playful bonus episode
A special bonus episode for you as Lisa and Nick Drage host a conversation with Matt who has a new set of playing cards to flog. We talk about his Business Meerkat deck, and more broadly about the power of playfulness in work.
Order your own set of the Business Meerkat cards at https://theplaycards.myshopify.com/collections/business-meerkat
You can also get the Deckible version at https://www.deckible.com/card-decks/3Ny-business-meerkat-tarot-inspired-cards-to-spark-conversations-in-organisations-matt-ballantine
Normal service will be resumed on Monday.

Feb 19, 2024 • 50min
(288) Crowdfunding
Matt Webb, creator of the Poem/1 clock, discusses his journey from idea to Kickstarter campaign, sharing insights on hardware development challenges, community feedback, and financial considerations. The podcast also explores app development, coding projects, AI assistance in poetry creation, and upcoming exciting events in the guest's schedule.

Feb 5, 2024 • 41min
(287) Strategic Storytelling
On this week’s show Matt and Michelle are joined by Natalia Talkowska to talk about the importance of “the hook”, telling stories, creativity and other tales.
You can find Natalia’s business at natalkadesign.com
Matt mentioned his Business Meerkat cards (pre-order them here) and also the TBD Conference he is speaking at at the end of February (tickets available here).
He also talked about Marcus Brown’s Presentation Canvas which you can download and explore here.

Jan 29, 2024 • 39min
(286) Heat Exchange
On this week’s show, Chris and Julia interview Mark Bjornsgaard about his fascinating renewable energy and data hosting business Deep Green.

Jan 27, 2024 • 29min
(285) A new era
From next week, there is a new sound to WB-40. We are delighted to share that Julia Bellis, Lisa Riemers and Michelle Minnikin will join the team to share hosting duties with Chris and Matt.
On this bonus show we share the thinking behind the changes, and the wonderful Nick Drage asks all of the hosts some probing questions about who they are and what they think. Particularly about crisps.

Jan 22, 2024 • 40min
(284) Good Girl Deprogramming
On this week’s show we are joined by psychologist and author Michelle Minnikin to talk about her new book, Good Girl Deprogramming.
You can find out more about Michelle’s work at https://michelleminnikin.com/

Jan 17, 2024 • 46min
(283) Accessibility
On this week’s show we speak with Lisa Riemers about the challenges of digital accessibility, and what to do about them.
You can find Lisa’s collection of useful links here: https://lisariemers.com/index.php/2023/05/12/accessibility-resources/
and you can book to attend her talk here:
Register on Eventbrite
We spoke with AbilityNet on Episode 208

Jan 9, 2024 • 43min
(282) Psych Safety in Action
This week’s guest is Equal Experts Psychological Safety Lead, Julia Bellis.
We talk about the misconceptions associated with psych safety, building safety in client-facing teams, and the challenges of hybrid and global working, amongst many other things.
A couple of books were mentioned: The Culture Map by Erin Meyer and The Fearless Organization by Amy (not Adrian) Edmondson.

Dec 12, 2023 • 39min
(281) Intelligent Agents
On this week’s show we are joined by Equal Experts’ Lewis Crawford to talk about his experiments in building autonomous teams of AI agents.
AI Generated transcript follows…
Matt: [00:00:00] Hello and welcome to episode 281 of WB40, the weekly podcast with Matt Ballantine, Chris Weston and Lewis Crawford.
Chris: Well hello everybody, welcome back. We’re here for another episode as we helter-skelter towards Christmas. We’re nearly there, aren’t we Matt? But here we are, we’re going to finish off the year, I think, tonight with this episode, after last week’s just-you-and-me, Matt. We did an [00:01:00] Ask WB40. Well, that was good fun last week.
But this week we have a guest, do we not?
Matt: We do have a guest and we’ll be talking to him very shortly. It’s very exciting. 281, just, you know, a passing reference: the first of the two bus routes that go past my house. And we’ve got another one in the new year as well. Very exciting. Yeah, these are the sorts of things that increasingly intrigue me as I get older, which is basically a sign that I’m not long for this world, quite frankly. Have you been having an enjoyable week in this run up to the festive festivities?
Chris: Very busy. , Work really. It’s just one of the, I think, I think we talked maybe last week about the fact that this is the last week where you can get anything done. , and then everybody goes away for essentially until January.
, So yeah, it’s been pretty busy. , I can’t say I’ve been anywhere exciting, at least nowhere I can remember. , So yeah, not too bad. What about yourself?
Matt: What have we been doing? Wedding anniversary last week. Very exciting. , [00:02:00] managed to get to 15 years, which is, Congratulations, Matt. Thank you very much.
We went for an enormous meal at an Italian restaurant in Richmond. We went on for nine whole courses, and then I didn’t need to eat for the next day. , and to be fair, I didn’t really sleep much that night because of the fact I’d eaten nine courses. Quite small, but still, you know, cumulatively, a lot of food.
, took the boys to see their first ever 1 1 victory at, uh, Vicarage Road at the weekend. Which, if you understand football, you’ll know what I mean. And if you don’t, it makes no sense at all, but, , that was very exciting. And then the start of the youngest’s 13th birthday celebration. So, as of Wednesday, we will have two teenagers in the house.
I can barely contain my excitement at that prospect. I took some of his friends and him to an escape room on Sunday. And it’s the first time I’ve been to an escape room. And I guess, it tries to be something a bit like the Crystal Maze, but it ends up being like, for those of you of a certain vintage, you’ll get this.
The last bit of Ted Rogers 3, 2, 1, where quite frankly none of it [00:03:00] made any sense whatsoever, but at the end you were just glad it was over, and , there was no dusty bin, sadly. So that was , the weekend.
Chris: And are they still there? Is that, is that the, I mean that would be the reason to take the children to an escape room, wouldn’t it?
Matt: So, sadly, sadly not. It was an interesting place though, because there’s a university in Kingston, and I presume that many of the staff are drama students, because they had to put on very hammy performances when they call into the various rooms over the tannoy system to give clues, or just to tell people to stop bashing that thing because it’s not supposed to be bashed in that way.
We got all of the children out successfully, and then fed them pizza. It amazes me, though, the propensity for 13-year-olds to talk endlessly about computer games and nothing else when you get a group of them together. Whoever invented Roblox has pretty much systematically programmed an entire generation of people, as far as I can work out. It’s quite terrifying.
There we go. [00:04:00] Anyway lewis, , welcome to the show. , how’s your last week been?
Lewis: Oh, mixed, to be honest. I mean, you’ve been talking about escape rooms there. It was about three weeks ago we went to a virtual reality escape room. So, how about that? Wow. It’s literally where everyone has their headsets on and various different puzzles.
But in virtual reality
Matt: basically. Were you in the same augmented reality? Yeah, were you in the same physical space to be in the virtual reality
Lewis: there? Yes, yes, we were. I mean, there’s a kind of demarcated area, and I guess it’s more augmented reality because you’re aware of the walls around you and you’re aware of where the people are around you, and yeah, it was very exciting.
I think it was called a bank heist, and we had to escape out of a vault and stop a robbery that was also in progress as well.
Matt: Oh, very very good How did you find the experience of wearing?
Lewis: Better than I thought it was going to be. I have to say, I mean, I’ve played computer games since ZX Spectrum days and things, [00:05:00] and I’ve been aware that virtual reality headsets exist, .
But, um, it was a lot better than I was expecting, I
Matt: It’s pretty immersive, isn’t it? It’s, um,
Lewis: yeah. Well, it’s, it’s, you know, not to give too much away, but, um, they start off by giving you a little training session where you go up in a lift and the lift door’s open and you’re literally on top of a building with a plank in front of you and you have to walk along the plank and it is remarkably difficult.
You know exactly what it is, but, you know, it was, it was, yeah, I didn’t want to jump.
Matt: No, I can understand that. What’s the end of the plank? Yeah, absolutely. I think the first time I ever tried one was with the sort of first generation of the Oculus and the HTC headsets about seven or eight years ago when they were like firmly tethered to a fairly powerful PC.
And one of the first things I did was go into Minecraft. And one of the first things I managed to do completely unwittingly in Minecraft was dig a hole underneath myself. And that, just immediate [00:06:00] vertigo, was terrifying. I’m amazed I’ve ever put one on ever since. But, it is, there’s, there’s a, yeah, just the experience of it, we were talking about it a wee bit last week, and the, the, the magic of the experience is really quite something.
Whether that actually translates into anything useful or not, I think, is going to be the interesting thing, and it seems to be another one of those “well, maybe the year after next it will hit the big time and people will find a use for it” technologies. But we will see; we continue to watch it with interest. Anyway, 2023 has been a year in which the world has been very much focused on AI, so we thought we’d finish the year with a bit of a conversation about some experiments that you’ve been doing, Lewis, with some AI.
So let’s crack on.[00:07:00]
As I mentioned, this year has very much been a year of artificial intelligence. And there’s been a lot of hype. There’s been a lot of people using generative tools like ChatGPT. There’s been a lot of Whiffle, I think is the polite way of putting it. There’s been an awful lot of things generated that may or may not have been true, but you know, who needs truth in 2023, let alone in 2024.
But in conversations over the [00:08:00] last few weeks, I picked up on something that you’ve been doing, Lewis, which felt really interesting in a way that kind of cuts through some of the ridiculous hype that there has been this year, particularly around generative AI, in that you have been building an experiment (and it is, from the outset, very much an experiment) to get generative tools to produce code of one sort or another.
But the way that you’ve been doing it is not by just sitting in front of ChatGPT and saying, please could you write me a BBC BASIC program to play Doom. I might try that later. Instead you’ve been creating virtual teams. You’ve been setting things up in a way that’s been about having relatively autonomous agents acting in roles to perform tasks.
So not just a single prompt, but something more complicated. Can we start with a bit of [00:09:00] background as to how this came about? What was the original genesis of this idea?
Lewis: So, ironically enough, Minecraft figures in it. When ChatGPT first came on the scene last year, one of the early papers that was using it was a thing called Voyager, which was, I think, some folks from NVIDIA and some folks from a university in the US. Essentially they were using agents in Minecraft, but using GPT, so originally GPT 3.5 and then GPT 4, to control the players, if you like, in Minecraft. But what was unique is that these agents could have skills associated with them. So they would have like a daily rota of activities that they would have to cycle around, and they’d be able to plan tasks like: I want to cut down a tree because that’s on my rota of activities, but I don’t have the skills to cut down a tree.
[00:10:00] So it would use the generative AI, so GPT, to essentially grab some code and put that back into its own code base. And again, the way they were doing that was quite innovative as well, using a thing called a vector database, where you can essentially have a text description of an activity and then store it in the vector database.
So rather than it being like a normal database that you would just search by keywords or terms, this is based on similarities; embeddings, again, is the phrase used to calculate these vectors.
Matt: , so the vectors being the, the distance between different things in the database?
Lewis: The words, essentially. So, , all sentences and things, like, as the words are hung together, , you can calculate, , the relationships between the words, , and then store those relationships in the database. So those are the vectors. So that if you then use completely different words, , you can find out what the closest sentence is that would match it.
And this underpins. Yet another one of the big hypes of this year, which is this whole idea of [00:11:00] RAG. Now, we don’t need to go down that, but RAG is Retrieval Augmented Generation. So it’s this idea that you can do a search, in a vector database and get various different results back and then generate text and context around it, , so that it appears to be, , like a human sentence that’s returned.
But rather than get too embedded in RAG and search terminology, back to the Minecraft thing: this idea of having different agents which were able to use GPT to essentially enhance their own code base. So pull down the Python script required to be able to build an axe, to cut down a tree, and then store that in the database as a tool for cutting, say.
Now then, another agent also has access to exactly the same database and it may need to kill a spider. So it’s going to try and look through its database of all of the skills it’s got, and it could say, well, that’s the closest match, because I don’t have many skills, so I’m going to try and see if I can use this axe to kill the spider.
And [00:12:00] it would either be successful or it would fail. And if it was successful, then it would put that back into the database to say that the skill is now applicable for that activity as well. Intelligent agents or autonomous agents have been a dream for, for many, many years, but this is the year that I’ve definitely seen that , these things are real and can individually, , empower themselves, but really , the main power is when they collectively work together.
They’re used for task planning, for execution, and also as a way of building in guardrails towards the end of a process. So you would have someone who plans, someone who actually does, and then someone who checks the work of those previous stages, to provide guardrails for all kinds of things, bias particularly, and to ensure that the result actually matches what the original target was.
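(A toy sketch of the skill-lookup step described above, in Python. This is not the Voyager code: TF-IDF vectors and scikit-learn’s cosine similarity stand in here for LLM embeddings and a real vector database, and the skill descriptions are invented, but the nearest-match idea is the same.)

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Skill library: each stored skill has a plain-text description.
skills = {
    "chop_tree": "Craft an axe and use it to cut down a tree to gather wood",
    "mine_stone": "Use a pickaxe to mine stone blocks from the ground",
    "fight_spider": "Attack a hostile spider with whatever weapon is to hand",
}

# Vectorise the descriptions (a real system would store LLM embeddings instead).
vectorizer = TfidfVectorizer().fit(skills.values())
skill_vectors = vectorizer.transform(skills.values())

def closest_skill(task: str) -> str:
    """Return the stored skill whose description best matches the new task."""
    scores = cosine_similarity(vectorizer.transform([task]), skill_vectors)[0]
    return list(skills)[scores.argmax()]

print(closest_skill("I need wood, so fell that oak tree"))  # -> chop_tree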
Matt: Interesting. So essentially these are code things that are learning as they go through. They are able to be able to spot what they might be able to do and then they can work out if they can do it or not by testing. And then if they [00:13:00] can’t, potentially they can go away and find a code library to be able to fill the gap of the skill that they don’t have.
Lewis: Exactly, yes. So, moving on from the Minecraft example, there’s been various frameworks, so LangChain kind of came on the scene as a way of being able to, have sort of sequences of activity using a large language model to then be able to query a database or maybe pull down a website or all kinds of other , tasks.
But then, superseding that (although it does actually use LangChain as well), there’s a particular framework from Microsoft called AutoGen, and that is currently the best framework for these autonomous agents. Basically, you can create an agent, give it a specific persona, if you like: you’re an intelligent agent that can understand French, or anything else that you can use large language models for. And you can build a number of these agents, which can then have a group chat facility and interact with each other, not through some API, but through [00:14:00] English, or any language for that matter.
Literally text-based communication between different agents.
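(To make that concrete, here is a minimal sketch of wiring up an AutoGen group chat, using the library’s documented building blocks from pip install pyautogen. The personas, model and task are illustrative rather than Lewis’s actual configuration, and the API key is a placeholder.)

import autogen

# Placeholder config - swap in your own model and key.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

planner = autogen.AssistantAgent(
    name="business_analyst",
    system_message="You break a requirement into tasks and assign each task to an agent.",
    llm_config=llm_config,
)
engineer = autogen.AssistantAgent(
    name="python_engineer",
    system_message="You write Python code for the tasks assigned to you.",
    llm_config=llm_config,
)
# The user proxy can execute generated code, optionally inside Docker.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "workspace", "use_docker": True},
)

# All agents share one conversation; each decides when to speak.
groupchat = autogen.GroupChat(agents=[user_proxy, planner, engineer], messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Create linked customers and transactions tables with synthetic data.")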
Matt: , so you then had a piece of work you were doing for a client and you had a particular challenge around generating synthetic data for the client?
Lewis: Yeah, I mean, there’s a little bit more context. The client has had cutbacks, as many organizations have had in recent months, and so the team that I was working with was let go. Not long after that, we also had a celebration of all of the work that we’d been doing. So, a little bit tongue in cheek, I built a bunch of agents, but I named them individually after members of the team that had departed.
So we had a concept of a delivery lead, a concept of the BA, a concept of a Python engineer and a concept of a DevOps engineer, and I’d labelled them as, you know, the team that had gone. One of the things the team had been working on was synthetic data. So the task I gave all of these [00:15:00] autonomous agents was basically to create a couple of tables, a customers table and a transactions table, and populate them with realistic synthetic data, but also link the tables, so that if you’ve got a customer ID in the transactions table, it had to link to a customer in the customers table.
So it was kind of non-trivial; I mean, it wasn’t just generating random data sets or anything like that. And to my surprise and slight horror, it worked incredibly well. The planner agent, which I’d labelled as a business analyst, essentially set out each of the tasks required to do it and also which agent was going to be responsible for it.
So it would say that engineer Joe had to write the Python code using a particular library, Faker in this case, to generate the tables. We then had a persona of a data scientist who was going to analyze the output of the synthetic data and ensure that it fit the business rules that we’d established, which is that, you know, in [00:16:00] the transactions table they had to marry back up to the customers table. And then, what’s really clever as well is the execution environment.
It can actually run Docker containers in the background and populate these Docker containers with all of the libraries necessary to execute the code. It went ahead and generated the two tables; the data scientist read through those tables and ensured that actual referential integrity was there.
And then essentially sign-off was given that these tables had generated exactly what was required. So, by trying to celebrate the work of the team that had gone, I actually proved that we didn’t need them. Which was not the intent, but...
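(As a rough illustration of the output being described, not the agents’ generated code, here is what a hand-written version of that task might look like in Python, assuming the library mentioned is Faker; the columns and table sizes are made up.)

import random
import pandas as pd
from faker import Faker

fake = Faker()

# Customers table: 100 synthetic people.
customers = pd.DataFrame(
    [{"customer_id": i, "name": fake.name(), "email": fake.email()} for i in range(1, 101)]
)

# Transactions table: 1,000 rows, each pointing at a real customer_id.
customer_ids = customers["customer_id"].tolist()
transactions = pd.DataFrame(
    [
        {
            "transaction_id": i,
            "customer_id": random.choice(customer_ids),  # referential integrity by construction
            "amount": round(random.uniform(5, 500), 2),
            "date": fake.date_between(start_date="-1y", end_date="today"),
        }
        for i in range(1, 1001)
    ]
)

# The "data scientist" check: every transaction must join back to a customer.
assert transactions["customer_id"].isin(customers["customer_id"]).all()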
Matt: I have to say, just as an aside, that the line “it was a little bit tongue in cheek” sounds like the perfect start of some sort of sci-fi disaster movie: that’s how it all started. But creating data like that, I’m presuming this is used for testing, or to be able to...
Lewis: Yeah, I mean, I think [00:17:00] synthetic data would be a podcast on its own in terms of the amount of use cases for it and how incredibly useful it can be. Because if you think of data sets which have been anonymized, there are many, many cases where it’s actually quite easy to de-anonymize them, either just because they haven’t been thoroughly anonymized, or because you can use extra data sets from outside to de-anonymize them.
But synthetic data, by its nature, is fully made up. However, there are very complex rules in terms of how the data is produced, so that, statistically, it is almost impossible to distinguish it from a real data set. And this opens up huge amounts of potential for what they call “code low, deploy high” in terms of security.
So you can code in relatively low-security environments using huge amounts of synthetic, highly realistic data, and then trust your deployment pipelines so that your code goes through to a highly secure environment which very few people have access to, [00:18:00] and that’s the environment where the code will execute against the real data. But you have full confidence that the code is going to function because you’ve been able to test it with highly accurate synthetic data. Going back to the team, they did various different statistical techniques for generating this data, by analyzing the source systems and calculating. But then another approach is to actually use generative techniques for generating an entire row at a time, by basically giving it samples of what a data set could look like, and then using generative AI to just continue that pattern down,
and then bringing in the statistical analysis of the data sets to ensure that it actually conforms to the original data set in the first place.
Chris: This is all extremely interesting. I think we’ve talked about, obviously this year, we’ve talked about AI and generative AI a lot. And I do remember thinking to myself in the early days of the tools that were coming out, these are all kind of first order things.
And we’d see more interesting applications once we got to those second-order
processes where you’re using the [00:19:00] AI to then deal with the output of previous AI. Are we doing this so that we can make it more like our own interaction, so we can understand it more easily? Because the thing about a gen-AI tool is that it hasn’t got any particular skill, but you can ask it to mimic a certain thing; you can say to a prompt, act like a business analyst and go through this set of requirements, or think like an estate agent, how would you market this building, or whatever it might be. But when you do that, what you’re really doing is constraining it down to a certain set of ideas.
Are we actually just saying to these different tools: right, you do this thing, so forget everything else you know and just do that; and then you do this thing, so forget everything else you know and just do that? We’re not lifting them up, not training them up to a standard.
Actually what we’re doing is carving away all the other parts and leaving [00:20:00] what’s left, just so that we can see the interactions.
Lewis: I think it’s the interactions that I’d like to sort of focus on here because, um, that’s the really crucial thing about large language models is that the innate knowledge is the language.
, they happen to have consumed the entirety of Wikipedia, but we don’t trust them to give us honest answers because of the hallucination problem and things. But what they do have is innate knowledge of the language. And so therefore that allows you to build interactions.
Using just simply language. So, previously, if you wanted two systems to talk together, you’d have to have a very watertight contract through an API or some mechanism like that. But I can see, in the not too distant future, you’re going to have large language models which are
using things like the RAG that I mentioned previously. So its knowledge base is an actual knowledge store; it’s not built into the model itself. So there is a capability of understanding what I’m asking for, converting that into an actual request, whether that’s [00:21:00] SQL, whether it’s Cypher in terms of a graph network, or any other retrieval technique, pulling back real information and then processing that, again using language, as a result.
So if, for instance, many organizations invested heavily in APIs so that you could have an enterprise service bus, going back 20 years, in terms of how different departments could communicate effectively with each other. Rather than having information being spread around, I can see a future whereby you simply say: don’t talk to my API, talk to my large language model, and then whatever you’re requesting, it will be able to deal with, process and then respond accordingly, which means that the different kinds of interactions will become almost infinite.
I mean, anything you can ask for, it is able to respond to and then find the appropriate information and return it back. And this doesn’t need a human in the middle, but the advantage is that a human can actually see what is going on with these interactions. But, you know, the interactions will [00:22:00] be frighteningly fast, far, far quicker than we could possibly speak or read or anything like that.
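(A rough sketch of that “talk to my language model, not my API” idea: a natural-language request is turned into SQL against a real store and the rows come back. The llm() function below is a placeholder returning canned SQL, because the real call would go to whichever model you use; the table and data are invented.)

import sqlite3

def llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns canned SQL for this demo."""
    return "SELECT name, email FROM customers WHERE city = 'Leeds'"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, email TEXT, city TEXT)")
conn.execute("INSERT INTO customers VALUES ('Ada Lovelace', 'ada@example.com', 'Leeds')")

request = "Which customers do we have in Leeds?"
sql = llm(f"Write SQLite SQL for this request against customers(name, email, city): {request}")
rows = conn.execute(sql).fetchall()
print(f"{request} -> {rows}")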
Chris: Not maybe as fast as an API designed to do one thing.
Lewis: Uh, yeah, maybe. I mean, they’re, they’re getting faster all the time. I mean, like the APIs run on normal hardware, whereas LLMs are having dedicated hardware built for them. So Chips designed specifically for this use case?
Chris: Yes, indeed they are. Yeah, I, I’d say I, I agree.
It’s a very interesting set of experiments, and I think it’s all about where language fits into those things, isn’t it? Because, you know, as you say, the LLM doesn’t know anything. It doesn’t have any knowledge. It doesn’t have anything at all other than a statistical likelihood that this particular word will fit in this particular place.
So if we throw enough computing power at it, we are definitely getting some very smart-seeming answers out of it. The [00:23:00] question is where we use it best; maybe we’re using it to learn about the actual tasks that we’re setting it. You know what I mean? Because we can see the interactions, we can understand the interactions.
It gives us a better chance of actually understanding what question we’ve actually asked, rather than the question we think we’ve asked, of a task or a process.
Matt: I’m interested in the way that language between bits of technology works, because there’s an ambiguity in human language
which is part of its wonder. You know, humour is often based on the fact that words are ambiguous; there’s inherent wonder in the fact that the way in which we interact isn’t absolutely logically tied down at all points. Lawyers wouldn’t have a job if it were, let’s be honest. But there’s a difficulty there. I was doing a bit of writing this morning about how humans
might have some barriers to adopting the use of generative [00:24:00] AI, if we are to think of them as assistants to us, because humans have barriers to adopting collaboration with other people. And there’s lots of work that’s been done around collaboration in teams. There’s a particular thing from a guy called Morten Hansen, who’s a business professor in the US, originally from Norway, I think.
He talks about there being four systemic barriers to collaboration in organizations, and the one of those four that I find the most interesting is a thing called the transfer barrier. The transfer barrier is where people find it difficult, or often even actually impossible, to collaborate with one another because they have different language. And that’s not that one speaks French and one speaks German, although obviously that would be a big inhibitor to people being able to work together.
It’s more that we have different lexicons, we have different vocabularies; different professions have different sets of language that we use. And the example, I’ve probably used this on the show before, [00:25:00] when I went to work in the housing industry a few years ago, I had about three months where I was utterly confused, because I kept hearing the word developer and I thought people cutting code and everybody around me was thinking people with bricks.
Because developer in housing means people who build physical items. Very, very confused. I had to rewire my brain to stop subconsciously hearing that. I’m interested, though: if we create agents on the basis of large language models and say you are a, I don’t know, a product owner, and you are a project manager, and you are a developer,
might we start to see some of those confusions coming in, where the different agents, because they’re working within the constraints of their individual professions, will start to have some of those barriers to interacting effectively because they’ll interpret words in the wrong way?
Lewis: These are very nascent technologies right at this point in time. But the size of the context windows that are able to be [00:26:00] processed on every iteration means that you can throw an awful lot of context into every interaction.
So, by that I mean, what is it, Claude 2.1 has got a 200,000-token context window, which means you can put an entire PhD thesis into it every single time you ask it a question. So that’s a lot of context to be throwing in every single time. So I do believe that prompt engineering will disappear as quickly as it’s come about.
But right now, prompt engineering is essentially, before it was called prompt engineering, it was few-shot examples and things, where you give it a few examples and then it would be able to continue. But it’s not really “few” anymore, because you’re giving it up to 200,000 tokens every single time,
to give it that context so it can come up with an idea, so that it knows it’s a developer that knows Java as opposed to a developer that looks for building sites and other opportunities. But another kind of frightening thought is that language models could be used to... I mean, English
isn’t [00:27:00] exactly the best or the finest communication technique, despite the fact that we’ve got Shakespeare and Griswold and all the rest of that kind of thing. The language models themselves will develop, over time, far more efficient ways of being able to explain exactly what it is.
And I’m not talking binary streams, but there will be an evolution of language models into a language that we cannot comprehend, because we don’t have 200,000-token context windows or 2-million-token context windows, and won’t have an ability to understand, you know, 64-bit encoded tokens as they get thrown around.
But the amount of preciseness that will be in future versions of language models... I mean, I think that’s another interesting philosophical thing to think about: the evolution of language is going to be taken away from us.
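(For anyone unfamiliar with the term, “few-shot” prompting simply means stuffing worked examples into the context so the model continues the pattern; with 200,000-token windows that context can be very large. A toy illustration in Python, with made-up examples and role:)

# Build a few-shot prompt: the model sees worked examples, then continues the pattern.
examples = [
    ("Users forget their passwords",
     "As a user, I want to reset my password so that I can regain access to my account."),
    ("Monthly reports are slow to load",
     "As an analyst, I want reports to load in under five seconds so that I can work efficiently."),
]
new_requirement = "Exported spreadsheets lose their formatting"

prompt_lines = ["You are a business analyst. Turn each requirement into a user story."]
for requirement, story in examples:                  # the "shots"
    prompt_lines.append(f"Requirement: {requirement}\nUser story: {story}")
prompt_lines.append(f"Requirement: {new_requirement}\nUser story:")

prompt = "\n\n".join(prompt_lines)
print(prompt)  # this string, plus any extra context, is what gets sent to the model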
Matt: And increasingly the vocabulary that will be developed to be able to be more and more precise I guess will be interesting within that
Lewis: German like words
Matt: Long [00:28:00]
Chris: You realize this context problem that you describe is kind of, it’s been around for a long time, right?
All the old speech recognition programs would always have a problem if you said “court”, and it wouldn’t know whether you meant caught, like caught a ball, or court as in where you go to face the beak about your speeding fine and all that kind of thing. But the context issue has been attacked very, very well, as Lewis has said.
But it seems like, and it was kind of one of those things that says, you know, how can we make computers more intelligent? The more intelligent they are, the more able they are to understand context. Whereas you spent three months not understanding that a developer could be more than one thing.
Matt: Yeah, no, absolutely.
No, this is
Chris: my failing. I think this is a, you know
Matt: But it was the subconscious thing, not the computer’s problem. No, no, but it was the subconscious. It’s the heuristics that we use to be able to be the most super of supercomputers, which is people. To describe them as bugs? They’re not. They’re heuristic models for the way [00:29:00] in which we’re able to process far greater amounts of information than otherwise we’d be able to do.
And it just takes a bit of reprogramming, and unfortunately it took me quite some time because my brain’s less plastic than it used to be, and that’s the way it goes when you get older. So Lewis, if you look at the communications that are going on between these agents at the moment, within this experimental thing that you’ve created, by looking at them, do they make sense?
Lewis: Yeah, I mean, it literally looks like a transcript of a Teams, as in a Microsoft Teams conversation, or Slack. Essentially what you’re looking at is literally the agents spewing out English, very interpretable as to, you know, what they’re processing, what they’re doing, and then the next agent in line picking up that and continuing with it.
Matt: And in terms of the language that’s used (this isn’t a sarcastic question, I’m just fascinated by it), are they polite to each other, or is it just all very matter of fact?
Lewis: Oh, very [00:30:00] polite. And in fact, in certain circumstances, if you’re not using GPT-4 but GPT-3.5, because it’s slightly cheaper to use, it gets into loops where they thank each other continuously.
And you have to, like, actually break the program because they are. , literally caught in that loop.
Matt: Oh, that’s brilliant. I love this idea that we will be safe from the coming supercomputer menace because they’re all too polite to each other. It’ll be like the road system in Guernsey, where there’s a thing called filter in turn, and you can have people reaching what would in any other place be a roundabout, and they sit there for days waiting for each other to go first.
It’s brilliant. And that sort of interpretability, at the moment that’s something where you think language will be used, but you could see how it would just devolve away from stuff that we understand.
Lewis: Yeah, I mean, even talking about language models themselves is now, you know... So, last week, with the release of Gemini as a multimodal model.
So, LLMs are out, MMMs are in. [00:31:00] It’s not just language anymore, it’s actual visual representations, it’s audio; all kinds of different forms of communication are being bundled together into these multimodal models. And to be honest, I have no idea what’s next. I’d like to claim, you know, some kind of insight into it, but the pace of change is absolutely astonishing at the moment.
Matt: Yeah, absolutely. And then if they’re sort of left to their own devices, because presumably you kind of trigger this and then they start working away... it starts working away, let’s not anthropomorphize it, it starts working away. And the sort of approach that it takes, is it recognizable? You know, is it an iterative, agile-type approach that it takes?
Is it a command and control approach that it takes?
Lewis: Again, there are various different constructs that you can put these agents into. So you can just, you know, simply have two agents, where one is demanding and one is responding, continuously. The more [00:32:00] interesting ones, from my perspective, are literally called group chat, where you have different personas that you set up, and the central message bus, if you like, is accessible to all, and they will decide if they need to respond at this point in time, given their own role and given the context of what has previously gone, and they will literally either respond or not in those group chat scenarios.
Matt: That’s amazing. And having done this so far, , has any of it left you concerned or worried or?
Lewis: No, not concerned, because again, I obviously see the one time it works, but also the 200 times it didn’t. I’m still seeing that at this stage. But I’m really interested in the implications moving above just coding, if you like.
So as I say, I’ve taken on this idea of business analysts and solution architects and a whole [00:33:00] chain of knowledge workers that can interpret a requirement, can determine that things like NFRs are missing. So it could go back to users and interactively request more information until it believes a requirement is fully known.
It can then pass that on to the next item in the chain, which will then break down all of the tasks that are required and propose a potential solution architecture. Where it gets really interesting is if you have encoded your solution architecture in such a way that it is machine readable.
So there are various different YAML tools that can be used to generate solution architecture documents. These multimodal models can actually interpret the images of the solution architecture, but I’m still at the stage before that, which is that we will have a library of solution architectures in some kind of machine-readable format such as YAML, and you’ll be able to take that and modify it and check it back in. So you can do an as-is and a to-be architecture that is auto-generated based on all the requirements which have been gathered up [00:34:00] front. And so I’m literally... I mean, some of my background is, I guess, sort of sleuthing out to chill and then distributed computing before that.
So I’m desperately trying to work myself out of the job.
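(A sketch of the machine-readable architecture idea: if the as-is and to-be architectures live in YAML, an agent, a human or a script can load and diff them. The schema below is invented purely for illustration; no standard format is implied.)

import yaml  # pip install pyyaml

AS_IS = """
services:
  orders:  {runtime: vm, database: oracle}
  billing: {runtime: vm, database: oracle}
"""

TO_BE = """
services:
  orders:  {runtime: kubernetes, database: postgres}
  billing: {runtime: vm, database: oracle}
"""

as_is = yaml.safe_load(AS_IS)["services"]
to_be = yaml.safe_load(TO_BE)["services"]

# Report every service whose definition changes between the two architectures.
for name in sorted(set(as_is) | set(to_be)):
    if as_is.get(name) != to_be.get(name):
        print(f"{name}: {as_is.get(name)} -> {to_be.get(name)}")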
Matt: I’m also fascinated by the idea of being able to use it from a management theory kind of perspective: being able to say, well, what if you make that person a bit passive-aggressive, or what if you make that actor somewhat unsure about themselves because they’ve had a bad week at home, and then how does that start?
You know, you could do some really interesting things there around kind of attitudinal modelling within it, just to be able to see the impact, as if you were thinking about making changes to team structures and whatever.
Lewis: I hadn’t considered that from a kind of , psychology perspective almost, isn’t it?
And trying to understand how, how these things would interact. , I’m not sure that the different types of psychologies that people have would be encoded within the model itself to, , necessarily come out with something really unique. So, by that I mean, , The fact that it understands what a requirement is [00:35:00] and then it can produce tasks to, provide a solution to that.
I don’t think there’s necessarily enough in there that, because this particular robot’s having a bad day or whatever, it’s going to produce a particularly bad design or that kind of thing. But I mean, having some kind of element of randomness in there... this leads to the big philosophical question: is it actually an AI in the general sense, is it really intelligent?
And my firm belief is no, absolutely not, you know, and we’re nowhere near the, , AGI despite, I think it’s Elon Musk saying it’s three years away. , I still, , despite the fact that every week I am constantly surprised, , I still don’t believe we’re anywhere near that. , I, do think where we are though is, , absolutely being able to automate the mundane.
So every aspect of human life that we think of as, you know, mundane and maybe a bit boring, that is, you know, going to be wiped away by, um, the ability to interpret, task, plan, and then execute.
Matt: [00:36:00] Fantastic. Thank you. What a way to be able to end this year’s cavalcade of podcast marvels. I was going to try to alliterate and fail miserably. That’s where you need a GPT, [00:37:00] isn’t it? You can come up with alliteration like it’s coming out of your bottom. Lewis, thank you very much for joining us this week.
Have you got an exciting few weeks ahead?
Lewis: I think there’s, as Chris said at the very beginning, this is the last week of actually getting work done, and then next week I think there may be quite a few lunches.
Matt: Yes, we’re into lunch season. , How about you Mr West?
Chris: Well, I’ve got to go to that London on Wednesday, which will be interesting, so I’ve got quite one or two interesting conversations to be had, bit of a catch up with some people I’ve not spoken to for a while, including , you know, some people who’ve been on the podcast in the past.
So yeah, that’ll be nice. It’s, we’ve got a little Juma festive gathering, you know, in the office on Thursday, which will also be nice. And yes, then a whole bunch of things to close down and finish off and, and submit and get done before we get to next week when there will be [00:38:00] things going on, but hopefully it’ll be a little bit quieter.
Matt: Well, it’s actually the Equal Experts London party this Wednesday, but sadly I can’t make it, because happily it’s my son’s 13th birthday, and priorities. But I will be seeing some people in town on Wednesday. Tuesday, tomorrow evening, although probably now as you listen to this if you listen to it when it’s come out, is TBD, Paul Armstrong’s mini conference taking place over in North Greenwich, so we’ll be going over for that.
The last one was fabulous and it included the author of the book Wasteland, which is probably the best book I’ve read this year, all about the way in which our rubbish gets dealt with; a fascinating, terrifying book. So we’ll have to see who Paul has in store for us. And then next Monday it’s the WB40 Signal group annual Christmas shindig, which Cy Cornwall has marvellously been able to do all the organization for, so [00:39:00] there will be some of us meeting up. If you’re not a member of the Signal group you are welcome to join: drop us a line on LinkedIn, or we’ve still got an account on that X thing but we don’t really use it anymore, or if you go to the website you’ll be able to get details on how to get in touch with us and you can be added to the Signal group. About one in five people survive the initial fire hydrant and stay on, I think, is about the batting average there. So that’s good fun.
And then it’s into the Christmas festivities season. So, various things. And then we’re busy planning what is happening with the show in 2024. I’ve got the entire month of January already sorted, which doesn’t happen very often, does it? I know, it’s great. So we’ve got some fantastic guests, and more to come.
So, with that, wish you all a very happy end-of-year break, depending on how you choose to celebrate it, and we will be back on the 8th of January, into our eighth year, my goodness, of this [00:40:00] mad, crazy thing that is WB40. So until then have a good break, and we’ll see you in 2024, which is technically the future.

Dec 5, 2023 • 52min
Ask 2024
On this, the penultimate show of 2023, Chris and Matt answer audience questions about the year ahead…
Automatically created transcript…
Matt: At the beginning of 2023, on show 248 no less if you want to go back and listen to it, we did an episode where we thought about what might be the things that a CIO or a CTO might be being asked about in the year ahead.
So we’re going to start with a question from Mr. Chris King, who in his inimitable style asks: “Review your predictions for 2023 and own your poor judgments, you cowards.” Not so much a question as a statement. We’ve all seen those ones when people stand up at the bit where they’re supposed to be asking questions at events.
Anyway, Chris, we are going to go step by step through what we said in that first show at the beginning of the year that we thought would be important topics for technology management this year, and see how well we did. First up, artificial intelligence, and we said that you should probably start really playing with it with some seriousness. How do you think we did on that, Chris?
Chris: Well, I think we did pretty well in as much as everything is now, , AI, crazy, isn’t it? I’m not entirely sure we knew that LLMs were about to be launched upon the world and create such, , havoc. But, do you know what? I’ll take that as a win, Matt. You know, I think we were right.
Matt: Yep, I think so. The next one, hybrid working, and our conclusion was that this was something you just needed to work out how to do,
because if you hadn’t by now, then you’re in real trouble. Interesting one, this, with continuing calls from large organisations to return to the office. But I saw some analysis from the US market that said return to office has now pretty much flatlined. So as much returning to the office as is going to happen has happened.
What do you make of it?
Chris: I would agree, you know, it’s kind of, you know the pendulum’s going to swing back. But I’m going to mix my metaphors horribly, as usual. The genie was out of the bottle after COVID, wasn’t it? A lot of the things that stopped us working from home or working remotely, because it just wouldn’t work, actually did work to a point, and people found ways around the problems.
And I always said that it was like the elastic would pull back, but the elastic has been overstretched. It’s never going to go back to its original point, but it always feels like it’s going back, because as you head towards what was the status quo, everybody then decides it’s going to go exactly back the way it came from.
Well, it never does. So I think Hooke’s law, if that’s the thing that I remember from my physics in school, I think the Hooke’s law thing has been tested, and the hybrid thing is really important because, guess what? We aren’t all in the office all the time.
More of us are at home or working remotely. And more of us are working from locations other than a head office. So finding ways to combine those people who are in the office and people who aren’t in the office, or aren’t in, you know, a dedicated company office, continues to be the challenge. And I don’t think we’re far wrong with that, Matt.
The next one that we were talking about was the looming recession, and how do you recession-proof yourselves? So, what do you think?
Matt: It feels like we’ve been waiting for the recession all year, and whilst it hasn’t arrived, it certainly doesn’t feel like it’s gone away. And I’ve seen from a number of points, and particularly talking to freelance people, this year has been absolutely horrible.
Actually, just in the last few days, I’ve seen people who were very successful freelance consultants, thought leaders in their world and all that, talking about how they’ve run out of savings. So whilst technically two consecutive quarters of contraction of the economy might not have been met, it does not feel like a strong and growing economy in which we’re operating in the UK at the moment.
It feels like nobody’s making decisions. In the public sector, I’ve seen a lot of retrenchment in spending from departments that we’ve been working with, and I think that’s a common thing. And it also feels like there’s a kind of waiting-for-the-new-government thing.
And I don’t think that’s just public sector. I think in the private sector as well, there’s that kind of: something’s got to change, but we don’t know what it is, so we’ll just wait for a bit, shall we? And it’s a very, very hard market to be in.
Chris: I’d definitely reflect that. I think, at the start of the year, one of my strategic planning axioms was to say, look, everybody’s going to want to do more with less.
Everybody’s going to need to save money, be more efficient. I mean, of course, we all try to do that all the time, right? But this is going to be really important because I thought a recession was almost certain. , as you say, technically it hasn’t happened, but I think the way I would describe the business, , outlook this year and the market generally has been soft.
So projects have started, or they’ve been mooted and then the budget has been approved in organisations and then it’s been unapproved. And I’ve heard from quite a lot of people who’ve applied for jobs this year and got the job, or got through to the next stage or whatever, and then everything goes quiet, and then it turns out that the job has gone away.
And that’s really frustrating, as you know, if you’re a job seeker, especially if you’ve kind of mentally checked out of wherever you are, or maybe you don’t have a job; it’s really terrible. So I’ve seen a lot of that, and I’ve seen a lot of people blaming recruitment teams or whatever. But guess what?
Recruiters don’t get paid until they place somebody. And for recruiters to have to go through that process as well, to interview a whole bunch of people, put people forward, and then suddenly the job goes away, it’s just as annoying for them. So yeah, the market has been soft. The ability of organizations to make decisions has been kind of questionable.
And I really can’t see that changing in the short term, simply because, as you say, there are some unknowns going on in terms of the political landscape. So yeah, I think we’ve still got that: how do we keep costs down, how do we change our business models, how do we just act faster and leaner and more effectively?
I think that that’s just going to be a continuing pressure.
Matt: So the next one that we had was about the metaverse. And we concluded at the beginning of the year that 2023 was probably going to be the year when the metaverse didn’t really do very much. That feels like a reasonably prescient prediction.
Chris: The what-a-verse, was it? What was it? I mean, you bought a headset, didn’t you? And they are really cool, right? The, the, what was it called?
Matt: The, um, Oculus. Well no, it’s the, oh dear, see I can’t even remember the name of it. It’s the Oculus, but it’s not the Oculus, it’s the Meta. Because they rebranded it.
Chris: Whatever it is. But do you know what? It’s really good, isn’t it?
Matt: It’s great.
How much have you used it in the last three months?
Every so often I will put power into it to be able to update the software on it. And then I’ll put it on my head for about half an hour and then I’ll take it off again.
Although, actually, interestingly, one of my coffees in the last week, 131 and counting, , was with somebody who works for Mural. And he was talking about some of the work that they’re doing with Meta around platforms for collaboration and collaborative working and what you would do on a whiteboard, but in virtual space.
And really interestingly, what he was talking about addressed one of my big criticisms of the whole Meta collaborative workspace in virtual reality, which is that it’s just like reality. And what’s the point in having just straight reality when you don’t have the constraints of things like gravity? From what he was describing that they’re doing at the moment, it sounds like some of that is being worked on and bankrolled. The question will be whether enough can continue to be put into that for it to sustain without any revenue from it whatsoever.
Because I can’t see revenue coming in 2024 either.
Chris: Yeah, it’s a real gamble, all of that, because people’s habits are hard to change. And even though the technology is really cool, it’s not compelling enough in terms of what you can do with it. Whether that changes in the next 12 to 18 months is another matter.
I don’t think so, but I think we were right last year, Matt. Indeed. And the last one was, we were talking about China, Matt. We said that that should be on our risk register. I’m trying to remember why we thought China was going to be on our risk register, but I guess through either use of intellectual property or, you know, the expansion of the Alibabas and the Baidus and those kinds of organizations.
Matt: Yeah, and I mean the other big thing was the way in which the U.S. banned the use of American technology by Chinese firms, so people like Huawei, whose latest smartphone, the P60 or something I was reading, has satellite calling built in. You know, Huawei make incredible mobile phones.
Unfortunately, they’re not allowed to use Google software, so therefore they have incredible phones with no software, and that’s their problem at the moment. I think China is as much of a risk and a threat and a challenge, and is this massive manufacturing base of all sorts of stuff.
It’s interesting how you’re increasingly seeing things like Chinese-brand vehicles on the roads. It’s the way in which Chinese stuff is increasingly creeping into our day-to-day life, as opposed to Chinese-manufactured stuff with Western brands on it. And I think really in the last year, the combination of the continuing conflict in Ukraine and then what’s been happening in the Middle East has, to some extent, diverted attention away from China, but it’s still a big lumbering threat to who knows what,
because everything that we seem to consume is made there these days.
Chris: Yeah, that’s right. You know, as you say, world geopolitics has changed, hasn’t it? So we probably missed out on that one, in as much as it wasn’t the big issue that we thought it might be. But as you say, that may just be that other stars have shone brighter this year.
Matt: Right, so having hopefully shown some credentials in not making complete arses of ourselves for the year that has gone, let’s have a think about the year ahead and maybe some of the things that might be happening there. With, obviously, the big illuminated caveat that past performance is not an indication of future performance.
So the first question we’ve got comes from a friend of the show, Lisa Riemers. And it is this: I have heard some horror stories of people using LLMs to fill out job applications, which look great at a distance, but lack the specificity needed: real examples, or, well, any sort of reassurance that it’s not a pack of lies.
But with applicant tracking systems also auto-rejecting things without the keywords, how can we fix the job application process?
Chris: Well, okay. You know, this comes pretty much straight to that point that you often make, Matt, about us heading towards a time when we’ve got AI writing stuff for AI to read and there’s nobody in the middle of it.
But I think this does hit another point. And again, I’ve just talked about recruitment: a lot of recruitment, especially the kind of cheap and nasty recruitment, does use tools, applicant tracking systems, to just whizz through keywords and figure out, you know, which application should be surfaced to the top.
And if you are using pretty dumb systems to do that, then people will game it. They’ll know that that happens and they’ll just game it. So it’s kind of not surprising that people would then use tools to make that easier. The question we’ve got to ask ourselves is... and I would like to think that our recruitment team is a better recruitment team,
focused on outcomes more than just lobbing CVs at the wall until one sticks. And I would hope that what that will do is drive out the bad recruitment and the bad recruiters, and the people who don’t add any value beyond having a portal that people can apply to, and the real recruiters that understand the market, that can give advice, that can really help to get the right person to fill a role, will actually maybe be more valued, right?
So, to fix the job application process, we’ve just got to value the job, right? If you don’t value the role enough to try and put the right person into it, and if you think of it just as filling a gap, I think that’s part of the problem. But maybe the way that AI is going, the way that LLMs are going and these tools are heading, they’re actually removing some of those roles that actually could be filled by anybody.
And it’s the, experience and wisdom and knowledge and analytical ability of people that, that will be more valuable. So maybe, yeah, maybe this comes full circle and it, it all works out. Or maybe I’m just a hopeless optimist. What do you think?
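The naive keyword screening Chris describes is, in spirit, about as simple as the sketch below, which is exactly why it is so easy to game. The keywords and the cut-off score are invented for illustration and not taken from any real applicant tracking system.

```python
import re

# Hypothetical role keywords and threshold -- real ATS products vary,
# but the gameability comes from matching words rather than judging substance.
ROLE_KEYWORDS = {"python", "kubernetes", "stakeholder", "agile", "aws"}
MIN_SCORE = 3  # arbitrary cut-off, purely for illustration

def keyword_score(application_text: str) -> int:
    """Count how many of the role's keywords appear anywhere in the text."""
    words = set(re.findall(r"[a-z]+", application_text.lower()))
    return len(ROLE_KEYWORDS & words)

def auto_screen(application_text: str) -> str:
    """Surface or reject purely on keyword count -- no human judgement involved."""
    return "surface" if keyword_score(application_text) >= MIN_SCORE else "reject"

# An LLM-padded application that simply echoes the advert's keywords sails through,
# whether or not any of it is true.
print(auto_screen("Agile Python developer; AWS, Kubernetes, strong stakeholder management."))
```

An applicant armed with an LLM only has to echo the advert back at the screen, which is how both sides of the process end up automated.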
Matt: Possibly the hopeless optimism. I think, for the year ahead and the next few years, the thing is this, and I’ve written about it recently, actually: the idea that we have a world consisting of interactions and transactions. Interactions cannot be scaled in anything other than a linear fashion, because they represent interaction between two people, while transactions can be scaled in an exponential way, because they are purely mathematical constructs, and so on. So the example I’ve always given is that if you go to Tiffany’s to buy an engagement ring, that’s a good example of a very high-touch interaction, and if you buy that same ring on the internet, that turns it into a transaction, and what you lose along the way is a bunch of social and cultural significance. And if you’re getting married and you choose the latter over the former, you will notice the difference in cultural and social value.
At the moment, anyway; it might change over time, but people don’t think about this stuff. What I think we’re starting to see, with the way in which natural language processing and the generation of content through artificial-intelligence-type technologies are going, is actually a simulation of interaction.
Now, the point of a CV was never to be a data transmission protocol. It was part of the dance of recruitment; it was something that somebody gave to somebody else. But what has happened at scale already is that applicant tracking systems, and all the gizmos that have been stuck into them, have removed much of the interaction, and it’s led to a world where people think you don’t need those interactions to be able to recruit.
I can think of one of my clients in particular this year who believed that and thought they could just do their own recruitment with one HR person and LinkedIn. And they have spent months and months and months and months not finding candidates because it is so much more than that. But this isn’t just a question about recruitment for me.
I think that we will see loads of examples where, because it’s easier just to automate the existing process than to actually think about fundamentals, we will end up with AI talking to AI and madness will ensue. And we will see this in all sorts of places. And the year ahead is where it’s going to start.
And I just hope there are enough of us saying ‘this is insane, stop it immediately’ for us not to get into a world where... it won’t be that we have a superintelligence that we can’t control; it will be that we have massive loads of stupid that we can no longer control. And we’ve already seen that in things like financial markets.
We have massive loads of it through high-frequency trading and whatever, and we’ve had a number of instances over the years where stupidity at scale has almost collapsed the entire global economy. And it’s that stuff, which previously was only applied to things that were about numbers but is now going to get applied to things that have traditionally been about interaction between people, that we need to be really wary of in the next few years, and in the next year in particular.
Chris: Cool. So, having knocked that one into the basket, let’s move on to the next one. This is from John Wilshire, and we’re still talking LLMs here, Matt. John has got an example that he’s picked up from the internet about what’s called an ‘SEO heist’, a search engine optimisation heist using AI, where apparently these people exported a sitemap from a competitor’s website, turned all the URLs into article titles, and created a couple of thousand articles from those titles using AI. So a couple of thousand pieces of worthless crap, I suppose, but enough to bend the light of the search engines towards them.
And apparently 18 months later they’ve stolen 3.6 million total traffic, 490,000 monthly; I guess those are visits or visitors or something. So John asks: in the light of LLMs being used to hack the web in that way, what becomes of the concept of the web as we think of it in 2024? At what point is it just bots writing for bots?
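To give a sense of how mechanical the trick John describes is, here is a rough sketch of its first two steps: pulling a competitor’s sitemap and turning the URL slugs into ready-made article titles. The sitemap URL is a placeholder, and the AI generation step is deliberately left as a stub rather than a working spam factory.

```python
import re
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder, not a real target
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def titles_from_sitemap(sitemap_url: str) -> list[str]:
    """Fetch a sitemap and turn each URL's final path segment into an article title."""
    with urllib.request.urlopen(sitemap_url) as response:
        root = ET.fromstring(response.read())
    titles = []
    for loc in root.iter(SITEMAP_NS + "loc"):
        slug = loc.text.rstrip("/").rsplit("/", 1)[-1]              # e.g. "best-cast-iron-pans"
        titles.append(" ".join(re.split(r"[-_]+", slug)).title())   # "Best Cast Iron Pans"
    return titles

def generate_article(title: str) -> str:
    """Stub: in the scheme described, this is where an LLM would churn out the filler."""
    raise NotImplementedError

if __name__ == "__main__":
    for title in titles_from_sitemap(SITEMAP_URL):
        print(title)
```

The point is less the code than how little of it there is: the whole ‘heist’ is a loop over somebody else’s site structure with a text generator bolted onto the end.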
Matt: I think it has been for years, if you look at the whole world of search engine optimisation, and at the particular form of this that annoys me more than anything else. I don’t really look at recipe books these days; I go onto the internet and search for recipes, and there are good, reputable sources of recipes, like BBC Good Food and Jamie Oliver and various other places, where you start with the list of ingredients and then you can work out from that whether it’s worth looking at.
However, the internet is stuffed to the gills with recipes that start off with about 2,000 words of preamble talking about what it was they were doing on the day when they thought of this recipe, and yadda yadda yadda, and it goes on and on, and you have to scroll through pages and pages of absolute guff
before you get to the actual recipe, because those pages of guff are loaded with adverts, and those adverts are where the website gets its money from. And that’s fine, except it means it’s an awful experience, because it is exactly this: it is nonsense content that has been generated possibly by bots, or possibly by people acting with the intelligence of bots, following horribly algorithmic ways of thinking, to create content that is there to enable as much advertising space to be sold as possible. Not in the way that newspapers and magazines used to do it, which was to have compelling content, but just by having loads of swarf that you have to try to pick through.
And the idea that the internet is not already full of this stuff, I think, is quite preposterous. It’s been full of this stuff for as long as people have been selling advertising space at scale on the internet, and it’s got worse and worse. Will bots make it worse? Yes, of course they will.
Would it mean that we’ll get to a point where we can’t actually find anything useful? Possibly. And that’s where it gets interesting, because I think maybe that’s what the search engines need to do: try to help you get to content that isn’t just bot-created swarf. Because if you want bot-created swarf about a subject, you can just go to ChatGPT and ask for it at the time you need it, rather than going to a webpage that’s been published already.
I’ve gone into angry old man mode now.
Chris: You really have. I mean, that is what I do with recipes, actually, because I don’t go to recipe sites anymore. I look in the fridge and say, okay, I’ve got half a bit of garlic and some cheese and some onions, and then I’ll go to the chat and I’ll go, right, this is what I’ve got.
What can I make?
Matt: And it was like a trip to the supermarket.
Chris: It’s quite good at that. It will come up with something. And I’ve got a slow cooker and I’ve got these things, what can I do? I think you’re right. I think that it has been like this for some time, but it’s getting worse. And it just depends on the way you look at the web, doesn’t it?
I think Google has got worse over the years. I think Twitter or whatever you call it now. It’s pretty much unusable these days. I mean, it’s just no fun. So, I think we just end up moving to different places, right?
So, I use Blue Sky a lot more these days. Probably as much as I use Twitter. And that’s quite a nice place to go. It’s still not got quite as much content as Twitter. And the kind of people, , that are on Twitter aren’t always on something like Blue Sky. But enough of the kind of people I want to hear from are.
Right? So that makes it a much more valuable experience now. And the only reason I go to Twitter is because it’s still where a lot of the news organizations, etc., post. But yeah, I think it’s an interesting proposition:
we’re getting computers to write things for other computers to read. But I do think when you do that, it very quickly becomes obvious that there’s very little value in it. And then the people that start to do something slightly different are the ones that get the value from it. My son, for example, who’s at college, uses GPT all the time, and he uses it as a kind of learning assistant, really.
He’s doing his college course and the tutor will say, can you do this: do a presentation about how marketing is used in business, or whatever. And he’ll use GPT. But he’s seen daft people use GPT and essentially just copy and paste the output into a presentation, and then come unstuck when they get asked about it.
He knows that what he’s got to do is use it for a structure, ask it some refining questions, ask it to explain some of the stuff he doesn’t understand, and actually use it to learn, right? And sometimes you’ll come unstuck doing that, but then again sometimes you can come unstuck going to Wikipedia.
Even when you and I were younger, Matt, if you had to find something out, you might look in a book. And sometimes the book would be 30 years old, and when you actually came to talk about it, the knowledge you’d got would be completely out of date, because you’d just read the wrong book.
It’s always been possible to be wrong. You just need to cultivate your judgement so you are less likely to be wrong.
Matt: So, another question, I think sort of in this field as well, from Elias Williams, who asks about the commoditization of software.
He says the barriers to writing software are getting lower, and that’s accelerating. What do we think the impact of this could be on things like software-as-a-service businesses, development shops, consulting and in-house development teams?
Chris: I think it’s a similar answer as well, right, in as much as this has happened a lot in the last few years.
When I were a lad, doing what we used to call programming before we were coders, you would sit down and you would write code from first principles, really. You wouldn’t have much to go on. You might have a manual or something, a ‘book of words’ we used to call it, which is like the function guide of the language you’re using, and it would have little snippets of code to explain how a function works, so you might use that as your starting point and then build from there.
And then, one marvellous day, you got Usenet, and you had newsgroups where programmers would congregate, and you’d say, oh, does anybody know how we do this, or has anybody got... and some people might then supply a snippet of code. And then, a few years ago, things like Stack Overflow came around, and we got a lot more code that was being shared.
A lot more examples that could be used, and we were reusing software more; code libraries started to appear. That’s accelerated coding no end, right? But you’ve also ended up with people not really understanding how the program they’ve written works, even though it does.
They can compile it and sort of put the seal on it and then just run away and hope it never goes wrong, because they don’t actually know how it works. And I think it’s just an extension of that. It’s great that some of the easier bits of coding, which are just a bit of a grind, can be taken away by things like LLMs and advanced software.
But if you asked an LLM or an AI system to write a complex piece of software, you’d really want to look at it before you pass it off, because it’s so risky to just hand it to somebody, or something, that can’t explain how it works. Or you’re going to use it just to accelerate the easy stuff, and then add your own value on top.
So for me, it’s just a continuation. Software gets more and more complex, and we’re barely keeping up with the increasing complexity of software, frankly. So I don’t think it’s going to change that much. What about you?
Matt: I think I would advise you to listen to next week’s show, when we’ve got Lewis Crawford, who’s a colleague of mine and who has been doing some experiments recently building virtual software development teams in AI environments.
They’re just experiments, not a replacement for developers yet, but he’s been setting up agents that act as a team and getting them to build things, and what he’s been finding through doing that is absolutely fascinating. We’ll talk about this more on the show next week.
Chris: Okay, so we’ve got another question from Nick Drage, who asks: what technology will have AI levels of impact on common discourse in 2024, regardless of its actual effectiveness?
Is web 3 due a comeback?
Matt: I think that the technology which will have AI levels of impact on common discourse in 2024 is going to be AI. I think we’ve got another year of this at least. Picking up on his point about ‘regardless of actual effectiveness’: there’s definitely useful stuff that you can do with the AI technologies that are around at the moment,
but there are huge great gaps in them becoming properly operationalized and being put into organizations to do things that will deliver large-scale value. At the moment they’re delivering value to software companies, because they’re being used as a way to sell more software.
If you think about how long it takes for organizations to make change happen, the idea that LLMs and the like are going to accelerate the ability for organizations to implement change is, I think, really quite optimistic, to say the least.
Chris: I would say also, from my point of view: now, common discourse might mean down the pub, and I’m not entirely sure we’re talking about that level, but I think automation is getting to the point where... and again, I think it’s the impact of the vendor, really.
I think Microsoft, that low-code stuff, the Power Platform, is what I kind of call tactical automation, where people can just automate a small bit of their process. The kind of thing that people used to call me up for when I was an IT manager 20 years ago, and they’d say, Chris, it takes me three days to produce this report.
Can’t we do something about it? And I would then spend a couple of weeks working with them, and then they would have a report that they could run in 10 minutes, and it would genuinely save them days and days of work. And that was great. I used to enjoy it; it made their lives much, much easier.
But they had to come to somebody like me to do it, and we still kind of have to do that to an extent. And then there’s the idea that you can take to somebody, you know, here’s a tool...
I read something around, I think it was Austin, about some engineers who would get an email, and they had to take that email and transcribe it into their ERP system, I think it was an SAP system, before they could start work on something. And this was the morning job, and it was the job that they really hated, because it was just a re-keying job. And then they automated it with a fairly simple tool. And that, to me, is a nice little bit of tactical automation.
It would never get automated from the top down; it would never be part of a big digital transformation project. But actually, if you are an end user with a bit of drudgery that you can automate away with a few hours’ work and maybe a few pointers from somebody, suddenly that’s massive, because everybody can do it.
And will that happen this year? I don’t know. But I think it’s on the horizon. That kind of tactical automation, where you’re not trying to solve one massive bottleneck but you are continually shifting smaller bottlenecks... I’ve just got a hunch that’s what’s going to be big.
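As a rough illustration of the kind of tactical re-keying automation Chris is talking about, the sketch below reads unread emails from a mailbox and pushes the basics into an ERP over a web API. The mail server, the ERP endpoint and the field names are all hypothetical stand-ins for the example, not SAP’s or anyone else’s actual interface.

```python
import email
import imaplib
import json
import urllib.request

IMAP_HOST = "imap.example.com"                             # hypothetical mailbox
ERP_ENDPOINT = "https://erp.example.com/api/work-orders"   # hypothetical ERP API

def fetch_unread(user: str, password: str) -> list[email.message.Message]:
    """Return unread messages from the inbox."""
    box = imaplib.IMAP4_SSL(IMAP_HOST)
    box.login(user, password)
    box.select("INBOX")
    _, data = box.search(None, "UNSEEN")
    messages = []
    for num in data[0].split():
        _, msg_data = box.fetch(num, "(RFC822)")
        messages.append(email.message_from_bytes(msg_data[0][1]))
    box.logout()
    return messages

def post_to_erp(msg: email.message.Message) -> None:
    """Re-key the email's basics into the ERP so a human doesn't have to."""
    payload = json.dumps({"summary": msg["Subject"], "requested_by": msg["From"]}).encode()
    request = urllib.request.Request(ERP_ENDPOINT, data=payload,
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)

if __name__ == "__main__":
    for message in fetch_unread("engineer@example.com", "app-password"):
        post_to_erp(message)
```

Nothing in it is clever, which is the point: it is the sort of glue an end user with a few pointers, or a low-code tool, can now put together in an afternoon.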
Matt: It also makes me think, though: which year will it be when bad data becomes the public discourse?
Because a lot of this stuff depends on the idea that your data is in a reasonably good state. How many organizations’ data is in a good enough state to be able to leave automation to machines, and machines alone?
Chris: Well, I don’t know. I think maybe it’s that big transformational, top-down automation that requires data to be in a good state.
Because when you get down to that lower level, that’s where the people have come up with ways to manage bad data, because they deal with it all the time. They get an email from somebody and they go, oh no, they didn’t mean that, they meant this. Right? And they build their own process in for managing that. Because, at the end of the day, we can keep saying ‘we need our data quality sorted’ until we’re blue in the face, but unless there’s a really good incentive for lots of people to act on that, it’s not really going to happen. We kind of have to accept that sometimes we’re just going to have to deal with it. And that’s part of the process: dealing with the fact that we know we’re going to have bad data.
And at least if you understand what can be bad and what’s likely to be bad, you can put systems in to mitigate it.
Matt: I suppose the challenge will be, in a world that feels increasingly like it has become binary, that something is right or wrong, left or right, red or blue, where you have that necessity to understand that the answers might be fuzzy. One of my favourite things at the moment is, when somebody says ‘is it X or is it Y?’, pointing out that it could be both at the same time, and that’s a perfectly valid answer.
I don’t know about the constructs in which some of these technologies are going to be placed: say, making a decision about whether somebody should be granted asylum or not. Let’s automate that, stick some data into it, bish bash bosh and we’re away. Those kinds of decisions are not simple, they’re not binary, and if you’re just relying on a black box that pops out an answer at the end, that’s where it gets worrying for me.
Chris: Yeah, so I’ve got a plan now for my next project, which will be my best selling book, Embrace the Purple. Right, so it’s not blue and it’s not red. Embrace the Purple.
Matt: I’ve got one question that’s sort of out on its own a little bit, but I thought it was interesting to ask. It’s from an anonymous listener. How do you go about shifting a broad company culture to one where people put their cameras on by default in remote or hybrid meetings? Our little part of the company does this by default, but in the wider group, large meetings are seas of avatars.
People want to work flexibly, but then don’t show up when they do. I know there’s a neurodiversity angle to this, but is there also a neurotypical argument to be made for most of the population? Am I wrong to pursue it? Should I just let it go? What do you reckon?
Chris: I think my opinion on this has changed over time. Once upon a time I was beating people up to put their cameras on and saying, come on, just get with it,
we’re all better off if we can see each other. And actually I think I’ve probably toned that down a bit recently, because I do think that everybody’s different, and people do have reasons for not wanting to have their camera on. Sometimes they’re good reasons,
sometimes they’re not, but I think it is down to the individual. I had a couple of conversations with somebody quite recently, somebody I don’t know very well, but they were quite in-depth conversations. And when we started, he said, look, do you mind if we turn our cameras off?
This is because, he said, I’ve been doing this face to face, and when I do it on Teams I find it’s just too distracting to be staring at the other person all the time; when you’re in a physical location, you don’t gawp at the other person all the time. So we did that, on two separate occasions. I thought it was really odd at first, but I could see the value in it, in as much as I was concentrating on the conversation and the words, not trying to read the expression or the innermost thoughts of the person at the other end and looking for their reaction. So, you know, I think maybe let it go. But what about you?
Matt: See, I’ve gone the other way. I would have said each to their own and all that a few years ago. Now I’d say, would you allow somebody to walk into all meetings and stick a paper bag over their head?
No, you wouldn’t. Okay? So I do think there is something more problematic if you’ve got a culture where people default to turning cameras off. And you don’t have to stare at each other all the time. The one thing I would say, though, is that often when you ask people why they don’t want the camera on, it’s because ‘I don’t like looking at myself’.
And I find it really interesting that the tools that we have don’t make it immediately obvious how to turn off your own camera view. It’s quite hard to find that setting. And then there’s some tools, like the one that we’re using to record this, where you can’t turn your own camera off at all. And that’s a bit odd, because in a meeting room, I don’t know if you’ve ever been into a meeting where you’ve been sitting opposite a mirror,
So in a restaurant, or places where there are mirrors on the walls, mirrors on the ceilings, no, no, that’s a different thing. But I cannot stand being in a room where I can see myself in a mirror. And that’s exactly the same thing that’s going on with being on a Teams or a Zoom or a Meet call and seeing your own image in the camera; I completely get that.
Turn your camera self-view off. But no, I think that there’s something going wrong in a culture where people predominantly don’t have their cameras on.
Chris: That’s very interesting. I mean, some would say... personally, it’s a bit of a bonus for me to see myself whilst I’m talking to other people, but I could understand that maybe some other people would think not.
Matt: Interestingly, we did try a tool to record this for a while some years ago which didn’t have camera view, but we found it very difficult to do this without being able to see each other.
And particularly when we’ve got a guest, we’re able, I think, to signal to each other without making big gesticulations when it’s somebody else’s turn to ask a question, or any of that. I don’t think we could do this recording unless we had video.
Chris: Let’s quickly move on to our next one. We’ve got a question from Steve Parks, our favorite marketing agency guru. He’s talking about the fact that it’s election year in the US and the UK, and asks what part the changed and changing landscape will play in shaping the campaigns, coverage and results this time.
Some previous campaigns have seen a new generation of civic tech and entrepreneurs. And what might we see come back to tech from the campaigns this year?
Matt: So, it’s going to be interesting having election campaigns without a meaningful Twitter. I started using Twitter, I think, in 2009, so for the 2010 election it was very nascent, by 2015 it was very much part of it, and in 2019 it went a bit batshit. I look at Twitter a few times a day now, but I’m not posting on there at all, and to not have the political discourse on there is going to be interesting.
I don’t really look at Facebook very much anymore. Quite a lot of my social media use has now migrated to LinkedIn, so it’s going to be interesting to see whether there is politics on LinkedIn for the elections. Politics generally is still reasonably frowned upon on LinkedIn, I think.
But when there’s a general election at play, it will be interesting to see how that one pans out. Politicians do use it, and you do occasionally get incredibly misjudged pieces of social media going out on LinkedIn from politicians. Then there is the emergence of video; for the last few years, video has been the big growth area.
I think politicians have got a fantastic opportunity to make enormous arses of themselves with TikTok in particular and no doubt many of them will.
And the other thing that is going to be really interesting is the way in which so much of the social network stuff has gone more private: into groups on WhatsApp, groups on Signal and so on. Those are not penetrable by traditional advertising mechanisms, so will political parties find ways to socially engineer their messages into private groups? I think those are going to be important places of influence in some instances, although they might just be groups of people who all have the same sorts of views.
I’m thinking of things like WhatsApp street groups and that kind of stuff. I think it’s going to be interesting because of the shift to video in particular, and the collapse of Twitter, and how that will play out for electioneering.
Chris: Yeah, I mean, I think it’s now a part of the political game, isn’t it?
And there are people out there who track the spend by political parties on social media. There are people like WhoTargetsMe, which is an extension you can put on your browser, your Chrome browser, that basically looks to understand which political parties are targeting your feed on Facebook, and they gather lots of data about that.
When we had the by-election here in Tamworth a few weeks back, I mean, I don’t use YouTube a lot, but every time I went on it I seemed to get an advert from the Labour candidate. So I think YouTube and those sorts of channels, Instagram, are being used, and will be used. I think Facebook is actually quite an interesting one at the moment, just because of the demographic.
It’s not really a channel that kids use very much; I think if you’re under a certain age, Facebook probably isn’t a thing that you use. It’s used by quite a lot of people who are older, even older than you and I, Matt. And that’s your voting demographic. So I reckon Facebook, in terms of advertising and trying to target, is still important.
But I do think that we’ve got over that kind of madness of Cambridge Analytica and the idea that you can micro-target people and really, really make a difference in elections. I think, going back to that data quality thing, I just don’t think we’ve got the information, as much as we pretend we have.
I don’t think we’ve got the information or the tooling to make that much of a difference. I think the Cambridge Analytica thing, personally, was a bit of a beat-up. Maybe I’m wrong. Maybe the next thing that comes along will prove me wrong.
Matt: Yeah, it’s an interesting question, isn’t it?
Because being able to target micro-demographics is definitely what the online advertisers, or the online advertising platforms, will claim to be able to give you. The question is, can you identify the right demographic to which you need to push your messages? The other interesting thing for me: I don’t get very much election literature through the door, because we’re in a safe Lib Dem seat; the Tories don’t bother and Labour definitely don’t bother.
Will I actually see anything at all? Will I even notice there is an election on from the advertising that’s pushed in front of me? That would be a good indication about whether targeting is working or not.
Chris: Oh yeah. I do think that if you look at it from that psephological view, the swing seats, the real kind of crunch seats, are probably going to attract a great deal more.
Matt: You are not going to be able to move for the stuff, because although there’s a big majority, you know, that’s very much a swing seat that you are in.
Chris: Yeah, absolutely, it will be. I mean, the Conservatives will be hoping to win this back at the general election, Labour will want to hang onto it, so there’ll be a few seats that get a lot more focus.
It will be left down to the individual Conservative associations or constituency Labour parties, or whatever the Lib Dems call theirs, in those areas to do that. They’re individual, and it’ll just be down to how much money they’ve got locally. So, yeah, another interesting evolution of our politics.
Matt: And to the second part of Steve’s question: do you think there’s going to be anything that will come back from these election campaigns into technology, or into the broader business world?
Chris: I think that’s a really difficult question. I think we’ve seen quite a lot of what Steve calls civic tech
in previous election campaigns in the UK, but I cannot honestly say I’ve seen anything come out of it that’s kind of entrepreneurial or that’s made a big difference. I might be wrong, I might be missing something, but that’s not something on my horizon.
Matt: So, the last question of this, the last AskWB40 of 2023.
And it’s from one of our recent-ish guests, Jarnel Chudge. There are many challenges and wicked problems that the world is facing, and more and more people are raising questions about the ability of the current and prevailing models and systems of commerce to respond to them, regardless of the belief of the tech bros in their domination of the technology landscape.
So, the question to you, Matt and Chris, is: what do you think each person could do that would make a difference, from anywhere and everywhere in the world?
Chris: Well, difficult question. The short answer is, we’ve got a massive problem in as much as there’s going to be mass migration on the planet in the next 10 to 20 years, lots of it driven by climate change, and we’ve got to do something about that. It’s a big problem, but we can do something about it. In our own way, we can push for the things we use, the services we use, the companies we work with, to manage their impact on the environment.
So, for example, we’re building the Green Software Foundation principles into our software development process: low-carbon cloud, things like that, where what time and at which data centres around the world you run your loads can make a difference. So I think,
you know what, we’ve all got our little things that we can do in our roles and in our lives. And it is just about trying to find the thing that you can do.
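To make the ‘what time and which data centre’ point concrete, here is a minimal sketch of carbon-aware scheduling: given a forecast of carbon intensity per region per hour, pick the cleanest slot. The regions and the figures are invented for illustration; a real setup would pull forecasts from a grid-intensity API and feed the choice into its job scheduler.

```python
# Minimal sketch of carbon-aware scheduling. All figures are invented for illustration:
# a hypothetical forecast of grid carbon intensity (gCO2/kWh) per region, per hour.
FORECAST = {
    "eu-north": {14: 45, 15: 40, 16: 38},
    "eu-west":  {14: 210, 15: 190, 16: 230},
    "us-east":  {14: 380, 15: 360, 16: 340},
}

def greenest_slot(forecast: dict[str, dict[int, int]]) -> tuple[str, int]:
    """Return the (region, hour) pair with the lowest forecast carbon intensity."""
    return min(
        ((region, hour) for region, hours in forecast.items() for hour in hours),
        key=lambda slot: forecast[slot[0]][slot[1]],
    )

region, hour = greenest_slot(FORECAST)
print(f"Schedule the batch job in {region} at {hour}:00 "
      f"({FORECAST[region][hour]} gCO2/kWh forecast)")
```

Moving a deferrable batch job to a cleaner region or a cleaner hour is exactly the sort of small, unglamorous decision Chris means by doing your bit where you have influence.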
Matt: Know what you’ve got some ability to influence, in whatever small way you can, and take that opportunity to do it.
Which I think you and I both try to do: our bit, for what we can, when we can. And so stay tuned for a bit more on what we’ll be doing in 2024. There’s something afoot, but not until next year.
Chris: So there we are, Matt. We’ve made yet another rod for our backs for next year. Let’s see how that goes. But let’s think maybe a little nearer to now. What’s your next week like?
Matt: Apart from the worky stuff as we start to close out the calendar year, I will be attending Vicarage Road to watch Watford play football for the first time this season.
I know I’m a total fair-weather fan these days, but there we go. I’m taking both of my children to the match, assuming that they both hold true to their promise. And so what’s been a very successful run over the last couple of months is almost certainly going to come to an end. Which will be fun.
Apart from that, just picking up on the last point from the last question, we have a meeting on Monday, a week today, to explore some of the ideas we’ve got for the show in the new year. And other than that, I have a relatively calm week as we taxi, inevitably, into the Christmas break.
How about you?
Chris: Yeah, I’m not going very far this week. I’ve got a few little things to do. It’s going to be another busy week with work, and I’ve still got to finish off some decorating.
Matt: But is it the actual Forth Bridge you’re decorating, is it?
Chris: Oh, listen, believe me, if I could do it any quicker, I would.
I’m not an expert at decorating; it does take me a while. And this week there’s quite a lot going on, but it’s kind of a head down, ass up kind of thing, so I think it’s going to be one of those weeks that goes by fairly quickly and then it’ll be on to the weekend again. So I have a very boring answer for this one this time.
Matt: That’s fair enough, you know. Well, we’ll be back for the last show of 2023 next Monday when, all going to plan, we’re going to be meeting with Lewis Crawford to talk about virtualised software development teams in the world of AI, which should be fascinating, if not slightly scary. Until then, thank you for joining us, and we will be back next week.
Chris: Thank you for listening to WB40. You can find us, as always, on the internet at WB40podcast.com, and on all good podcasting platforms. As ever, tell your friends about us. Go to your local park, scrawl WB40 on passing dogs. Anything you can do to increase our listenership will be well rewarded.
Not rewarded now, of course, but maybe spiritually one day.