
Environment Variables

Apr 3, 2025 • 44min

The Week in Green Software: Data Centers, AI and the Nuclear Question

Host Anne Currie is joined by the seasoned Christopher Liljenstolpe to talk about the latest trends shaping sustainable technology. They dive into the energy demands of AI-driven data centers and ask the big question around nuclear power in green computing. Discussing the trajectory of AI and data center technology, they look into the past of another great networking technology, the internet, to gain insights into the future of energy-efficient innovation in the tech industry.

Learn more about our people:
Anne Currie: LinkedIn | Website
Christopher Liljenstolpe: LinkedIn | Website

Find out more about the GSF:
The Green Software Foundation Website
Sign up to the Green Software Foundation Newsletter

Resources:
AI's Growing Energy Appetite – The Need for Transparency [05:24]
How DeepSeek erased Silicon Valley's AI lead and wiped $1 trillion from U.S. markets | Fortune Asia [17:35]
The SMR Gamble: Betting on Nuclear to Fuel the Data Center Boom [22:53]
AI's Growing Footprint: The Supply Chain Cost of Big Tech

Events:
Webinar: Data-driven grid decarbonization | Electricity Maps - March 19 at 5:00 PM CET, Virtual
Cloud Optimization 2025 – FinOps, GreenOps & AI-Driven Efficiency - March 20 at 4:00 PM GMT, Amsterdam
Code Green London March Meetup (Community Organised Event) - March 20 at 6:30 PM GMT, London
Green Software Ireland | Meetup - March 26 at 8:00 PM GMT, Virtual

If you enjoyed this episode then please either:
Follow, rate, and review on Apple Podcasts
Follow and rate on Spotify
Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, GitHub and LinkedIn!

TRANSCRIPT BELOW:

Christopher Liljenstolpe: The US grid's gonna be capped by 2031. We will be out of power in the United States by 2031. Europe will be out first. So something has to give; we have to become more efficient with the way we utilize these resources and the algorithms we build.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation.
In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams.

Anne Currie: Hello, and welcome to The Week in Green Software, where we bring you the latest news and insights from the world of sustainable software. This week I'm your guest host, Anne Currie. As you know, I'm quite often your guest host, so you're not hearing the dulcet tones of the usual host, Chris Adams. Today we'll be talking to Christopher Liljenstolpe, a leading expert in data center architecture and sustainability at Cisco. Christopher is also the father of Project Calico and co-founder of Tigera, and he's a super expert in cloud infrastructure and green computing. But before I introduce him, I'm going to make it clear that I've known Chris for years, and he's worked very closely with my husband, so we know each other very well. So that might explain why we seem like we know each other quite well. Who knows. What I do know from Chris is that it's impossible to say what we'll be talking about today. We will go all over the place. But Chris, do you want to introduce yourself?

Christopher Liljenstolpe: We might even cover the topic at hand, although that is an unlikely outcome. But who knows? That would be a first, but it might be an outcome.

Anne Currie: So introduce yourself.

Christopher Liljenstolpe: Sure. So, as Anne said, my name's Christopher Liljenstolpe. I am currently Senior Director for Data Center Architecture and Sustainability here at Cisco, which means, once again, I failed to duck.
So I'm the poor sod who's gotten the job of trying to square an interesting circle, which is: how do we build sustainable data centers, and what does a sustainable data center look like? At the same time, we're dealing with this oncoming light at the end of the tunnel that is certainly not sunshine and bluebirds, but is a locomotive called AI. And it's bringing with it gigawatt data centers. So, put that in perspective. I mean, two years ago we were talking about a high power data center might be a 60, 90, or 100 kilowatt rack data center. And about two years ago we went to, okay, it might be a 150 kilowatt rack data center, and that was up from 30 kilowatts years before. It took a very long time to get to 30 kilowatts. That was good. Then, from two years ago to nine months ago, it went from 150 kilowatts to 250 kilowatts. So it took us decades to get from two kilowatts to 90 kilowatts to 150 kilowatts, and then in a year we went from 150 to 250, maybe 350. Jensen last week just took us to 600 kilowatts a rack. So yeah, that light at the end of the tunnel is not sunshine.

So how do we do sustainable data centers when you've got racks that need nuclear power plants strapped into each and every one? I'm the one who gets to figure out what a gigawatt data center looks like and how you make it sustainable. So that's my day job. And this really becomes a system-of-systems problem, which is usually what I end up doing throughout most of my career: put the Lego blocks together, build systems of systems, and then figure out what Lego blocks are missing and what we need to build. I did that with Anne's husband in a slightly different space, which was how you build very scalable networks with millions of endpoints for Kubernetes. And now I'm doing this for data center infrastructure.
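The progression Christopher describes is easier to take in as a timeline. Here is a minimal sketch of the per-rack power figures as quoted in the conversation (approximate figures from speech, not vendor specifications):

```python
# Per-rack power density figures as quoted in the conversation (approximate).
rack_kw = {
    "decades ago": 2,
    "years ago": 30,
    "about 2 years ago": 150,
    "about 9 months ago": 250,
    "announced last week": 600,
}

for era, kw in rack_kw.items():
    print(f"{era:>20}: {kw:4d} kW per rack")

# The jump from 150 kW to 600 kW happened in roughly a year:
print(f"growth in about a year: {600 // 150}x")
```

A 4x jump in roughly a year, after decades of slow growth, is the "locomotive" he is describing.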
Anne Currie: Which is absolutely fascinating. So for listeners, a bit of background on me. I'm one of the authors of O'Reilly's new book, Building Green Software. I'm also the CEO of a learning and development company, Strategically Green, with my husband, who used to work with Chris. In Building Green Software, Chris was a major contributor to the networking chapter. So if you are interested in some of the background to this, the networking chapter is very high level; you don't need any super advanced knowledge, and it'll ramp you up on the basics of networking. Have a read of that if you want a lightweight background to what we'll be talking about today. But actually, what we're talking about today is not networking. It is obviously a key part of any data center, but that's not really where your focus is at the moment. It sounds like energy is more what you're caring about at the moment with DCs. Is that true, or is it both? It'll always be both, but...

Christopher Liljenstolpe: It's both. Energy starts behaving a bit like networking at this level. And it's getting the energy in and getting the energy out as well; the cooling is actually a really interesting part of it. We really start thinking about the energy as an energy network. You almost have to, when we start thinking about energy flows this size, and controlling them and managing them. But then there are other aspects to this as well. Some of the things that are driving this insane, I'll come right out and say it, this insane per-rack density. Why do we need 600 kilowatt racks? Do we need 600 kilowatt racks? But let's assume we do need them. Why do we need them? We need to pack as many GPUs as closely together as possible. And why do we need to do that?
We need to get them as close together as possible because we want them to be network-close, for very high speed, so that we have a very high performance, closely bound cluster, so that you get your ChatGPT answers very quickly and they don't hallucinate. So that means putting lots of GPUs and very high bandwidth memory very close to one another. And when you do that in networking, you want that to be in copper, and you want that to be a very specific kind of networking that really ends up using a whole lot of energy unless you pack it very closely together. So that 600 kilowatts is actually the low power variant. If we stretched it further out, it would be worse by another order of magnitude, because we'd have to go into fiber. So we pack it very close. And that means we end up packing a lot of stuff very closely together, which drives a lot of power into one rack, and it takes a lot of power to get the heat back out of it again. So it would be worse if we stretched it further out, but it's partially a networking thing that's driving this, actually. So one of the levers we can try to pull is: is there a better way of doing this networking to cluster these things tighter together? It always comes back to the network, one way or the other.

Anne Currie: It does indeed always come back to the network. Although I live in a networking household, I'm not so familiar with this; I don't know how this works. Is it that the GPUs have to talk together very fast, so there's almost no transit time in messages between the machines? Is that why the networking is so important?

Christopher Liljenstolpe: You want to get as many GPUs talking as closely together as possible. More specifically, GPUs and their high bandwidth memory: the HBM stacks, the high bandwidth memory stacks, and the GPUs.
And one good question is whether this is a good architecture or not. In an AI infrastructure, there are basically three networks that tie the infrastructure together. There's what's called the scale-up network, which is the very high speed network that stitches some number of GPUs together. That's on the order of, today, anywhere from 3.6 terabits per second, upwards to about 10 terabits a second coming down the road, of what's called non-blocking traffic between the GPUs in a scale-up cluster. And that could be anywhere from eight GPUs, up to, within the next year or two, 500 and some odd GPUs in that cluster. So in that realm, you could have up to 500 GPUs all talking to each other at 10 terabits a second, or eight terabits a second, depending on the GPU manufacturer, et cetera. And that's the highest performing part of the network. Then those clusters are talking to other GPUs in other clusters at usually around 800 gigabits a second. So that's a huge step down in performance. And then all those GPUs are talking to the outside world at the server level; usually they're packaged eight GPUs to a server. Those servers drive to the outside world at 800 gigabits a second per server. And that's how they get their data; that's how they get their requests and how they give their answers. So, 800 gigabits a second.

Anne Currie: I'm gonna stop now and ask a stupid question, or say, a very simple question. Stepping back: I'm not a network expert, so I might say something totally stupid here. There are at least two very important things about networks. One is the bandwidth: how much data can you get down the pipes from one place to another? And the other is latency: how long does it take to do it?
So I think what you're saying there, if I understand it correctly, is that AI really needs high bandwidth. And that's what's driving it. It's not latency, it's bandwidth.

Christopher Liljenstolpe: Yeah, you are correct. And people get that wrong. Because there's such high bandwidth, the latency doesn't matter as much, head-end latency, because the amount of data being moved is big and the bandwidth is high. There is a little bit of a latency hit, but high performance computing is more latency sensitive: if you've got a very high bandwidth network, the data packets are actually pretty small, so the latency isn't as big a hit. The third factor is congestion. Congestion kills an AI network. And this is the problem. If I can take the whole model that I'm computing against and put it in that scale-up domain, then everything can talk to everything at full bandwidth and there's no congestion. But remember, the GPUs in the high bandwidth domain: there's eight today, or maybe 36 or 72 or 256, or maybe 500 and some odd if Jensen's build is correct, and some of the other things we're working on with some other vendors might be correct. So that's a lot of bandwidth. If you can't fit it all in that one domain, then they have to go over that slower link, 800 gigabits per GPU versus 10 terabits per GPU, to talk to a GPU in another one of those high bandwidth clusters. And all of a sudden you go from 10 terabits, or eight terabits, or even three terabits, to 800 gigabits. So that's all of a sudden a much more contended, or congested, network. You go from running down a motorway at two o'clock in the morning to a B road, you know, a side road with lots of people on it. And the GPUs do this.

Anne Currie: Oh yeah.

Christopher Liljenstolpe: Everything slows to a crawl, and the GPUs go basically idle. And that's what people don't want, 'cause those GPUs are very expensive. Those GPU servers are hundreds of thousands of dollars.
They use a lot of power, and they're just idling, waiting for the GPU on the other side of that slow link to get back with an answer. So you don't want the model you're inferring against, or your training, to be split across these things. You want everything on that high speed link. And if you want everything on that very high speed link, that multiple terabits per second per GPU, think about this: if I've got eight GPUs in a server, that means I've got 80 terabits of bandwidth coming into that server. And if I've got, let's say, 10 servers in that cluster, that means I've got 80 terabits of bandwidth between that server and every other server in that cluster. And you do the math, that's about 10,000 cables running up and down inside that rack. So the cabling becomes interesting. There are all sorts of interesting problems here. So this is why I want to get everything crammed in as tightly as possible, so I can get as many things into that rack; it's an easier problem. And the power to put that on copper that runs maybe one meter or a meter and a half in length is less than a watt per cable, per what's called a SerDes. Put it on fiber and I'm over a watt, at least, maybe over a couple of watts. So I go from a tenth of a watt to a couple of watts, and it takes more space on the board and everything else, so we get into physics problems. That's why I need to pack it in tight. That's why I need more power in a higher density space, 'cause I want to get everything into that one high bandwidth domain. Now, another approach might be to do away with this concept of scale-out and scale-up, and there are some architectures that might do that. But the main model today, the NVIDIA model, is that scale-up and scale-out are kept separate. One can argue whether that's a good model, but it is the model in the industry today. That means the software developers have to be cognizant of that as well.
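As an aside, the arithmetic behind the cable count and the copper-versus-fiber power trade-off in this stretch of the conversation can be roughed out. The per-lane figures below are illustrative assumptions (roughly 100 Gb/s per SerDes lane, ~0.1 W on short copper versus ~1.5 W on optics), not exact vendor numbers:

```python
# Rough back-of-envelope for the scale-up numbers discussed above.
# Bandwidth figures are the approximate ones quoted in the conversation;
# per-lane rate and per-lane power are illustrative assumptions.

TBPS_PER_GPU = 10          # scale-up bandwidth per GPU, terabits/s
GPUS_PER_SERVER = 8
SERVERS_PER_CLUSTER = 10

server_tbps = TBPS_PER_GPU * GPUS_PER_SERVER         # 80 Tb/s per server

LANE_TBPS = 0.1            # assume ~100 Gb/s per copper lane (SerDes)
lanes_per_server = server_tbps / LANE_TBPS           # 800 lanes
total_lanes = lanes_per_server * SERVERS_PER_CLUSTER # same ballpark as the
                                                     # ~10,000 cables mentioned

# Assumed power per lane: ~0.1 W over short copper vs ~1.5 W over fiber.
copper_w = total_lanes * 0.1
fiber_w = total_lanes * 1.5

print(f"{server_tbps} Tb/s per server, ~{total_lanes:,.0f} copper lanes in the rack")
print(f"interconnect power: ~{copper_w/1000:.1f} kW on copper vs ~{fiber_w/1000:.0f} kW on fiber")
```

Under these assumptions the interconnect alone swings by an order of magnitude depending on the medium, which is the physics argument for packing everything into one copper-reachable rack.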
And the scheduler, the people who design the schedulers, have to be cognizant of that as well. So this is a design that now ripples through the entire architecture, all the way up through the software stack and everything else.

Anne Currie: So what you're saying is that when we talk about AI and GPUs and the incredible amount of power they require, we tend not to think about the fact that actually it's the networking that requires one hell of a lot of that power. And this is not networking going across the country. It's not networking outside the data centers. This is networking inside them.

Christopher Liljenstolpe: This is networking within the rack.

Anne Currie: Within.

Christopher Liljenstolpe: This is a one meter, two meter diameter network, and it's tens of thousands of cables.

Anne Currie: I'm sure something you've been thinking about a lot recently is the enormous shift that's taken place with DeepSeek coming in. How much of an effect does that have on the network side of things?

Christopher Liljenstolpe: So the whole idea behind DeepSeek is that, from a training perspective, I think of it as the data comes sort of pre-trained. You don't need to do as much pre-training, you don't need to do as much training, therefore you don't need as many GPUs to prep your data and prep your model. So that means you don't need as big a scale-up cluster to train, to get ready to infer. And remember, training doesn't make you any money. If you're in this to make money, training doesn't make you any money. It's inference, using the model, that makes the money. And potentially inference might be impacted as well. But Jensen made an interesting point: as we start doing reasoned inference, that's gonna require a lot more compute.
Now inference starts looking more like training. Up until recently, inference was always one and done: you make one pass through inference and you get the result. That's why we used to get some interesting, let's just call them interesting, results. We used to call them, you know, hallucinations. But now you make one pass through and then you sort of check it. Does it make sense? Does it look reasonable? And you make another pass through again, and another pass through again: this reasoned inference. That all of a sudden starts using a lot more compute. It looks a little bit more like a training job, almost. And that starts using a lot more GPUs, and you need more scale-up bandwidth between GPUs. So it'll be interesting to see if DeepSeek benefits, it should benefit, that reasoned inference as well. The bigger question is, DeepSeek will probably only be as good as the pre-trained data it ingests, right? So this sort of becomes: do we feed our AIs with other AI data? And at some point, do we all become self-referential? Do we take AI data to feed other AI data? It's like if all the code in GitHub is written by AIs, and then we train coding models for GitHub using AI-written code. Is that a good thing or not a good thing?

Anne Currie: If it's tested code. I mean, if they also write tests, and they run the tests, and the code works, then... but...

Christopher Liljenstolpe: Yeah. Of course, it's sort of like having the developer test their own code too, right? You end up with a monoculture.

Anne Currie: Yeah, that is true.

Christopher Liljenstolpe: You end up with a monoculture. Or not. Maybe you don't end up with a monoculture. I don't know. Now we're getting into philosophy.

Anne Currie: So it's interesting.
I do know...

Christopher Liljenstolpe: And everyone just watched this go from infrastructure to software design to philosophy.

Anne Currie: The AI stuff, I do find quite fascinating. I do know somebody who's a DeepMind engineer and used to work at OpenAI, and I remember her telling me, years and years ago, when AI was starting to get good, I was talking to her nearly 10 years ago. I was like, suddenly it's got a lot better. Why has it got a lot better? And she said it's randomness. We realized that if you injected a load more randomness into its decision making, it suddenly got vastly better. It was a sea change. So it's not as predictable. And it is odd, something we don't talk about a lot, that AI is based, at its heart, on the injection of randomness, which I find fascinating.

Christopher Liljenstolpe: There was an interesting study: if you train AI on bad data in one domain, it will start giving you bad results in other domains as well.

Anne Currie: That's interesting.

Christopher Liljenstolpe: Which was really sort of... but anyway, yeah, now we're really off the rails.

Anne Currie: We are, and in fact we've only got 10 minutes left, so we should go back to sustainability. The question I wanted to ask you: you mentioned earlier that racks are becoming, you know, you need a nuclear power station for every rack these days. But is that literally the case? Can this only be done through nuclear, or can it be done like Texas is doing, making calls for large, flexible loads to soak up the mega amounts of solar they're running? Is it realistic? What do you think: is nuclear a prerequisite for AI?
Christopher Liljenstolpe: It is not a prerequisite, but it is probably gonna be a base load demand. And that's because, at least at this point, if you're gonna put up anything of a hundred megawatts or more of AI compute, that is a serious amount of investment. And let's also be honest: if you're talking about a 500 megawatt or a gigawatt facility, you're not pulling a substation permit, 'cause there aren't substations for things like that; you are going to jack yourself into a power plant. Because at that point, a gigawatt is a power generation station, right? That is a reactor in a nuclear power station. That is a gas turbine in a co-generation power plant, et cetera. It's a turbine in a major hydro plant. It is a full scale commercial power plant unit. So there's no reason to have a substation, because you are consuming a full commercial power plant; you might as well plant it there. That's not small money. You are gonna have to guarantee a load to a power company to do that. One. Two, the amount you're gonna spend on the GPUs, let alone all the other infrastructure that goes around them, is a huge capital investment. You are not gonna want that sitting idle for one minute in a year. So that is going to be a base load; your shareholders are gonna string you up otherwise; it will always be running. So something's gonna have to support that base load. It could be solar, but then you're gonna have to have a very big battery plant. There's one going in, in India: a one gigawatt facility going in for AI, and it's fully built out. It's gonna be held up by a solar plant.
One third of the ground is going to be solar, and the remainder is gonna be battery to hold the thing up 24x7. So they will be doing solar, but it's going to be solar plus battery. But yeah, you're gonna want this thing running all the time. So we joke about it being nuclear. The funny thing was, three years ago we were saying these small modular reactors, at a hundred megawatts, are a perfect size for a data hall. Now we're saying, you know, unshutter your commercial nuclear reactors, because the gigawatt-size commercial nuclear reactors are now about the right size. The interesting part of that is: what do you do when you have to refuel the reactor? Because most commercial reactors have to be shut down when you refuel. If you're jacked into a reactor, what do you do when they have to shut it down? That's a year-long process. What do you do for power? 'Cause you're probably not connected to the grid. You're connected, like what they did in Pennsylvania, to the reactor. What do you do for power when they shut down that reactor? I hope the folks have thought about that. Maybe you still do small modulars. Maybe you do 12 small modulars at a hundred megawatts each, and you sort of have an n+2. Interesting thoughts.

Anne Currie: Well, that is a very interesting thought. You're making two fascinating points there that I have never heard made. One is that we've totally run ahead of SMRs, we've galloped ahead of that, and yet it might actually be worth bringing them back just because of that kind of modern resilience thing: it's better to have 10 small ones than one big one.

Christopher Liljenstolpe: Yeah, I've got resilient reactors, and if it's molten salt, you can refuel them by just topping off the salt tanks as you go.
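As a rough illustration of why 24x7 solar needs so much battery behind it, here is a hedged back-of-envelope. The capacity factor and night-hour figures are generic assumptions for illustration, not numbers from the episode or from the Indian facility mentioned:

```python
# Hedged back-of-envelope: powering a 1 GW data center 24/7 from solar.
# Capacity factor and storage hours are generic assumptions, not figures
# quoted in the episode.

LOAD_GW = 1.0
CAPACITY_FACTOR = 0.25   # assumed annual average for a sunny region
NIGHT_HOURS = 16         # assumed hours the battery must carry full load

# Panels must harvest a whole day's energy during daylight hours.
solar_gw = LOAD_GW / CAPACITY_FACTOR    # nameplate solar needed
battery_gwh = LOAD_GW * NIGHT_HOURS     # storage to ride through the night

print(f"~{solar_gw:.0f} GW of solar and ~{battery_gwh:.0f} GWh of battery "
      f"to keep a {LOAD_GW:.0f} GW facility running around the clock")
```

Even with generous assumptions, the plant is several times the size of the load it serves, which is why the ground split Christopher describes (one third panels, the rest battery) is plausible.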
And you can remove the poisons as you go. So, you know, just back the salt truck up and dump more salt in. It's a little more involved than that, but yeah, sort of.

Anne Currie: Yeah.

Christopher Liljenstolpe: If you're interested in bashing your head against the wall and learning about things you never thought you'd have to learn about, this is a fun time to get into data center infrastructure. You get to do things like, okay, how do I cram a couple hundred terabits per second into a network in a rack? And at the same time, talk about liquid molten salt reactors. It's a broad spectrum. Oh, and let's also talk about signal integrity in dielectric fluids, 'cause we might have to send all this stuff swimming in a tank. You have a lot of interesting conversations in one day.

Anne Currie: It sounds like you're in a pretty fun area at the moment. And we thought cloud networking was fun five years ago. That was nothing, as it turns out.

Christopher Liljenstolpe: Yeah. And one thing that's sort of interesting now is we took the Sustainable and Scalable Infrastructure Alliance in the Linux Foundation and merged it, as I'm sure you've heard, with the Green Software Foundation. We thought it was probably time to get the hardware folks and the software folks talking together, because we realized that the stack really shouldn't have this wall between the hardware and the software. It's the same things I alluded to before: the hardware impacts of the horror show that we've got going on.
I say that in the nicest possible way to my friends doing the chips. Given the unique challenges we have coming, we really need better understanding on the scheduler side, et cetera, and of how we manage and monitor that and what the impacts are on the software side. So we decided to take the folks who are working on open hardware designs and making those sustainable, and marry that to the green software folks who are working on how we manage and monitor it. And the first project out of that is gonna be something called Project Mycelium, which is going to look at how we build software linkages for managing and monitoring the hardware infrastructure from the software side.

Anne Currie: Named after the networks of fungus under the forest floor, the way that everything in a forest is more connected together than we'd ever realized previously, using these incredible mycelium connections, I take it. I'm guessing that's why it's named that way.

Christopher Liljenstolpe: Exactly. And a good friend of mine, who used to be the field CTO at Equinix, is gonna be running that project for me.

Anne Currie: Utterly fascinating stuff. So, stepping back from all of this, it's a mind-blowing amount of complex new thoughts and approaches to things. And what's your view? You tend to have a kind of 30,000, 40,000 foot view on all of these things. What are you thinking? Where's it all going? What's gonna happen?

Christopher Liljenstolpe: Well, one of my jokes is: yes, AI will kill us all. The question is, will it get smart enough to realize we're the problem and actively kill us, or will it just take so many resources that it melts all the ice caps and creates a water world before it becomes sentient, and kills us that way? There's a truth in every joke.
I think right now the path that we're on, frankly, is not sustainable. The next logical step, if we follow that trend of 150, 250, 600, is north of a megawatt a rack. That path is unsustainable, both in resources and power, but also economics. We can't do that. At the going rate, the US grid's gonna be capped by 2031. We will be out of power in the United States by 2031. Europe will be out first. So something has to give; we have to become more efficient with the way we utilize these resources, the algorithms we build. We're still brute-forcing AI. We think this is all brilliant software. It's not; we're still brute-forcing the heck out of this stuff. So something's gotta give there. I think when that happens, there'll be a lot of business models that might face some challenges, because there's a lot of value built on the assumption that this is going to keep going this way. But it needs to happen. And there's a lot of fluff as well; there's a lot of the equivalent of pets.com out there right now. I think we'll end up with a lot more distributed use cases for AI that don't need the same amount of power, that don't need huge inference behind them. But yeah, the current trend will have to get adjusted, and somebody's gonna figure it out.

Anne Currie: The old phrase...

Christopher Liljenstolpe: People will try it out.

Anne Currie: If something can't go on, it won't. It'll stop, you know?

Christopher Liljenstolpe: There will be enough economic pressure that it will drive an innovation that will fix it. Just looking at it...

Anne Currie: Yeah, it's the code.

Christopher Liljenstolpe: I'm not sure how we'll mine enough copper to build the power transmission infrastructure to support this.
So anyway, that's my doom and gloom part of this. But what we will end up with by the time we're done, though, is a very efficient computational infrastructure, because it's forcing us to look at everything along the stack. Air is an absolutely horrible heat transfer fluid, so everyone's running madly down the road of liquid cooling. Everyone's running madly down the road of higher voltage, because the way we transmit power in a data center is pretty horrible today. Everyone's wringing all the efficiencies they can out of that, because now we have to; it's just economically impossible to do it any other way. So whatever comes out the back of this, we are gonna have a very efficient data center infrastructure, which is all for the better. This will probably fix the grids, because it has to, because we're driving a very different power transmission infrastructure. So we'll fix a bunch of problems along the way. Silver lining.

Anne Currie: And there is a lot of money behind it. So it is actually aligned with a lot of good things that we want, and it's driving a lot of money in those directions. It's interesting. If it doesn't kill us all, which, you know...

Christopher Liljenstolpe: Yeah, and who knows? It'll probably bring back nuclear. We'll probably be able to have rational conversations about other non-carbon-emitting power sources.

Anne Currie: Space-based solar power. Well, I'm desperate for it.

Christopher Liljenstolpe: Maybe, yeah, maybe. Might get some countries that just recently shuttered all their nuclear plants to go back and put their cooling towers back up. Not talking about any European countries.

Anne Currie: Well, I'm sure everybody's brain is completely full now, and we've had a really interesting discussion that I have utterly enjoyed.
So I think we should probably draw the podcast to an end with any final comments that anybody wants to make. Everything we talked about that we can put in the show notes will be in the show notes at the bottom of the episode. Do you have any final points that you want to make? Christopher Liljenstolpe: I mean, it is fun times. And it's not all doom and gloom, but right now there is a bit of hype, and at this point it seems like it is a train that's gonna keep on going, and it will correct. But it is leading to a lot of innovation, and that innovation will hang around. Just like when the dot-com bust happened, we will see a correction here. What people thought originally the internet was going to do and what was gonna be delivered by the internet didn't really happen. Even the people who originally created the ARPANET, or the people who invested in the original late-nineties dot-com explosion, did not foresee what the money they put into it is being used for now. But the world has been forever changed, for good and ill both, by that investment, and it's gonna be the same thing here. What we're investing in building now, we think we know what it's gonna be used for; we're wrong. Maybe 5% of what we think it's gonna be used for will still be what it's being used for 15 years from now. The rest of it, we have no idea. And we'll benefit from it and we'll suffer for it. But we're building a base infrastructure, and other people will actually build on that base infrastructure and deliver things that we will have no idea about. Anne Currie: Yeah, that reminds me of a discussion that we had a few years back about why the internet survived 2020, the beginning of the pandemic, which kept the West going. 
'Cause otherwise nobody would have coped if we hadn't been able to all stay at home and work over video conferencing, things like that. And a lot of the infrastructure that was put in place that we relied on there was to support high-definition streaming TV. People put it in so that folk could watch Game of Thrones, and then Game of Thrones saved the West. It's like, who would've predicted that? You just don't know what's gonna happen. Christopher Liljenstolpe: Exactly. Yep. Indeed. And that infrastructure actually, which we didn't talk about, was put in place because service providers made a horrible choice early on of putting in broadband that was the cheap choice, that couldn't do multicast. If they had put in multicast-capable infrastructure, they wouldn't have put in the amount of backbone infrastructure that they did, because they would've had multicast and they wouldn't have had to do the build that they did, which indeed actually helped us. So not having multicast out there actually probably saved our bacon. And it pains me to no end, because I was sitting there banging away in the mid nineties saying, "we need to get multicast out there. It's so much more efficient. It will save so much money." And if we had, we probably would've been in much worse shape when the pandemic hit. Anne Currie: It is interesting that flabbiness, things like inefficient code, is what we've been building for the past 20 years. Most of my career, we've been building highly inefficient code, but it does mean there's a lot of untapped potential in there to improve. Christopher Liljenstolpe: True. Anne Currie: Unrealized potential as a result of lazy behavior in the past. We are mining our own past laziness, and that might save us all. Christopher Liljenstolpe: Indeed. Anne Currie: On that note, our laziness and lack of foresight in the past have tended to save us in the future. It might well save us again. 
On that happy note, or that nuanced note, thank you very much for listening and thank you very much for being my excellent guest today, Chris. Christopher Liljenstolpe: Thank you for having me on, Anne, and thank you everyone for listening. I hope it was, if not educational, at least entertaining. Anne Currie: I'm sure it was both. Thank you very much, and speak to you next time I'm hosting the Environment Variables podcast. Goodbye. Christopher Liljenstolpe: Bye everyone. Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. Chris Adams: To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode. 
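Chris's multicast aside is easy to make concrete with a little arithmetic. With unicast, the backbone carries one copy of a stream per viewer; with multicast, it carries one copy per link, however many viewers sit behind it. A back-of-the-envelope sketch — the viewer count, link count, and bitrate below are illustrative assumptions, not figures from the episode:

```python
# Back-of-the-envelope comparison of unicast vs multicast backbone load
# for one live stream. All numbers are made-up illustrative values.

def unicast_gbps(viewers: int, stream_mbps: float) -> float:
    """Unicast: the backbone carries one copy of the stream per viewer."""
    return viewers * stream_mbps / 1000.0

def multicast_gbps(links: int, stream_mbps: float) -> float:
    """Multicast: one copy per backbone link, regardless of viewer count."""
    return links * stream_mbps / 1000.0

viewers = 1_000_000   # people watching the same live event
links = 500           # backbone links that must carry the stream
hd_mbps = 5.0         # rough HD stream bitrate in Mbps

print(unicast_gbps(viewers, hd_mbps))    # 5000.0 Gbps
print(multicast_gbps(links, hd_mbps))    # 2.5 Gbps
```

The three-orders-of-magnitude gap is why multicast would have needed far less backbone build-out — and, as Chris notes, why the unicast overbuild accidentally left spare capacity for the pandemic.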
Mar 27, 2025 • 12min

Backstage: Green Software Patterns

In this episode, Chris Skipper takes us backstage into the Green Software Patterns Project, an open-source initiative designed to help software practitioners reduce emissions by applying vendor-neutral best practices. Guests Franziska Warncke and Liya Mathew, project leads for the initiative, discuss how organizations like AVEVA and MasterCard have successfully integrated these patterns to enhance software sustainability. They also explore the rigorous review process for new patterns, upcoming advancements such as persona-based approaches, and how developers and researchers can contribute. Learn more about our people:Chris Skipper: LinkedIn | WebsiteFranziska Warncke: LinkedInLiya Mathew: LinkedInFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterResources:Green Software Patterns | GSF [00:23]GitHub - Green Software Patterns | GSF [05:42] If you enjoyed this episode then please either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github and LinkedIn!TRANSCRIPT BELOW:Chris Skipper: Welcome to Environment Variables, where we bring you the latest news from the world of sustainable software development. I am the producer of the show, Chris Skipper, and today we're excited to bring you another episode of Backstage, where we uncover the stories, challenges, and innovations driving the future of green software. In this episode, we're diving into the Green Software Patterns Project, an open source initiative designed to curate and share best practices for reducing software emissions. The project provides a structured approach for software practitioners to discover, contribute, and apply vendor-neutral green software patterns that can make a tangible impact on sustainability. 
Joining us today are Franziska Warncke and Liya Mathew, the project leads for the Green Software Patterns Initiative. They'll walk us through how the project works, its role in advancing sustainable software development, and what the future holds for the Green Software Patterns. Before we get started, a quick reminder that everything we discuss in this episode will be linked in the show notes below. So without further ado, let's dive into our first question about the Green Software Patterns project. My first question is for Liya. The project is designed to help software practitioners reduce emissions in their applications. What are some real world examples of how these patterns have been successfully applied to lower carbon footprints? Liya Mathew: Thanks for the question, and yes, I am pretty sure that there are a lot of organizations as well as individuals who have greatly benefited from this project. A key factor behind the success of this project is the impact that these small actions can have over the long run. For example, AVEVA has been an excellent case of an organization that embraced these patterns. They created their own scoring system based on the patterns, which helps them measure and improve their software sustainability. Similarly, MasterCard has also adopted and used these patterns effectively. What's truly inspiring is that both AVEVA and MasterCard were willing to share their learnings with the GSF and the open source community as well. Their contributions will help others learn and benefit from their experiences, fostering a collaborative environment where everyone can work towards more sustainable software. Chris Skipper: Green software patterns must balance general applicability with technical specificity. 
How do you ensure that these patterns remain actionable and practical across different industries, technologies and software architectures? Liya Mathew: One of the core and most useful features of patterns is the ability to correlate with the Software Carbon Intensity specification. Think of it as a bridge that connects learning and measurement. When we look through the existing catalog of patterns, one essential thing that stands out is their adaptability. Many of these patterns not only align with sustainability, but also coincide with security and reliability best practices. The beauty of this approach is that we don't need to completely rewrite our software architecture to make it more sustainable. Small actions like caching static data or providing a dark mode can make a significant difference. These are simple, yet effective steps that can take us a long way towards sustainability. Also, we are nearing the graduation of Patterns V1. This milestone marks a significant achievement, and we are already looking ahead to the next exciting phase: Patterns V2. In Patterns V2, we are focusing on persona-based and behavioral patterns, which will bring even more tailored and impactful solutions to our community. These new patterns will help address specific needs and behaviors, making our tools even more adaptable and effective. Chris Skipper: The review and approval process for new patterns involves multiple stages, including subject matter expert validation and team consensus. Could you walk us through the workflow for submitting and reviewing patterns? Liya Mathew: Sure. The review and approval process for new patterns involves multiple stages, ensuring that each pattern meets a standard before integration. Initially, when a new pattern is submitted, it undergoes an initial review by our initial reviewers. 
During this stage, reviewers check if the pattern aligns with the GSF's mission of reducing software emissions, follows the GSF Pattern template, and adheres to proper formatting rules. They also ensure that there is enough detail for the subject matter expert to evaluate the pattern. If any issue arises, the reviewer provides clear and constructive feedback directly in the pull request, and the submitter updates the pattern accordingly. Once the pattern passes the initial review, it is assigned to an appropriate SME for deeper technical review, which should take no more than a week, barring any lengthy feedback cycles. The SME checks for duplicate patterns, validates the content, and assesses the efficiency and accuracy of the pattern in reducing software emissions. They also ensure that the pattern's level of depth is appropriate. If any areas are missing or incomplete, the SME provides feedback in the pull request. If the pattern meets all the criteria, the SME removes the SME review label, adds a team consensus label, and assigns the pull request back to the initial reviewer. Then the Principles and Patterns Working Group has two weeks to comment or object to the pattern, requiring a team consensus before the PR can be approved and merged into the development branch. This thorough process ensures that each pattern is well vetted and aligned with our goals. Chris Skipper: For listeners who want to start using green software patterns in their projects, what's the best way to get involved, access the catalog, or submit a new pattern? Liya Mathew: All the contributions are made via GitHub pull requests. You can start by submitting a pull request on our repository. Additionally, we would love to connect with everyone interested in contributing. Feel free to reach out to us on LinkedIn or any social media handles and express your interest in joining our project's weekly calls. Also, check if your organization is a member of the Green Software Foundation. 
We warmly welcome contributions in any capacity. As mentioned earlier, we are setting our sights on a very ambitious goal for this project, and your involvement would be invaluable. Chris Skipper: Thanks to Liya for those great answers. Next, we had some questions for Franziska. The Green Software Patterns project provides a structured open source database of curated software patterns that help reduce software emissions. Could you give us an overview of how the project started and its core mission? Franziska Warncke: Great question. The Green Software Patterns project emerged from a growing recognition of the environmental impact of software and the urgent need for sustainable software engineering practices. As we've seen the tech industry expand, it became clear that while hardware efficiency has been a focal point for sustainability, software optimization was often overlooked. A group of dedicated professionals began investigating existing documentation, including resources like the AWS Well-Architected Framework, and this exploration laid the groundwork for the project. This allowed us to create a structured approach to curating patterns that can help reduce software emissions. We developed a template that outlines how each pattern should be presented, ensuring clarity and consistency. Additionally, we categorize these patterns into three main areas: cloud, web, and AI. Chris Skipper: Building an open source knowledge base and ensuring it remains useful requires careful curation and validation. What are some of the biggest challenges your team has faced in developing and maintaining the green software patterns database? Franziska Warncke: Building and maintaining an open source knowledge base like the Green Software Patterns database comes with its own set of challenges. One of the biggest hurdles we've encountered is resource constraints. 
As an open source project, we often operate with limited time and personnel, which makes it really, really difficult to prioritize certain tasks over others. Despite this challenge, we are committed to continuous improvement, collaboration, and community engagement to ensure that the Green Software Patterns database remains a valuable resource for developers looking to adopt more sustainable practices. Chris Skipper: Looking ahead, what are some upcoming initiatives for the project? Are there any plans to expand the pattern library or introduce new methodologies for evaluating and implementing patterns? Franziska Warncke: Yes, we have some exciting initiatives on the horizon. One of our main focuses is to restructure the patterns catalog to adopt a persona-based approach. This means we want to create tailored patterns for various roles within the software industry, like developers, project managers, UX designers, and system architects. By doing this, we aim to make the patterns more relevant and accessible to a broader audience. We are also working on improving the visualization of the patterns. We recognize that user-friendly visuals are crucial for helping people understand and adopt these patterns in their own projects, which was really missing before. In addition to that, we plan to categorize the patterns based on different aspects, such as persona type, adoptability, and effectiveness. This structured approach will help users quickly find the patterns that are most relevant to their roles and their needs, making the entire experience much more streamlined. Moreover, we are actively seeking new contributors to join us. And we believe that the widest set of voices and perspectives will enrich our knowledge base and ensure that our patterns reflect a wide range of experiences. So, if anyone is interested, we'd love to hear from you. Chris Skipper: The Green Software Patterns Project is open source and community-driven. 
How can developers, organizations, and researchers contribute to expanding the catalog and improving the quality of the patterns? Franziska Warncke: Yeah, the Green Software Patterns Project is indeed open source and community driven, and we welcome contributions from developers, organizations, and researchers to help expand our catalog and improve the quality of the patterns. We need people to review the existing patterns critically and provide feedback. This includes helping us categorize them for a specific persona, ensuring that each pattern is tailored to each of the various roles in the software industry. Additionally, contributors can assist by adding more information and context to the patterns, making them more comprehensive and useful. Visuals are another key area where we need help. Creating clear and engaging visuals that illustrate how to implement these patterns can significantly enhance their usability. Therefore, we are looking for experts who can contribute their skills in design and visualization to make the patterns more accessible. So if you're interested, then we would love to have you on board. Thank you. Chris Skipper: Thanks to Franziska for those wonderful answers. So we've reached the end of this special Backstage episode on the Green Software Patterns Project at the GSF. I hope you enjoyed the podcast. To listen to more podcasts about green software, please visit podcast.greensoftware.foundation. And we'll see you on the next episode. Bye for now. 
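One of the patterns Liya mentions — caching static data so it isn't refetched or recomputed on every request — can be sketched in a few lines. This is an illustrative sketch only, not code from the patterns catalog; the fetch function, key names, and TTL below are assumptions:

```python
import time

# Illustrative sketch of the "cache static data" green software pattern:
# serve repeat requests from memory instead of refetching, which cuts
# network traffic and compute (and therefore energy) per request.

class StaticDataCache:
    def __init__(self, fetch, ttl_seconds=3600):
        self._fetch = fetch      # the expensive call we want to avoid repeating
        self._ttl = ttl_seconds  # how long a cached entry stays fresh
        self._store = {}         # key -> (value, fetched_at)
        self.misses = 0          # counts how often we had to do a real fetch

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[1] < self._ttl:
            return entry[0]      # cache hit: no refetch, no extra work
        self.misses += 1
        value = self._fetch(key)
        self._store[key] = (value, time.monotonic())
        return value

# Usage: wrap a pretend "fetch" and note that only one real fetch happens
# however many times the same static asset is requested within the TTL.
cache = StaticDataCache(fetch=lambda key: f"asset:{key}")
cache.get("logo.svg")
cache.get("logo.svg")
print(cache.misses)   # 1
```

As Liya notes, nothing about the surrounding architecture has to change; a small wrapper like this is the kind of low-effort step the patterns catalog collects.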
Mar 20, 2025 • 50min

The Week in Green Software: Sustainable AI Progress

For this 100th episode of Environment Variables, guest host Anne Currie is joined by Holly Cummins, senior principal engineer at Red Hat, to discuss the intersection of AI, efficiency, and sustainable software practices. They explore the concept of "Lightswitch Ops"—designing systems that can easily be turned off and on to reduce waste—and the importance of eliminating zombie servers. They cover AI’s growing energy demands, the role of optimization in software sustainability, and Microsoft's new shift in cloud investments. They also touch on AI regulation and the evolving strategies for balancing performance, cost, and environmental impact in tech. Learn more about our people:Chris Adams: LinkedIn | GitHub | WebsiteHolly Cummins: LinkedIn | GitHub | WebsiteFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterNews:AI Action Summit: Two major AI initiatives launched | Computer Weekly [40:20]Microsoft reportedly cancels US data center leases amid oversupply concerns [44:31]Events:Data-driven grid decarbonization - Webinar | March 19, 2025The First Eco-Label for Sustainable Software - Frankfurt am Main | March 27, 2025 Resources:LightSwitchOps Why Cloud Zombies Are Destroying the Planet and How You Can Stop Them | Holly CumminsSimon Willison’s Weblog [32:56]The GoalIf you enjoyed this episode then please either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github and LinkedIn!TRANSCRIPT BELOW:Holly Cummins: Demand for AI is growing, demand for AI will grow indefinitely. But of course, that's not sustainable. Again, you know, it's not sustainable in terms of financially and so at some point there will be that correction. Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. 
In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Anne Currie: So hello and welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software. Now, today you're not hearing the dulcet tones of your usual host, Chris Adams. I am a frequent guest host, Anne Currie. And my guest today is somebody I've known for quite a few years and I'm really looking forward to chatting to, Holly. So do you want to introduce yourself, Holly? Holly Cummins: So I'm Holly Cummins. I work for Red Hat. My day job is that I'm a senior principal engineer and I'm helping to develop Quarkus, which is Java middleware. And I'm looking at the ecosystem of Quarkus, which sounds really sustainability oriented, but actually the day job aspect is I'm more looking at the contributors and, you know, the extensions and that kind of thing. But one of the other things that I do end up looking a lot at is the ecosystem aspect of Quarkus in terms of sustainability, because Quarkus is an extremely efficient Java runtime. And so when I joined the team, one of the things I asked was: we know this is really efficient. Does that translate into an environmental benefit? Is it actually benefiting the ecosystem? Can we quantify it? And so we did that work and we were able to validate our intuition that it did have a much lower carbon footprint, which was nice. But some of what we did actually surprised us as well, which was also good, because it's always good to be challenged in your assumptions. 
And so now part of what I'm doing as well is broadening that focus: instead of measuring what we've done in the past, thinking about, well, what does a sustainable middleware architecture look like? What kind of things do we need to be providing? Anne Currie: Thank you very much indeed. That's a really good overview of what I primarily want to be talking about today. We will be talking about a couple of articles as usual on AI, but really I want to be focused on what you're doing in your day job, because I think it's really interesting and incredibly relevant. So, as I said, my name is Anne Currie. I am the CEO of a learning and development company called Strategically Green. We do workshops and training around building green software and changing your systems to align with renewables. But I'm also one of the authors of O'Reilly's new book, Building Green Software, and Holly was probably the biggest single reviewer and contributor to that book, and it was in her best interest to do so because I make tons and tons of reference to a concept that you came up with. I'm very interested in the backstory to this concept, and this is something I've not said to you before, but it comes up in review feedback for the book more than any other concept in the book: Lightswitch Ops. People saying, "Oh, we've started to do Lightswitch Ops." If anybody says "I've started to do" anything, it's always Lightswitch Ops. So tell us, what is Lightswitch Ops? 
And so the first step is architect your systems so that they can tolerate being turned off and on. And then the next part is, once you have that, actually turn them off and on. And it sort of came about because I'm working on product development now, and I started my career as a performance engineer, but in between those two, I was a client facing consultant, which was incredibly interesting. I mean, there were so many things that were interesting, but one of the things that I kept seeing was, you know, you work with clients and some of them you're like, "Oh wow, you're really at the top of your game" and some you think, "why are you doing it this way when this is clearly counterproductive," or that kind of thing. And one of the things that I was really shocked by was how much waste there was just everywhere. And I would see things like organizations where they would be running a batch job and the batch job would only run at the weekends, but the systems that supported it would be up 24/7. Or sometimes we see the opposite as well, where it's a test system for manual testing and people are only in the office, you know, nine to five, only in one geo, and the systems are up 24 hours. And the reason for this, again, it sort of comes back to that initial thing. It's partly that we just don't think about it and, you know, that we're all a little bit lazy, but it's also that many of us have had quite negative experiences of: if you turn your computer off, it will never be the same when it comes back up. I mean, I still have this with my laptop, actually; I'm really reluctant to turn it off. But now with laptops, we do have the model where you can close the lid and it will go to sleep and you know that it's using very little energy, but then when you bring it back up in the morning, it's the same as it was, without having to have the energy penalty of keeping it on overnight. 
And I think, when you look at the model of how we treat our lights in our house, nobody has ever left a room and said, "I could turn the light off, but if I turn the light off, will the light ever come back on in the same form again?" Right? Like, we just don't do that. We have a great deal of confidence that it's reliable to turn a light off and on, and that it's low friction to do it. And so we need to get to that point with our computer systems. And you can sort of roll with the analogy a bit more as well, which is: in our houses, it tends to be quite a manual thing of turning the lights off and on. You know, I turn the light on when I need it. In institutional buildings, it's usually not a manual process to turn the lights off and on. Instead, we end up with some kind of automation. So, like, often there's a motion sensor. So, you know, I used to have it that if I would stay in our office late at night, at some point, if you sat too still because you were coding and deep in thought, the lights around you would go off and then you'd have to, like, wave your arms to make the lights go back on. And it's this sort of idea of: we can detect the traffic, we can detect the activity, and not waste the energy. And again, we can do exactly this with our computer systems. So we can have it so that it's really easy to turn them off and on. And then we can go one step further and we can automate it, and we can say, let's script things to turn off at 5pm because we're only in one geo. And you know, if we turn them off at 5pm, then we're enforcing quite a strict work life balance. So... 
Or we can say, okay, well, let's just look at the traffic, and if there's no traffic to this, let's turn it off. Anne Currie: Yeah, it is an interestingly simple concept, because when people come up with something like this it's, in some ways, a light bulb moment of, you know, why don't people turn things off? Because, Holly, you are an unbelievably good public speaker, one of the best public speakers out there at the moment. And we first met because you came and gave talks in some tracks I was hosting, on a variety of topics: some on high performance code and code efficiency, some on being green. One of the stories you told was about your Lightswitch moment, the realization that actually this was a thing that needed to happen. And I thought it was fascinating. I've been in the tech industry for a long time, so I've worked with Java a lot over the years. And one of the issues with Java in the old days was always that it was very hard to turn things off and turn them back on again. And that was fine in the old world, but you talked about how that was no longer fine. And that was an issue with the cloud, because using the cloud well, turning things on and off, doing things like auto scaling, is utterly key to the idea of the cloud. And therefore it had to become part of Quarkus, part of the future of Java. Am I right in that understanding? Holly Cummins: Yeah, absolutely. And the cloud sort of plays into both parts of the story, actually. So definitely, the things that we need to be cloud native, like being able to support turning off and on again, are very well aligned to what you need to support Lightswitch Ops. 
And so, you know, with those two, we're pulling in the same direction. The needs of the cloud and the needs of sustainability are both driving us to make systems that — I just saw yesterday, sorry, this is a minor digression, but I was looking something up, and we used to talk a lot about the Twelve-Factor App, and you know, at the time we started talking about Twelve-Factor Apps, those characteristics were not at all universal. And then someone came up with the term the One-Factor App, which was the application that could just tolerate being turned off and on. And sometimes even that was like too much of a stretch. And so there's the state aspect to it, but then there's also the performance aspect of it and the timeliness aspect of it. And that's really what Quarkus has been looking at: if you want to have any kind of auto scaling or any kind of serverless architecture or anything like that, the way Java has historically worked, which is that it eats a lot of memory and it takes a long time to start up, just isn't going to work. And the thing that's interesting about that is, quite often when we talk about optimizing things or becoming more efficient or becoming greener, it's all about the trade-offs of like, "oh, I could have the thing I really want, or I could save the world. I guess I should save the world." But sometimes what we can do is just find things that we were paying for that we didn't even want anymore. And that's, I think, what Quarkus was able to do. Because a lot of the reason that Java has a big memory footprint, and a lot of the reason that Java is slow to start up, is it was designed for a different kind of ops. The cloud didn't exist. CI/CD didn't exist. DevOps didn't exist. And so the way you built your application was you knew you would get a release maybe once a year, and deployment was like a really big deal. 
And you know, you'd all go out and you'd have a party after you successfully deployed, because it was so challenging. And so you wanted to make sure that everything you did was to avoid having to do a deployment and to avoid having to talk to the ops team, because they were scary. But of course, even though we had this model where the big releases happened very rarely, the world still moved on; people still had defects. So what you ended up with was something that was really much more optimized towards patching. So: can we take the system and, without actually turning it off and on, because that's almost impossible, can we patch it? So everything was about trying to change the engine of the plane while the plane was flying, which is really clever engineering. If you can support that, well done you. It's so dynamic. And so everything was optimized so that you could change your dependencies and things would keep working. And, you know, you could even change some fairly important characteristics of your dependencies and everything would adjust and it would ripple back through the system. But because that dynamism was baked into every aspect of the architecture, it meant that everything just had a little bit of drag, a little bit of slowdown that came from that indirection. And then now you look at it in the cloud and you think, well, wait a minute. I don't need that indirection. I don't need to be able to patch, because I have a CI/CD pipeline, and if I'm going into my production systems and SSHing in to change my binaries, something has gone horribly wrong with my process, and I have all sorts of problems. So really what Quarkus was able to do was get rid of a whole bunch of reflection, get rid of a whole bunch of indirection, and do more upfront at build time. 
And then that gives you much leaner behavior at runtime, which is what you want in a cloud environment.

Anne Currie: Yeah. And what I love about this and love about the story of Quarkus is that it's aligned with something: non-functional requirements. It's an unbelievably boring name for something which is a real pain point for companies. But it's also, in many ways, the most important thing and the most difficult thing that we do. It's like being secure, being cost effective, being resilient. A lot of people say to me, well, you know, actually all you're doing with green is adding another non-functional requirement. We know those are terrible. But I can say, no, we need to not make it another non-functional requirement. It's just another good motivator for doing the first three well, you know. Also scaling is about resilience. It's about cost saving, and it's about being green. And being able to pave rather than patch, I think, was the term. It's more secure, you know. Actually patching is much less secure than repaving, taking everything down and bringing it back up. All the modern thinking about being more secure, being faster, being cheaper, being more resilient is aligned, or needs to be aligned, with being green. And it can be, and it should be, and it shouldn't just be about doing less.

Holly Cummins: Absolutely. And, you know, especially for the security aspect, when you look at something like tree shaking, that gives you more performance by getting rid of the code that you weren't using. Of course, it makes you more secure as well, because you get rid of all these code paths and all of these entry points and vulnerabilities that had no benefit to you, but were still a vulnerability.

Anne Currie: Yeah, I mean, one of the things that you've talked about Lightswitch Ops being related to is, well, actually not Lightswitch Ops, but the thing that you developed before Lightswitch Ops: the concept of zombie servers.
Tell us a little bit about that, because not only is it cost saving, it's a really big security improvement. So tell us about zombie servers, the precursor to Lightswitch Ops.

Holly Cummins: Yeah, zombie servers are again one of those things that I noticed when I was working with clients, but I also noticed a lot in our own development practices. What we would do was we would have a project and we would fire up a server in great excitement and, you know, we'd register something on the cloud or whatever. And then we'd get distracted. Sometimes we would develop it but fail to go to production. Sometimes we'd get distracted and not even develop it. And I think some of these costs became more visible and more obvious when we moved to the cloud, because it used to be that when you would provision a server, once it was provisioned, you'd gone through all of the pain of provisioning it and it would just sit there and you would keep it in case you needed it. But with the cloud, all of a sudden, keeping it until you needed it had a really measurable cost. And I looked and I realized, you know, I was spending, well, I wasn't personally spending, I was costing my company thousands of pounds a month on these cloud servers that I'd provisioned and forgotten about.

And then I looked at how the Kubernetes servers were being used and some of the profiles of the Kubernetes servers. And I realized that, again, each company would have many clusters. And I was thinking, are they really using all of those clusters all of the time? And so I started to look into it, and then I realized that there had been a lot of research done on it, and it was shocking. So again, I have to say I didn't coin the term zombie servers.
I talk about it a lot, but there was a company called the Antithesis Institute. And what they did, although actually, see, now I'm struggling with the name of it, because I always thought they were called the Antithesis Institute, and I think it's actually a one-letter variant of that, which is much less obvious as a word, but much more distinctive. Every time I talked about them, I mistyped it. And now I can't remember which one is the correct one, but in any case, it's something like the Antithesis Institute. And they did these surveys, and they found that something like a third of the servers that they looked at were doing no work at all. Or rather, no useful work. So they're still consuming energy, but there's no work being done.

And when they say no useful work, that sounds like a kind of low bar, because when I think about my day job, quite a lot of it is doing work that isn't useful. But it wasn't like these servers were serving cat pictures or that kind of thing. These servers were doing nothing at all. There was no traffic in, there was no traffic out. So that's just ripe for automation, to say, "well, wait a minute, if nothing's going in and nothing's coming out, we can shut this thing down." And then there was about a further third that had a utilization that was less than 5%. So again, this thing is talking to the outside world every now and then, but barely. So it's just ripe for a sort of consolidation.

But the interesting thing about zombies is, as soon as you talk about it, usually someone in the audience will turn a little bit green and they'll go, "Oh, I've just remembered that server that I provisioned." And sometimes, you know, I'm the one giving the talk and I'm like, oh, while preparing this talk, I just realized I forgot a server, because it's so easy to do.
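The triage rule Holly describes is simple enough to sketch: no traffic at all makes a server a shutdown candidate, and under 5% utilization makes it a consolidation candidate. The server names, stats, and thresholds below are illustrative, not from any real survey:

```python
# A sketch of the zombie triage rule: no traffic in or out -> shutdown
# candidate; under 5% utilization -> consolidation candidate; otherwise
# leave the server alone.
def classify(traffic_in: int, traffic_out: int, utilization: float) -> str:
    if traffic_in == 0 and traffic_out == 0:
        return "zombie: candidate for shutdown"
    if utilization < 0.05:
        return "underused: candidate for consolidation"
    return "active"

# Made-up servers: (requests in, requests out, CPU utilization fraction).
servers = {
    "forgotten-poc": (0, 0, 0.01),         # no traffic at all
    "quiet-batch":   (120, 80, 0.02),      # some traffic, 2% utilization
    "prod-api":      (50_000, 48_000, 0.62),
}

for name, stats in servers.items():
    print(f"{name}: {classify(*stats)}")
```

A real version would pull traffic and utilization from monitoring over a window of weeks rather than a single snapshot, but the decision logic is genuinely this small.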
And the way we're measured as well, and the way we measure our own productivity, is we give a lot more value to creating than to cleaning up.

Anne Currie: Yeah. And in some ways that makes sense, because creating is about growth and cleaning up is about degrowth. You want to tell the story of growth. But I've heard a couple of really interesting tales about zombie servers since you started talking about it; you may not have invented it, but you popularized it. One was from VMware, a cost-saving thing, and it's a story I tell all the time, about when they were moving data centers, setting up a new data center in Singapore. They decided to do a review of all their machines to see what had to go across. And they realized that 66 percent of their machines did not need to be reproduced in the new data center. And that was VMware, people who are really good at running data centers. So imagine what that's like. Moving data centers is a time when it often gets spotted.

But I will say, a differently disturbing story comes from a company that wished to remain nameless, although I don't think they need to, because I think it's just an absolutely bog standard thing. They were doing a kind of thriftathon-style thing of reviewing their data center to see if there was stuff that they could save money on, and they found a machine that was running at 95, 100 percent CPU. And they thought, oh my God, it's been hacked. It's been hacked. Somebody's mining Bitcoin on this, you know, or maybe it's attacking us. Who knows? And so they went and they did some searching around internally, and they found out that it was somebody who'd turned on a load test and then forgotten to turn it off three years previously.
And I would say that obviously that came up from the cost, but it also came up from the fact that the machine could have been hacked. It could have been mining Bitcoin. It could have been attacking them. It could have been doing anything. They hadn't noticed, because it was a machine that no one was looking at. And I thought those were two excellent examples of the cost and the massive security hole that comes from machines that nobody is looking at anymore. So, you know, non-functional requirements, they're really important. And

Holly Cummins: Yeah.

Anne Currie: doing better on them is also green. And also, non-functional requirements are really closely tied together.

Holly Cummins: Yeah. I mean, oh, I love both of those stories. And I've heard the VMware one before, but I hadn't heard the one about the hundred percent, the load test. That is fantastic. One of the reasons I like talking about zombies, and I think one of the reasons people like hearing about it, is partly the saving-the-world part. But also, I think when we look at greenness and sustainability, some of it is not a very cheerful topic, but with zombie servers, almost always when you discover the cases of them, they are hilarious. I mean, they're awful, but they're hilarious. And it's just this sort of stuff of, "how did this happen? How did we allow this to happen?" Sometimes it's so easy to do better. And the examples of doing bad are just something that we can all relate to. But at the same time, you sort of think, oh, that shouldn't have happened. How did that happen?

Anne Currie: But there's another thing I really like about zombie servers, and I think you've pointed this out yourself, and I plagiarized from your ideas like crazy in Building Green Software, which is one of the reasons why I got you to be a reviewer, so you could complain about it if you wanted to early on.
Holly Cummins: It also means I would agree with you a lot. Yes. Oh, this is very sensible. Very sensible. Yes.

Anne Currie: One of the things that constantly comes up when I'm talking to people about this, and when we were writing the book and when we're going out to conferences, is that people need a way in. And it's often that people think the way into building green software is to rewrite everything in C, and then they go, "well, I can't do that. So that's the end. That's the only way in. And I'm not going to be able to do it. So I can't do anything at all." Operations and zombie servers is a really good way in, because you can just do it. Instead of having a hackathon, you can just do a thriftathon: get everybody to find anything that's running that doesn't need to be running. It's not uncommon for people to find ways to halve their carbon emissions and halve their hosting costs simultaneously in quite a short period of time, and it'd be the first thing they do. So I quite like it because it's the first thing they do. What do you think about that? Is it the low-hanging fruit?

Holly Cummins: Yeah, absolutely, I think it's the low-hanging fruit. It's easy, it's kind of entertaining because when you find the problems you can laugh at yourself, and again, there's no downside and several upsides. It's this double win of: I got rid of something I wasn't even using, I have more space in my closet, and I don't have to pay for it.

Anne Currie: Yeah, I just read a book that I really should have read years and years ago, and I don't know why I didn't, because people have been telling me to read it for years, which was The Goal. It's not about tech, but it is about tech. It's kind of the book that was the precursor to The Phoenix Project, which I think a lot of people read. And it's all about TPS, the Toyota Production System.
It's a kind of Americanized version of it, how the Toyota Production System should be brought to America. And it was written in the 80s, and it's all about work in progress and cleaning your environment and getting rid of stuff that gets in your way and just obscures everything, so you can't see what's going on. Effectively, it was a precursor to lean, which I think is really very well aligned. Green and lean: really well aligned. And it's something that we don't think about, that cleaning up waste just makes your life much better in ways that are hard to imagine until you've done it. And cleaning zombie servers up just makes your systems more secure, cheaper, more resilient, more everything. It's a really good thing to do.

Holly Cummins: Yeah. And there's another way that those align as well, which I think is interesting, because it's not necessarily intuitive. Sometimes when we talk about zombie servers and server waste, people's first response is: this is terrible; the way I'm going to solve it is I'm going to put barriers in place so that getting a server is harder. And that seems really intuitive, right? Because it's like, oh yes, we need to solve it. But it has the exact opposite effect. And it seems so counterintuitive, because it seems like if you have a choice between shutting the barn door before the horses left and shutting the barn door after the horses left, you should shut the barn door before the horses left. But what happens is that if those barriers are in place, once people have a server, if they had to sweat blood to get that server, they are never giving it up. It doesn't matter how many thriftathons you do, they are going to cling to that server because it was so painful to get. So what you need to do is create these really low-friction systems where it's easy come, easy go. So it's really easy to get the hardware you need.
And so you're really willing to give it up. And that kind of self-service model, that kind of low-friction, high-automation model, is really well aligned again with lean. It's really well aligned with DevOps. It's really well aligned with cloud native. And so it has a whole bunch of benefits for us as users as well. If it's easier for me to get a server, that means I'm more likely to surrender it, but it also means I didn't have to suffer to get it, which is just a win for me personally.

Anne Currie: It is. And there's something in the little bit at the end of The Goal which I thought was, my goodness, the most amazing, a bit of a lightswitch moment for me. It's talking about ideas that basically underpin the cloud, underpin modern computing, underpin factories and also warehouses. And because I worked for a long time in companies that had warehouses, you kind of see that there are enormous analogies. And it was talking about how a lot of the good modern practice in this has been known since the 50s. And even in places like Japan, where it's really well known, I mean, the Toyota Production System is so well managed, almost everybody knows it, and every company in Japan wants to be operating in that way. Still, the penetration of companies that actually achieve it is very low; it's only like 20%. I thought, it's interesting, why is that? And then I realised that it had been kind of hinting at why throughout. And if you look on the Toyota website, they're quite clear about it. They say the Toyota Production System is all about trial and error. You can't read a book that tells you what we did and then say, "oh well, if I do that, then I will achieve the result." They say it's all about a culture of trial and error.
Then you build something which will be influenced by what we do, and influenced by what other people do, and influenced by a lot of these ideas. But fundamentally, it has to be unique to you, because anything complicated is context-specific. Therefore, you are going to have to learn from it. But one of the key things for trial and error is not making it so hard to try something, and so painful if you make an error, that you never do any trial and error. And I think that's very aligned with what you were saying: if you make it too hard, then nobody does any trial and error.

Holly Cummins: Yeah. Absolutely.

Anne Currie: I wrote a new version of it, called The Cloud Native Attitude, which was all about, you know, what are people doing? What's the UK enterprise version of the TPS, and what are the fundamentals, and what are people actually doing? And what I realized was that everybody was doing things that were quite different, that were specific to them, that used some of the same building blocks, and were quite often in the cloud because that reduced their bottlenecks over getting hardware. Because that's a common bottleneck for everybody. So they wanted to reduce the bottleneck there of getting access to hardware. But what they were actually doing was built, trial-and-error-wise, depending on their own specific context. And every company is different and has a different context. And, yeah, that is why failure can't be a four-letter word.

Holly Cummins: Yeah. Technically, it's a seven-letter word if you say failure, but...

Anne Currie: And it should be treated that way. Yeah.
I'm very aware that actually our brief for this was to talk about three articles on AI.

Holly Cummins: I have to say, I did have a bit of a panic when I was reviewing the articles, because they were very deep into the intricacies of AI policy and AI governance, which is not my specialty area.

Anne Currie: No, neither is it mine. When I was reading them, I thought quite a lot about what we've just talked about. It is a new area. As far as AI is concerned, I love AI. I have no problem with AI. I think it's fantastic. It's amazing what it can produce. And if you are not playing around on the free version of ChatGPT, then you are not keeping on top of things, because it changes all the time. And it's very like managing somebody. You get out of it what you put in. If you ask it a couple of cursory questions, you'll get a couple of cursory answers. Leaning back on Toyota again, you almost need to Five Whys it. You need to go, "no, but why?" Go a little bit deeper. Now go a little bit deeper. Now go a little bit deeper. And then you'll notice that the answers get better and better, like a person, better and better. So it really is worth playing around with it.

Holly Cummins: Just on that, I was reading an article from Simon Willison this morning, and he was talking about a similar idea: that you have to put a lot into it. He was talking about it for coding assistants, that to get good outputs, it's not trivial. And a lot of people will try it and then be disappointed by their first result and go, "oh, well, it's terrible," and dismiss it. But he was saying that one of the mistakes that people make is to anthropomorphize it.
And so when they see it making mistakes that a human would never make, they go, "well, this is terrible," and they don't think about it in terms of: well, this has some weaknesses and this has some strengths, and they're not the same weaknesses and strengths as a person would have. So I can't just see this one thing that a human would never do and then dismiss it. You need to adapt how you use it for its strengths and weaknesses, which I thought was really interesting. It's so tempting to anthropomorphize it, because it is so human-ish in its outputs, because it's trained on human inputs, but it does not have the same strengths and weaknesses as a person.

Anne Currie: Well, I would say the thing is, it can be used in lots of different ways. There are ways you can use it where, actually, it can react like a person, and therefore does need to be treated like one. I mean, if you ask it to do creative things, it's quite human-like. And it will come up with things, and it will blag. You just have to treat it that way for certain creative things. You have to go, "Is that true? Can you double check that? I appreciate your enthusiasm there, but it might not be right. Can you just double check that?" In the same way that you would do with a very enthusiastic graduate. And you wouldn't have fired them because they said something that seemed plausible, unless you'd said, "do not tell me anything that just seems plausible," and then they didn't double check. Because to a certain extent, they're always enthused. And that's where ideas come from: stretching, saying, well, you know, I don't know if this is happening, but this could happen. You have to be a little bit out there to generate new ideas and have new thoughts.
I heard a very interesting podcast yesterday where one of the Reeds, I can never remember if it was Reed Hastings or Reid Hoffman, was talking about AI energy use. And he was saying: we're not stupid, you know. Basically, there are two things that we know are coming. One is AI and one is climate change. We're not going to try and create an AI industry that requires the fossil fuel industry, because that would be crazy talk. We do all need to remember that climate change is coming, and it is a different model. And, you know, if you are building an AI system that relies on fossil fuels, then you are an idiot, because the big players are not.

I love looking at Our World in Data and looking at what is growing in the world. A chart that's really interesting to look at, if you ever feel depressed about climate change, is the global growth in solar-generated power. It's going up like it's not even exponential; it looks vertically asymptotic. It's super-exponential. It's going faster than exponential; nothing else is developing that way. Except maybe AI, but AI from a lower point. And then with AI you've got stuff like DeepSeek coming out of left field and saying, "do you know, you just didn't need to write this so inefficiently. You could do this on a lot less, and it'd be a lot cheaper, and you could do things on the edge that you didn't know that you could do." So, yeah, I'm not too worried about AI. I think that DeepSeek surprised me.

Holly Cummins: Yeah, I agree. I think we have been seeing this enormous rise in energy consumption, but that's not sustainable, and it's not sustainable in terms of climate, but it's also not sustainable financially.
And so financial corrections tend to come before the climate corrections. And so what we're seeing now is architectures that are designed to reduce the energy costs because they need to reduce the actual financial costs. So we get things like DeepSeek, where there's a sort of fundamental efficiency in the model of the architecture, or the architecture of the model, rather.

But then we're also seeing other things. Up until maybe a year ago, the way it worked was that the bigger the model, the better the results. Just, you know, absolutely. And now we're starting to see cases where the model gets bigger and the results get worse. And you see this with RAG systems as well, where when you do your RAG experiment and you feed in just two pages of data, it works fantastically well, and then you go, "okay, I'm going to proceed." And then you feed in like 2000 pages of data, and your RAG suddenly isn't really working and it's not really giving you correct responses anymore. And so I think we're seeing an architectural shift away from the really big monolithic models to more orchestrated models. Which is kind of bad in a way, right? Because it means we as engineers have to do more work. We can't just have one big monolith and say, "solve everything." But on the other hand, what do engineers love? We love engineering. So it means that there's opportunities for us.

So a pattern that we're seeing a lot now is that you have your sort of orchestrator model that takes the query in and triages it. And it says, "Is this something that should go out to the web? Because, actually, that's the best place for this news topic. Or is this something that should go to my RAG model? Is this something..." And so it'll choose the right model. Those models are smaller, and so they have a much more limited scope. But within that scope, they can give you much higher quality answers than the huge supermodel, and they cost much less to run.
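As a rough sketch of that orchestration pattern: a small router triages each query to a narrow, cheaper backend instead of sending everything to one huge model. The backend names and keyword rules below are invented for illustration; a real orchestrator would typically use a small model for the triage step rather than keyword matching:

```python
# Hypothetical sketch of an orchestrator that triages queries to smaller,
# specialized backends instead of one huge general model.
def route(query: str) -> str:
    q = query.lower()
    if "news" in q or "today" in q:
        return "web-search"            # fresh facts are best answered live
    if "internal" in q or "our policy" in q:
        return "rag-over-docs"         # company data stays with local retrieval
    return "small-general-model"       # everything else: a cheap general model

print(route("What is in the news today?"))     # web-search
print(route("Summarise our policy on leave"))  # rag-over-docs
print(route("Write a limerick about Java"))    # small-general-model
```

Each backend only has to be good within its narrow scope, which is how the pattern gets better answers for less compute than one supermodel handling everything.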
So you end up with a system, again, it's about the double win, where you have a system which maybe took a little bit more work to architect, but gives you better answers for a lower cost.

Anne Currie: That is really interesting, and more aligned as well with how power is being developed, potentially. You really want to be doing more stuff at the edge, and you want people to be doing stuff at home on their own devices, rather than just always having to go to, as you say, supermodels. Supermodels are bad. We all disapprove of supermodels.

Holly Cummins: Yeah. And that aligns with some of the privacy concerns as well: people want to be doing it at home, and certainly organizations want to be keeping their data in house. And so then that means that they need the more organization-local model to be keeping their dirty secrets in house.

Anne Currie: Well, it is true. I mean, the thing is, it is very hard to keep things secure, and sometimes you just do want to keep some of your data in house. You don't necessarily even want to stick it on Amazon if you can avoid it. But yes, that's been a really interesting discussion, and we have completely gone off topic and we've hardly talked at all about the AI regulation. I think we both agree that with AI regulation, it's quite soon to be doing it. It's interesting. I can see why the Americans have a tendency to take a completely different approach to the EU. If you look at their laws, and I did do some lecturing in AI ethics and legalities, American laws do tend to be like: well, something goes wrong, you get your pants sued off, and you fix it. EU laws tend to be about: don't even do it. You know, as you said before, close the door before the horse has bolted.
And the American law is about bringing it back. But in some ways, that exemplifies why America grows much faster than Europe does.

Holly Cummins: Yeah. When I was looking at some of the announcements that did come out of the AI summit, I have really mixed feelings, because I generally feel that regulation is good, but I also agree with you that it can have a stifling effect on growth. But one thing that I think is fairly clearly positive, that did seem to be emphasized in the announcements as well, is the open source aspect. We have, you know, sort of open source models now, but they're not as open source as open source software in terms of how reproducible they are, how accessible they are for people to see the innards of. But I was thinking a little bit, again, about the way the AI summit is making these sorts of bodies that have the public-private partnerships, which isn't anything new, but we're sort of seeing quite a few governments coming together. So the current AI announcement, I think, had nine governments and dozens of companies. And it reminded me a little bit of the birth of radio, when we had this resource, which was the airwaves, the frequencies that nobody had cared about. And then all of a sudden it was quite valuable, and there was potentially this sort of wild west of, okay, who can take this and exploit it commercially? And then governments stepped in and said, "actually, no, this is a resource that belongs to all of us. And so it needs to be managed": who has access to it, and who can just grab it. And I feel a bit like, even though in a technical sense the data all around us isn't all of ours, a lot of it is copyrighted and that kind of thing.
But if you look at the aggregate of all of the data that humanity has produced, that is a collective asset. And so how it gets used should be for a collective benefit, and regulation, making sure that it's not just one or two organizations that have the technical potential to leverage that data, is a collectively good thing.

Anne Currie: Especially at the moment, we don't want everything to be happening in the US, because maybe the US is not the friendly partner that we always thought it would be. It's, diversity...

Holly Cummins: Diversity is good. Diversity of geographic interests.

Anne Currie: Indeed. Yeah, it is. But it is early days. I'm not an anti-AI person by any stretch. In fact, I love AI. I think it really is an amazing thing. And we just need to align it with the interests of the rest of humanity.

Holly Cummins: Yes.

Anne Currie: But it is interesting. In terms of being green, the big players are not idiots. They know that things need to be aligned. But in terms of data, they certainly will be acting in their best interests. So, yeah, indeed. Very interesting. So, we are now coming to time. We've done quite a lot. There won't be much to edit out from what we've talked about today. I think it's great, it's very good. But,

Holly Cummins: Shall we talk about the Microsoft article, though?
'Cause I thought that was really interesting.

Anne Currie: Oh yeah, go for it. Yes.

Holly Cummins: Yeah, so one of the other articles that we have said that Microsoft was reducing its investment in data centers, which I was quite shocked to read, because it's the exact opposite of all of the news articles that we normally see, including one I saw this morning that said that the big three are looking at increasing their investment in nuclear. But I thought it was interesting, because I think we always tend to extrapolate from the current state and extrapolate it indefinitely forward. So we say demand for AI is growing, demand for AI will grow indefinitely. But of course, that's not sustainable, again, in financial terms, and so at some point there will be that correction. And it seems like Microsoft has perhaps looked at how much they've invested in data centers and said, "oh, perhaps this was a little bit much, perhaps let's roll back that investment just a little bit, because now we have an overcapacity of data centers."

Anne Currie: Well, I wonder how much of an effect DeepSeek had on that, with everybody looking at it and going... The thing is, well, I can say this because it's a public story, and I have it in the book: during the pandemic, the Microsoft Teams folks looked at what they were doing and asked, "could this be more efficient?" And the answer was yes, because they'd put really no effort in whatsoever to make what they were doing efficient. Really basic efficiency stuff they hadn't done. And so there was tons of waste in that system. And the thing is, when you gallop ahead to do things, you do end up with a lot of waste. DeepSeek was a great example of: you know this AI thing? We can do it on much cheaper chips and many fewer machines. You don't have to do it that way.
So I'm hoping that this means that Microsoft have decided to start investing in efficiency. It's a shame, because they used to have an amazing team who were fantastic at this kind of stuff. Holly spoke at a conference I did last year about code efficiency, Quarkus being a really good example of a more efficient platform for running Java on. The first person I had on used to work for Azure, and he was probably the world's expert in actual practical code efficiency. He got made redundant, because Microsoft at the time were not interested in efficiency. "Who cares? Pfft, go on, out." But he's now working at NVIDIA, doing all the efficiency stuff there, because some people are paying attention. Well, I think the lesson there is that maybe Microsoft were not paying that much attention to efficiency, to the idea that actually you don't need 10 data centers. It can be a very difficult change to make things really efficient, but quite often there's a lot of low-hanging fruit in efficiency.

Holly Cummins: Absolutely. And you need to remember to do it as well, because I think probably it is a reasonable and correct flow to say: innovate first, optimize second. So you don't have to be looking at that efficiency as you're innovating, because that stifles the innovation, and you might be optimizing something that never becomes anything. But you have to then remember, once you've got it out there, to go back and say, "Oh, look at all of this low-hanging fruit. Look how much waste there is here. Let's sort it out now that we've proven it's a success."

Anne Currie: Yeah. Yes. It's like "don't prematurely optimize" does not mean "never optimize."

Holly Cummins: Yes. Yes.

Anne Currie: So my strong suspicion is that Microsoft are kind of waking up to that a little bit.
The thing is, if you have limitless money and you just throw a whole load of money at things, then it is hard to go and optimize. As you say, it's a bit like that whole thing of going in and turning off those zombie machines. You know, you have to go and do it, you have to choose to do it. If you have limitless money, you never do it, because it's a bit boring, it's not as exciting as a new thing. But yeah, limitless money has its downsides as well as its ups.

Holly Cummins: Yes. Who knew?

Anne Currie: Yeah. So I think we are at the end of our time. Is there anything else you want to say before we go? It was an excellent hour.

Holly Cummins: Nope. Nope. This has been absolutely fantastic chatting to you, Anne.

Anne Currie: Excellent. It's been very good talking to you, as always. And so my final thing is, if anybody who's listening to this podcast has not read Building Green Software from O'Reilly, you absolutely should, because a lot of what we just talked about is covered in the book. Reviewed by Holly.

Holly Cummins: I can recommend the book.

Anne Currie: I think your name is on the book cover somewhere, some nice thing you said about it. So thank you very much indeed. And just a reminder to everybody: everything we've talked about is in the show notes at the bottom of the episode. And I will see you again soon on the Environment Variables podcast. Goodbye.

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
Mar 6, 2025 • 57min

AI Energy Measurement for Beginners

Host Chris Adams is joined by Charles Tripp and Dawn Nafus to explore the complexities of measuring AI's environmental impact from a novice’s starting point. They discuss their research paper, A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning, breaking down key insights on how energy efficiency in AI systems is often misunderstood. They discuss practical strategies for optimizing energy use, the challenges of accurate measurement, and the broader implications of AI’s energy demands. They also highlight initiatives like Hugging Face’s Energy Score Alliance, discuss how transparency and better metrics can drive more sustainable AI development and how they both have a commonality with eagle(s)! Learn more about our people:Chris Adams: LinkedIn | GitHub | WebsiteDawn Nafus: LinkedInCharles Tripp: LinkedInFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterNews:The paper discussed: A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning [01:21] Measuring the Energy Consumption and Efficiency of Deep Neural Networks: An Empirical Analysis and Design Recommendations [13:26]From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate | Luccioni et al [45:46]Will new models like DeepSeek reduce the direct environmental footprint of AI? | Chris Adams [46:06]Frugal AI Challenge [49:02] Within Bounds: Limiting AI's environmental impact [50:26]Events:NREL Partner Forum Agenda | 12-13 May 2025Resources:Report: Thinking about using AI? 
- Green Web Foundation | Green Web Foundation [04:06]Responsible AI | Intel [05:18] AIEnergyScore (AI Energy Score) | Hugging Face [46:39]AI Energy Score [46:57]AI Energy Score - Submission Portal - a Hugging Face Space by AIEnergyScore [48:23]AI Energy Score - GitHub [48:43] Digitalisation and the Rebound Effect - by Vlad Coroama (ICT4S School 2021) [51:11]The BUTTER Zone: An Empirical Study of Training Dynamics in Fully Connected Neural NetworksBUTTER-E - Energy Consumption Data for the BUTTER Empirical Deep Learning Dataset [51:44]OEDI: BUTTER - Empirical Deep Learning Dataset [51:49]GitHub - NREL/BUTTER-Better-Understanding-of-Training-Topologies-through-Empirical-ResultsBayesian State-Space Modeling Framework for Understanding and Predicting Golden Eagle Movements Using Telemetry Data (Conference) | OSTI.GOV [52:26]Stochastic agent-based model for predicting turbine-scale raptor movements during updraft-subsidized directional flights - ScienceDirect [52:46]Stochastic Soaring Raptor Simulator [53:58]NREL HPC Eagle Jobs Data [55:02]Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI AIAAIC | The independent, open, public interest resource detailing incidents and controversies driven by and relating to AI, algorithms and automationIf you enjoyed this episode then please either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github and LinkedIn!TRANSCRIPT BELOW:Charles Tripp: But now it's starting to be like, well, we can't build that data center because we can't get the energy to it that we need to do the things we want to do with it. we haven't taken that incremental cost into account over time, we just kind of ignored it. And now we hit like the barrier, right? Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. 
In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. If you follow a strict media diet, switch off the Wi-Fi in your house, and throw your phone into the ocean, you might be able to avoid the constant stream of stories about AI in the tech industry. For the rest of us, though, it's basically unavoidable. So having an understanding of the environmental impact of AI is increasingly important if you want to be a responsible practitioner navigating the world of AI, generative AI, machine learning models, DeepSeek, and the rest. Earlier this year, I had a paper shared with me with the intriguing title A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning, and it turned out to be one of the most useful resources I've since come across for making sense of the environmental footprint of AI. So I was over the moon when I found out that two of the authors were both willing and able to come on to discuss this subject today. Joining me today are Dawn Nafus and Charles Tripp, who worked on the paper and did all this research. And, well, instead of me introducing them, they're right here, so I might as well let them do the honors themselves. I'm just going to work in alphabetical order. Charles, I think you're slightly ahead of Dawn, so can I just give you the room to, like, introduce yourself?

Charles Tripp: Sure. I'm a machine learning and algorithms researcher, and I've been programming pretty much my whole life, since I was a little kid, and I love computers.
I researched machine learning, and reinforcement learning in particular, at Stanford, started my own company, but kind of got burnt out on it. And then I went to the National Renewable Energy Lab, where I applied machine learning techniques to energy efficiency and renewable energy problems. And while I was there, I started to realize that computing energy efficiency was an increasingly important area of study on its own. So I had the opportunity to lead an effort there to create a program of research around that topic, and it was through that work that I started working on this paper and made these connections with Dawn. I worked there for six years and just recently changed jobs to be a machine learning engineer at Zazzle. I'm continuing to do this research. And, yeah.

Chris Adams: Brilliant. Thank you, Charles. Okay, so that's NREL, as some people refer to it.

Charles Tripp: That's right. It's one of the national labs.

Chris Adams: Okay. Brilliant. And Dawn, I guess I should give you the space to introduce yourself, and welcome back again, actually.

Dawn Nafus: Thank you. Great to be here. My name is Dawn Nafus. I'm a principal engineer now in Intel Labs. I also run the Socio-Technical Systems Lab, and I sit on Intel's Responsible AI Advisory Council, where we look after what kinds of machine learning tools and products we want to put out the door.

Chris Adams: Brilliant, thank you, Dawn. And if you're new to this podcast, I mentioned my name was Chris Adams at the beginning. I work at the Green Web Foundation; I'm the director of technology and policy there. I'm one of the authors of a report all about the environmental impact of AI last year, so I have some background on this. I also work as the policy chair in the Green Software Foundation Policy Working Group. So that's another thing that I do.
We'll do our best to make sure that we link to every single paper and project on this, so if there are any particular things you find interesting, please do look for the show notes. Okay, Dawn, shall we start? I think you're both sitting comfortably, right? Shall I begin? Okay, good. So, Dawn, I'm really glad you actually had a chance to both work on this paper and share it and let me know about it in the first place. And I could tell when I read through it that there was quite an effort to, like, do all the research for this. So can I ask, what was the motivation for doing this in the first place? And were there any particular people you feel really should read it?

Dawn Nafus: Yeah, absolutely. We primarily wrote this for ourselves, in a way, and I'll explain what I mean by that. So, oddly, it actually started life in my role in Responsible AI, where I had recently advocated that Intel should adopt a Protect the Environment principle alongside our suite of other Responsible AI principles, right? Bias and inclusion, transparency, human oversight, all the rest of it. And the first thing that comes up when you advocate for a principle, and they did actually implement it, is "what are you going to do about it?" And so we had a lot of conversation about exactly that, and really started to hone in on energy transparency, in part because, you know, from a governance perspective, that's an easy thing to at least conceptualize, right? You can get a number.

Chris Adams: Mmm.

Dawn Nafus: You know, it's the place where people's heads first go to. And of course it's the biggest part of, or a very large part of, the problem in the first place. Something that you can actually control at a development level. But once we started poking at it, it was, "what do we actually mean by measuring? And for what? And for whom?"
So as an example, if we measured, say, the last training run, that'll give you a nice guesstimate for your next training run, but that's not a carbon footprint, right? A footprint is everything that you've done before that, which folks might not have kept track of, right? So, you know, we were really starting to wrestle with this. And then in parallel, in Labs, we were doing some socio-technical work on carbon awareness. And there too, we had to start with measuring, right? You had to start somewhere. And so that's exactly what the team did. And they found, interestingly, or painfully depending on your point of view, look, this stuff ain't so simple, right? If what you're doing is running a giant training run, you stick CodeCarbon in, or whatever it is, sure, you can absolutely get a reasonable number. If you're trying to do something a little bit more granular, a little bit trickier, it turns out you actually have to know what you're looking at inside a data center, and frankly, we didn't, as machine learning people primarily. And so we hit a lot of barriers, and what we wanted to do was to say, okay, there are plenty of other people who are going to find the same stuff we did, and they shouldn't have to find out the hard way. So that was the motivation.

Chris Adams: Well, I'm glad that you did, because this was actually the thing that we found as well when we were looking into this: it looks simple on the outside, and then it turns out to feel a bit like a fractal of complexity, and there are various layers that you need to be thinking about. And this is one thing I really appreciated in the paper, that that was broken out like that. So you can at least have a model to think about it.
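For listeners wondering what "sticking CodeCarbon in" boils down to, the principle behind that kind of number can be sketched in a few lines: sample the machine's power draw during a run, integrate it over time to get energy, then multiply by a grid carbon-intensity factor. The function name, the sample values, and the 0.4 kg/kWh intensity figure below are all illustrative, not measured data and not CodeCarbon's actual implementation.

```python
def energy_kwh(power_samples_w, interval_s):
    """Trapezoidal integration of power samples (watts) taken every interval_s seconds."""
    joules = sum((a + b) / 2 * interval_s
                 for a, b in zip(power_samples_w, power_samples_w[1:]))
    return joules / 3.6e6  # 1 kWh = 3.6e6 joules

# Hypothetical node power samples, one per second, during a short training run.
samples = [310.0, 420.0, 415.0, 430.0, 305.0]
kwh = energy_kwh(samples, interval_s=1.0)

# Illustrative grid carbon intensity; the real figure varies by grid and time of day.
grid_intensity_kg_per_kwh = 0.4
estimated_co2e_kg = kwh * grid_intensity_kg_per_kwh
```

The estimate is only as good as the sampling: as the conversation below gets into, what you sample, where, and how often all change the answer.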
And Charles, maybe this is actually one thing I can hand over to you, because I spoke about this kind of hierarchy of things you might do. Like, there's stuff you might do at a data facility level, right the way down to a node level, for example. Can you take me through some of the ideas there? Because I know, for people who haven't read the paper yet, that seemed to be one of the key ideas behind this: that there are different places where you might make an intervention. And this is actually a key thing to take away if you're trying to interrogate this for the first time.

Charles Tripp: Yeah, I think it's both interventions and measurement, or, I should say, it's really more estimation, at any level. And it also depends on your goals and perspective. If you are operating a data center, you're probably concerned with the entire data center, right? The cooling systems, the idle power draw, converting power to different levels, transformer efficiency, things like that. Maybe even the transmission line losses, and all of these things. And you may not really care too much about, like, the code level, right? So the types of measurements you might take there, or estimates you might make, are going to be different. They're going to be at the system level. Like, how much is my cooling system using in different operating conditions, different environmental conditions? From a user's perspective, you might care a lot more about, like, how much energy, how much carbon, is this job using? And that's going to depend on those data center variables. But there's also a degree of, well, the data center is going to be running whether or not I run my job, right? So I really care about my job's impact more.
And then I might care about much shorter-term, more local estimates, like ones that might come from measuring the power of the nodes I'm running on, which was what we did at NREL, or much higher-frequency but less accurate measurements that come from the hardware itself. Most modern computing hardware has a way to get these hardware estimates of the current power consumption, and you can log those. And there are difficulties once you start doing that: the measurement itself can cause energy consumption, right? And also potentially interfere with your software and cause it to run more slowly and potentially use more energy. So there are difficulties at that level. Yeah, but there's a whole suite of tools that are appropriate for different uses and purposes, right? Measuring the power at the wall going into the data center may be useful at the data center or multiple data center level. It still doesn't tell you the whole story, right? The losses in the transmission lines, and where that power came from, are still not accounted for. But it also doesn't give you a sense for what happens if I take interventions at the user level; it's very hard to see that from that high level, because there are many things running on the system, different conditions there. From the user's point of view, they might only care about, like, this one key piece of my software that's running, you know, the kernel of this deep learning network. How much energy is that taking? How much additional energy is that taking? And that's a very different thing, for which very different measurements, and interventions, are appropriate, right? Like changing, optimizing, a little piece of code, versus, maybe we need to change the way our cooling system works on the whole data center, or the way that we schedule jobs.
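As a concrete example of the hardware estimates Charles mentions: on Linux, Intel's RAPL interface exposes a cumulative energy counter, in microjoules, under `/sys/class/powercap`. A minimal sketch of using it might look like the following, assuming a machine where that file exists; note the counter wraps around at a maximum value, which the diffing helper has to handle.

```python
def energy_delta_uj(before_uj, after_uj, max_range_uj):
    """Difference between two cumulative RAPL-style energy readings (microjoules),
    handling the counter wrapping back to zero at max_range_uj."""
    if after_uj >= before_uj:
        return after_uj - before_uj
    return (max_range_uj - before_uj) + after_uj

def read_energy_uj(path="/sys/class/powercap/intel-rapl:0/energy_uj"):
    """Read the cumulative package energy counter. Requires a Linux machine
    with RAPL support and permission to read the powercap sysfs files."""
    with open(path) as f:
        return int(f.read())
```

In practice you would take a reading before and after the code of interest and diff them, which is exactly the kind of measurement that, as Charles notes, can itself perturb what you are measuring if done too often.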
Yeah, and the paper goes through many of these levels of granularity.

Chris Adams: Yeah, so this is one thing that really stuck out at me, because it started at the facility level, which is looking at an entire building, where you mentioned things like, say, power coming into the entire facility. And then, within that facility, there might be one or more data centers, then you're going down to things like a rack level, and then you're going down to a node level, and then even going all the way down to, like, a particularly tight loop or the equivalent of that. And when you're looking at things like this, there are questions about what happens if you make something particularly efficient at, say, the bottom level, the node level. That doesn't necessarily have an impact higher up, for example, because that capacity might just be reallocated to someone else. Or it might be that there's a certain minimum amount of power draw that you aren't able to have much of an impact on. These are some of the things I was surprised by, or not surprised by, but I really appreciated breaking these out, because one thing that was, I guess, counterintuitive when I was looking at this was that things you might do at one level can actually hinder steps further down, for example, and vice versa.

Charles Tripp: Yeah, that's right. I mean, I think two important sort of findings are, yeah, like battle scars that we got from doing these measurements. One data set we produced is called BUTTER-E, which is a really large-scale measurement of the energy consumption of training and testing neural networks, and how the architecture impacts it. And we were trying to get reasonable measurements while doing this.
And one of the difficulties is that comparing measurements between runs on different systems, even if they're identically configured, can be tricky, because different systems, based on, you know, manufacturing variances, the heat, like how warm that system is at that time, anything that might be happening in the background or over the network, anything that might be just a little different about its environment, can have real, measurable impacts on the energy consumed. So, comparing energy consumption between runs on different nodes, even with identical configurations, we had to account for biases like, oh, this node draws a little bit more power than this one at idle. And we had to adjust for that in order to make a clear comparison of what the difference was. And this problem gets bigger when you have different system configurations, or even the same configuration running in a totally different data center. So that was one tricky finding. And I think there are two other little ones I can mention; maybe we can go into more detail later. Another one, like you mentioned, is the overall system utilization: how that's impacted by a particular piece of software, a particular job running, is going to vary a lot based on what the other users of the system are doing and how that system is scheduled. So, you can definitely get into situations where, yeah, I reduced my energy consumption, but that energy is just going to be used by the system some other time, especially if the energy consumption savings I get are from shortening the amount of time I'm using a resource before someone else uses it. But it does mean that the computing is being done more efficiently, right? Like, if everyone does that, then more computing can be done within the same amount of energy. But it's hard to quantify that. Like, what is my impact?
It's hard to say, right?

Chris Adams: I see, yeah. And Dawn, go on, I can see you nodding, so I want you to come in now.

Dawn Nafus: If I can jump in a bit, I mean, I think that speaks to one of the things we're trying to bring out, maybe not literally, but make possible, which is that those things could actually be better aligned in a certain way, right? For example, when there is idle time, there are things that data center operators can do to reduce that, right? You know, you can bring things into lower power states, all the rest of it. So, in a way, the developer can't control it, but if they don't actually know that's going on, and it's just like, well, it's there anyway, there's nothing for me to do, right, that's also a problem. So in a way, you've got two different kinds of actors looking at it from very different perspectives. And the clearer we can get about roles and responsibilities, right, you can start to do things like reduce your power when things are idling. Yes, you do have that problem of somebody else is going to jump in. But Charles, I think as your work shows, there's still some idling going on, even though you wouldn't think so. Maybe you could talk a little bit about that.

Charles Tripp: Yeah, so one really interesting thing that I didn't expect going into doing these measurements and this type of analysis was, well, first, I thought, "oh great, we can just measure the power on each node, run things and compare them." And we ran into problems immediately.
Like, you couldn't compare the energy consumption from two identically configured systems directly, especially if you're collecting a lot of data, because one is just going to use slightly more than the other because of the different variables I mentioned. And then when you compare them, you're like, well, that run used way more energy, but it's not because of anything about how the job was configured. It's just that that system used a little bit more. So if I switched them, I'd get the opposite result. So that was one thing. But then, as we got into it, and we were trying to figure out, okay, now that we've figured out a way to account for these variations, let's see what the impact is of running different software with different configurations, especially neural networks with different configurations, on energy consumption. Our initial hypothesis was that it was based mainly on the size of the neural network: how many parameters, basically, how many calculations, these sorts of things. And if you look in the research, a lot of the research out there about making neural networks, and largely algorithms in general, more efficient focuses on how many operations, how many flops does this take, you know? And look, we reduced it by a huge amount, so that means we get the same energy consumption reductions. We kind of thought that was probably true for the most part. But as we took measurements, we found that it had almost no connection to how much energy was consumed. And the reason was that the amount of energy consumed had way more to do with how much data was moved around on the computer. How much data was loaded from the network? How much data was loaded from disk? How much data was loaded from disk into memory, into GPU RAM for using the GPU, into the different caching levels, and even the registers?
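The per-node bias correction Charles describes can be sketched as simple arithmetic: measure each node's idle draw, subtract it over the run's duration, and add back a common reference idle draw so runs stay comparable. This is an illustration of the idea with made-up numbers, not the exact procedure used for BUTTER-E.

```python
def corrected_energy_j(measured_j, duration_s, node_idle_w, fleet_idle_w):
    """Remove a node's idle-power bias so runs on different nodes are comparable:
    subtract this node's idle draw over the run, add back a fleet-average idle draw."""
    return measured_j - node_idle_w * duration_s + fleet_idle_w * duration_s

# Two identical jobs on nodes whose idle draw differs (illustrative numbers).
run_a = corrected_energy_j(measured_j=50_000, duration_s=100, node_idle_w=210, fleet_idle_w=200)
run_b = corrected_energy_j(measured_j=49_500, duration_s=100, node_idle_w=205, fleet_idle_w=200)
```

Here the raw readings differ by exactly the 5 W idle gap over the 100-second run, so after correction the two runs come out equal, which is the point of the adjustment.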
So if we computed how much data got moved in and out of, say, level two cache on the CPU, we could see that had a huge correlation, almost a direct correlation, with energy consumption. Not the number of calculations. Now, you could get into a situation where basically no data is leaving cache and I'm doing a ton of computing on that data. In that case, probably the number of calculations does matter, but in most cases, especially in deep learning, it has almost no connection; it's the amount of data moved. So then we thought, okay, well, it's the amount of data moved. It's the data moving. The data has a certain cost. But then we looked deeper, and we saw that actually the amount of data moved is not really what's causing the energy to be consumed. It's the stalls while the system is waiting to load the data. It's waiting for the data to come from, you know, system memory into level three cache; it needs to do some calculations on that data, so it's pulling it out, and while it's sitting there waiting, it's that idle power draw. It could be for just a millisecond, or even a nanosecond, right? But it adds up if you have billions of accesses. Each of those little stalls is drawing some power, and it adds up to quite a significant amount of power. So we found that the primary driver of the energy consumption, by far, in what we were studying in deep learning, was the idle power draw while waiting for data to move around the system. And this was really surprising, because we started with the number of calculations, which turns out to be almost irrelevant. And then we're like, well, is it the amount of data moved around? It's actually not quite the amount of data moved around, but that does cause the stalls whenever I need to access the data; it's really that idle power draw. And I think that's probably true for a lot of software.

Chris Adams: Yes.
I think that does sound about right. I'm just going to see if I follow that, because there were a few quite important ideas there, and if you aren't familiar with how computers are designed, it might be hard to follow, so I'll try to paraphrase it. We've had this idea that the main thing is the number of calculations being done. That's what we thought was the key idea.

Charles Tripp: How much work, you know.

Chris Adams: Yeah, exactly. And what we know is that inside a computer you have multiple layers of, let's call them, caches, multiple layers where you might store data so it's easy and fast to access, but that starts quite small and then gets larger and larger, and a little bit slower, at each level. So you might have, like you said, L2 cache, for example, and that's going to be much, much faster, but smaller than, say, the RAM on your system, and if you go a bit further down, you've got a disk, which is going to be way larger, and somewhat slower still. So moving between these stages so that you can process the data, that was actually one of the things that you were looking at. And then it turned out that, well, there is some correlation there, but one of the key drivers is actually the chips being kept in a kind of ready state, waiting for that data to come in. They can't really be asleep, because they know the data is going to come in and they'll have to process it. They have to be almost anticipating, at all these levels. And that's one of the big drivers of the resource use and the energy use.

Charles Tripp: I mean, what we saw was, we actually estimated how much energy it took, like, per byte, to move data from system RAM to level three cache, to level two, to level one, to a register, at each level. And in some cases, it was so small we couldn't even really estimate it.
But in most cases, we were able to get an estimate for that. A much larger cost was initiating the transfer, and even bigger than that was just the idle power draw during the time that the program executed, and how long it executed for. And by combining those, we were able to estimate that most of that power consumption, like 99 percent in most cases, was from that idle time, even those little micro-stalls waiting for the data to move around. And that's because moving the data, while it does take some energy, doesn't take that much in comparison to the amount of energy of keeping the RAM on while the data is just, like, alive in the RAM, or keeping the CPU active, right? CPUs can go into lower power states, but generally at least part of the system has to shut down, so doing it at a very fine-grained scale is not really feasible. Many systems can change power state at a faster rate than you might imagine, but it's still a lot slower than a per-instruction, per-byte level of, "I need to load this data. Okay, shut down the system and wait," right? Not for a second, for a few nanoseconds; it's just not practical to do that. So it's keeping everything on during that time that's sucking up most of the power. So one strategy, a simple strategy, but difficult to implement in some cases, is to initiate that load, that transfer, earlier. If you can prefetch the data into the higher levels of memory before you hit the stall where you're waiting to actually use it, you can probably significantly reduce the power consumption due to that idle wait. But it's difficult to figure out how to properly do that prefetching.

Chris Adams: Ah, I see. Thanks, Charles.
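The prefetching idea Charles ends on, starting the next transfer before the consumer stalls on it, can be illustrated at the software level with a background thread that keeps a small queue of items loaded ahead of the compute loop. This is a generic sketch with names of my choosing, not NREL's implementation; data-loading layers in ML frameworks (for example PyTorch's DataLoader with its worker and prefetch settings) apply the same overlap-loading-with-compute idea.

```python
import queue
import threading

def prefetch(iterable, depth=2):
    """Yield items from iterable while a background thread keeps up to `depth`
    items loaded ahead, overlapping data loading with the consumer's compute."""
    q = queue.Queue(maxsize=depth)
    _end = object()  # sentinel marking exhaustion of the source

    def producer():
        for item in iterable:
            q.put(item)  # blocks when the lookahead buffer is full
        q.put(_end)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is _end:
            return
        yield item

# Usage: the (possibly slow) loads for upcoming batches happen while the
# current batch is being processed.
for batch in prefetch(range(4)):
    pass  # compute on batch here
```

The `depth` parameter bounds how far ahead the loader runs, trading a little extra memory for fewer stalls, which mirrors the hardware trade-off described above.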
So it sounds like we might approach this, and there might be some things which feel intuitive, but it turns out there are quite a few counterintuitive things. And Dawn, I can see you nodding away sagely here, and I suspect there are a few things that you might want to add on this. Can I give you a bit of space, Dawn, to talk about some of this too? Because I know this is something that you've shared with me before: that there are maybe some rules of thumb you might use, but it's never that simple, or you realize actually that there's quite a bit more to it than that.

Dawn Nafus: Exactly. Well, I think what I really learned out of this effort is that measurement can actually recalibrate your rules of thumb, right? So you don't actually have to be measuring all the time for all reasons, but even just that simple, I mean, not so simple, story that Charles told, like, okay. You know, I spend a lot of time talking with developers and trying to understand how they work, and at a developer perception level, what do they feel like? What's palpable to them, right? Send the stuff off, go have a cup of coffee, whatever it is. So they're not seeing all that, and when I talk to them, most of them aren't thinking about the kinds of things that were just raised, right? Like, how much data are you looking at at a time? You can actually set and tweak that. Folks develop an idea about that, and they don't usually think too hard about it. So, with measuring, you can start to actually recalibrate the things you do see, right?
I think this also gets back to, you know, why is it counterintuitive that some of these mechanisms, how you actually are training, as opposed to how many flops you're doing, how many parameters, why is that counterintuitive? Well, at a certain level, the number of flops does actually matter, right? If we actually have a gigantic, I'm going to call it foundation-model-type-size stuff, I'm going to build out an entire data center for it, it does matter. But as you get down and down and more specific, it's a different ball game. And there are these tricks of scale that are sort of throughout this stuff, right? Like the fact that, yes, you can make a credible claim that a foundation model will always be more energy-intensive than something so small you can run it on a laptop, right? That's always going to be true. No measurement necessary. You keep going down and down, and, okay, let's get more specific. You can get to, actually, where our frustration really started: if you try to go to the extreme, try to chase every single electron through a data center, you're not going to do it. It feels like physics, it feels objective, it feels true, but at minimum you start to hit the observer effect, which is what we did. My colleague Nicole Beckage was trying to measure at an epoch level, essentially a mini round of training. And what she found was that she was trying to sample so often that she was pulling energy out of the processing, and it just messed up the numbers, right? So you can try to get down into what feels like more accuracy, and then all of a sudden you're in a different ballpark.
So these tricks of, like, aggregation and scale, and what you can say credibly at what level, I think are fascinating, but you kind of have to get a feel for it, in the same way that you can get a feel for, "yep, if I'm sending my job off, I know I have at least, you know, however many hours or however many days," right?

Charles Tripp: There's also so much variation that's out of your control, right? Like, one run to another, one system to another, even different times when you ran on the same system, can cause measurable and in some cases significant variations in the energy consumption. So it's more about, I think, understanding what's causing the energy consumption. I think that's the more valuable thing to do. But it's easy to be like, "I already understand it." And I think there's a historical bias towards number of operations, because in old computers there wasn't much caching or anything like this, right? Like, I restore old computers, and on, like, an old 386 or IBM XT, right, it has registers in the CPU and then it has main memory, and almost everything is basically: how many operations I'm doing is going to closely correlate with how fast the thing runs, and probably how much energy it uses, because most of the energy consumption on those systems is just basically constant, no matter what I'm doing, right? It doesn't idle down the processor while it's not working, right? So there's a historical bias that's built up over time that was focused on operation counts, and it's also at the programmer level. Like, I'm thinking about what is the computer doing?
Chris Adams: What do I have control over?

Charles Tripp: But it's only through actually measuring it that you gain a clearer picture of what is actually using energy. And I think if you get that picture, then you'll gain more of an understanding of how can I make this software, or the data center, or anything in between, like job allocation, more energy efficient. But it's only through actually measuring that we can get that clear picture. Because if we guess, especially using our biases from how we learned to use computers, how we learned about how computers work, we're actually very likely to get an incorrect understanding, an incorrect picture, of what's driving the energy consumption. It's much less intuitive than people think.

Chris Adams: Ah, okay, there's a couple of things I'd like to comment on, and then, Dawn, I might give you a bit of space on this. So we were just talking about flops as a thing that people are used to looking at, and it's literally written into the AI Act: things above a certain number of flops are considered, you know, foundational models, for example. So that's a really good example of what this actually might be. And I guess the other thing that I wanted to touch on is that I work in kind of web land, and, I mean, the Green Web Foundation is a clue in our organization's name. We've had exactly the same thing, where we've been struggling to understand the impact of, say, moving data around, and how much credence you should give to that versus things happening inside a browser, for example. It looks like you've got some similar kinds of issues and things to be wrestling with here.
But Dawn, I wanted to give you a bit of space, because both of you alluded to this idea of having an understanding of what you can and what you can't control, and how you might have a bias for doing one thing and then miss something much larger elsewhere, for example. Can I maybe give you a bit of space to talk about this idea of, okay, well, which things should you be focusing on, and also understanding what's within your sphere of influence? What can you control? What can't you control, for example?

Dawn Nafus: Exactly. I think in a sense you've captured the main point, which is that measurements are most helpful when they are relevant to the thing you can control, right? So as a very simple example, there are plenty of AI developers who have a choice in what data centers they can use. There are plenty who don't, right? You know, when Charles worked at NREL, right, the supercomputer was there. That was it. You're not moving, right? So, if you can move, that overall data center efficiency number really matters, because you can say, alright, "I'm putting my stuff here and not there." If you can't move, there's no need to mess with it. It is what it is, right? At the same time, and this gets us into this interesting problem again, there's a tension between what you might look at from a policy perspective versus what a developer might look at. We had a lot of, kind of, you know, can I say, come to Jesus? We had a little moment, where we, is that on a podcast? I think I can. Where there was this question of: are we giving people a bum steer by focusing at, you know, granular developer level stuff, right? Where so much actually is on how you run the data center, right? So again, you talk about tricks of scale.
On the one hand, the amount of energy that you might be directly saving just by, you know, using or not using, by the time all of those things move through the grid and you're talking about energy coming off of the transmission cables, right, in aggregate might not actually be directly that big. It might be, but it might not be. And then you flip that around and you think about what aggregate demand looks like, and the fact that so much of AI demand is, you know, what's putting pressure on our electricity grid, right? Then that's the most effective thing you could do, is actually get these very specific individual jobs down and down, right? So, again, it's all about what you can control, but whatever perspective you take is just going to flip your understanding of the issue around.

Chris Adams: So this was actually one thing I quite appreciated from the paper. There were a few things saying, and it does touch on this idea, that yeah, you might be focusing on the thing that you feel that you're able to control, but just because you're able to make one part of the system very efficient here, that doesn't necessarily translate into a saving higher up in the system, simply because if higher up in the system isn't set up to actually take advantage of that, then you might never achieve some of these savings. It's a little bit like when you're working in cloud, for example: people tell you to do all these things to kind of optimize your cloud savings.
But if people are not turning data centers off, at best you might be slowing the growth of infrastructure rollout in future, and these are much, much harder things to claim responsibility for, or to say, "yeah, if it weren't for me doing those things, we wouldn't have had that happen." This is one of the things that I appreciated the paper making some allusions to. To be honest, when I was reading this, I was like, wow, there was obviously some stuff for beginners, but there's actually quite a lot here which is quite meaty for people who are thinking of it at a much larger systemic level. So there are definitely things experts could take away from this as well. So, I just want to check: are there any particular takeaways the two of you would like to draw people's attention to, beyond what we've been discussing so far? Because I quite enjoyed the paper and there are a few kind of nice ideas from it. Charles, if I just give you a bit of space to kind of come in.

Charles Tripp: Yeah. I've got kind of two topics that I think build on what we talked about before, but could be really useful for people to be aware of. So one is, one of the outcomes of our studying the impact of different architectures, data sets, and hyperparameter settings on deep neural network energy consumption was that the most energy efficient networks, and largely that correlates with the most time efficient as well, but not always, were not the smallest ones, and they were not the biggest ones, right? The biggest ones just required so much data movement. They were slow. The smallest ones took a lot more iterations, right? It took a lot more for them to learn the same thing.
And the most efficient ones were the ones where the working sets, the amount of data that was moved around, matched the different cache sizes. So as you made the network bigger, it got more efficient because it learned faster. Then, when it got so big that the data between layers, the communication between layers, for example, started to spill out of a cache level, it became much less energy efficient, because of that data movement stall happening. So we found that there is an optimum point there. And for most algorithms this is probably true: if the working set is sized appropriately for the memory hierarchy, you gain the most efficiency, right? Because generally, as I use more data at a time, I can get my software to work better, right, more efficiently. But there's a point where it falls out of the cache, and that becomes less efficient. Exactly what point is going to depend on the software. But I think focusing on that working set size, and how it matches the hardware, is a really key piece for almost anyone looking to optimize software for energy efficiency. How much data am I moving around, and how does that map to the cache? So that's a practical thing.

Chris Adams: Can I stop you? Because I find that quite interesting, in that a lot of the time as developers we're kind of taught to abstract away from the underlying hardware, and that seems to be going the other way. That's saying, "no, you do need to be thinking about this. There's, you know, no magic trick."

Charles Tripp: Right. And so, for neural networks, that could mean sizing my layers so that those working sets match the cache hierarchy, which is something that no one even considers. It's not even close in most architectures. Like, no one has even thought about this.
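A minimal sketch of the working-set check Charles describes: estimate the data a layer-to-layer pass moves, and compare it against the machine's cache sizes. The cache sizes, batch size, and float32 activation assumption below are hypothetical placeholders; query your actual hardware (for example, via `lscpu`) for the real hierarchy.

```python
# Sketch: does the activation "working set" between two dense layers fit in cache?
# Cache sizes are invented placeholders, not any specific CPU's hierarchy.

BYTES_PER_FLOAT = 4  # assuming float32 activations

CACHES = {               # hypothetical per-core hierarchy, in bytes
    "L1": 48 * 1024,
    "L2": 1 * 1024 * 1024,
    "L3": 32 * 1024 * 1024,
}

def working_set_bytes(batch_size: int, layer_width: int) -> int:
    """Rough size of the activations handed from one layer to the next."""
    return batch_size * layer_width * BYTES_PER_FLOAT

def deepest_fitting_cache(ws: int) -> str:
    """Smallest cache level the working set fits in, or DRAM if it spills."""
    for level, size in CACHES.items():
        if ws <= size:
            return level
    return "DRAM"  # spilled out of cache entirely: expect data-movement stalls

for width in (256, 4096, 65536, 1048576):
    ws = working_set_bytes(batch_size=32, layer_width=width)
    print(f"width {width:>7}: {ws:>11} B -> {deepest_fitting_cache(ws)}")
```

The point of Charles's finding is the last case: once the between-layer traffic lands in DRAM rather than cache, the energy per useful operation jumps, so the "optimum" network size sits just inside a cache level rather than at either extreme.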
The other thing is on your point about data center operations and the different perspectives. One thing that we started to think about as we were doing some of this work was that it might make sense to allocate time, or in the case of a commercial cloud operator, even charge fees, based at least partly on the energy rather than the time, so as to incentivize users to use less energy, right? Like, make things more energy efficient. Those can be correlated, but not always, right? And another piece of that same puzzle I want to touch on is that, from a lot of data center operators' perspective, they want to show their systems fully utilized, right? Like, there's demand for the system, so we should build an even bigger and better system. When it comes to energy consumption, that's probably not the best way to go, because that means that those systems are sitting there probably doing inefficient things. Maybe even idling a lot of the time, right? Like, a user allocated the node, but it's just sitting there doing nothing, right? It may be more useful, instead of thinking about how much the system is always being utilized, to think about how much computation, or how many jobs, or whatever your utilization metric is, do I get per unit energy, right? Or per unit carbon, right? And you may also think about how much energy savings can I get by doing things like shutting down nodes when they're unlikely to be utilized, and more about having a dynamic capacity, right? Like, at full tilt I can do however many flops, right? But I can also scale that down to reduce my idle power draw by, you know, 50 percent in low demand conditions. And if you have that dynamic capacity, you may actually be able to get even more throughput, but with less energy, because when there's no demand, I'm scaling down my data center, right?
And then when there's demand, I'm scaling it up. But these are things that require cultural changes in data center operations to happen.

Chris Adams: I'm glad you mentioned this, because, Dawn, I know that you had some notes about this. It sounds like, in order to do that, you probably need different metrics exposed, or different kinds of transparency to what we have right now. Probably more, actually. Dawn, can I give you a bit of space to talk about this? Because this is one thing that you told me about before, and it's something that is actually touched on in the paper quite a few times.

Dawn Nafus: Yeah, I mean, I think we can notice a real gap between the kinds of things that Charles brings his attention to and the kinds of things that show up in policy environments, in responsible AI circles, right, where I'm a bit closer. We can be a bit vague there, and I think we are at the stage, at least my read on the situation is, that regardless of where you sit in the debates, and there are rip-roaring debates about what to do about the AI energy situation, transparency is probably the one thing we can get the most consensus on. But then, like, just back to that: what the heck does that mean? And I think we need a few more beats than are currently given to what work those measurements are actually doing. You know, some of the feedback we've gotten is, "well, can't you just come up with a standard?" Like, what's the right standard? It's like, well, no, actually. If data centers aren't standard, and there are many different ways to build a model, then, yes, you can have a standard as a way of having a conversation across a number of different parties to do a very specific thing. Like, for example, Charles's example suggested that if we're charging on a per-energy basis, that changes a whole lot. Right?
But what you can't do is to say, this is the standard that is the right way to do it, and then that meets the requirement. Because what we found is that clearly the world is far more complicated and specific than that. So I would really encourage the responsible AI community to start to get very specific very quickly, which I don't yet see happening, but I think it's just on the horizon.

Chris Adams: Okay. Well, I'm glad you mentioned maybe taking this a little bit wider, 'cause we've spent a lot of time talking about this paper, but there are other things happening in the world of AI, and I want to give you folks a bit of space to talk about anything that you would like to direct some attention to, or that you've seen and found particularly interesting. Charles, can I give you some space first, and then give Dawn the same, to shout out or point to some particular things that, if people have found this conversation interesting so far, they might want to be looking at?

Charles Tripp: Yeah. I mean, I think, both in computer science at large and especially in machine learning, we've kind of had an attitude, especially within deep learning, of throwing more compute at the problem, right? And more data. The more data that we put through a model, and the bigger and more complicated the model is, the more capable it can be. But this brute force approach is one of the main things that's driving this increasing computing energy consumption, right? And I think that it is high time that we start taking a look at making the algorithms we use more energy efficient, instead of just throwing more compute at them.
It's easy to throw more compute at it, which is why it's been done. And also because there hasn't been a significant material incremental cost. Like, "oh, you know, now we need twice as many GPUs. No big deal." But now we're starting to hit constraints, because we haven't thought about that incremental energy cost. We haven't had to, as an industry at large, right? But now it's starting to be like, well, we can't build that data center, because we can't get the energy to it that we need to do the things we want to do with it. We haven't taken that incremental cost into account over time; we just kind of ignored it. And now we've hit the barrier, right? And so I think thinking about the energy costs, and probably this means investing in finding more efficient algorithms and more efficient approaches, as well as more efficient ways to run data centers and run jobs, is gonna become increasingly important, even as our compute capacity continues to increase. The energy costs are likely to increase along with that as we use more and more, and we need to create more generation capacity, right? Like, it's expensive. At some point we're really driving that energy production, and that's going to be an increasingly important cost, as well as, like now, starting to be a constraint on what kind of computing we can do. So I think investing in more efficient approaches is going to be really key in the future.
Chris Adams: There's one thing that I think Dawn might come in on this, actually. It seems that you're talking about having more of a focus on surfacing the fact that resource efficiency is actually going to be something that we probably need to value, because as I understand it, it's not particularly visible in benchmarks or anything like that right now. And if you have benchmarks deciding what counts as a good model or a good use of this, until that's included, you're not going to have anything like this. Is that the kind of stuff you're suggesting we should probably have? Like, some more recognition of the energy efficiency of something, being something that you draw attention to, or that you include in counting something as good or not, essentially?

Dawn Nafus: You know, I have a particular view of efficiency. I suspect many of your listeners might, as well. I think it's notable that at the moment when we're seeing the model of the month, apparently, or the set of models, DeepSeek, come onto the scene, immediately we're starting to see, for the first time, a Jevons paradox showing up in the public discourse. So this is the paradox that when you make things more efficient, you can also end up stimulating so much demand...

Chris Adams: Absolute use grows even though it gets individually more efficient.

Dawn Nafus: Yeah, exactly. Again, this is like this topsy-turvy world that we're in.
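The paradox the two of them summarise fits in a couple of lines of arithmetic: if per-unit efficiency improves 4x but the resulting cheapness induces 10x the demand, absolute energy use still rises. The figures below are invented purely to illustrate the mechanism; they are not real measurements of DeepSeek or any other model.

```python
# Jevons paradox in miniature: each unit gets cheaper, total use rises anyway.
# All numbers are illustrative assumptions, not real measurements.

energy_per_query_j = 10.0   # before: joules per inference (made up)
queries_per_day = 1_000_000

new_energy_per_query_j = energy_per_query_j / 4  # a 4x efficiency gain
new_queries_per_day = queries_per_day * 10       # demand induced by cheapness

before = energy_per_query_j * queries_per_day
after = new_energy_per_query_j * new_queries_per_day

print(f"before: {before / 1e6:.0f} MJ/day, after: {after / 1e6:.0f} MJ/day")
# Absolute consumption grows 2.5x even though each query is 4x cheaper.
```

Whether demand actually grows faster than efficiency improves is an empirical question in each case, which is exactly why the measurement and transparency work discussed above matters.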
And so, you know, now the Jevons paradox is front-page news. My view is that, yes, again, we need to be particular about what sorts of efficiencies we are looking for where, and not, you know, sort of willy-nilly create an environment, which I'm not saying you're doing, Charles, but what we don't want to do is create an environment where, if you can just say it's more efficient, then somehow we're all good, right? Which is what some of the social science of Energy Star has actually suggested is going on. With that said, right, I am a big fan of the Hugging Face Energy Star initiative. That looks incredibly promising. And I think one of the things that's really promising about it, so this is leaderboards: when people put their models up on Hugging Face, there's some energy measurement that happens, some carbon measurement, and then leaderboards are created and all the rest of it. And I can imagine issues as well, but, A, you're creating a way to give some people credit for actually looking. B, you're creating a way of distinguishing between two models very clearly, right? So in that context, do you have to be perfect about how many kilowatts or watts or whatever it is? No, actually, right? You're looking at more or less comparable models. But C, it also interjects this kind of path dependence. Like, who is the next person who uses it? Right? That really matters. If you're setting up something early on, yes, they'll do something a little bit different. They might not just run inference on it. But you're changing how models evolve over time, and kind of steering it towards even having energy presence at all. So that's pretty cool to my mind. So I'm looking forward to...

Chris Adams: Cool. We'll share a link to the Hugging Face.
I think it was initially called the Energy Star Alliance, and then I think they've been told that they need to change the name to the Energy Score Alliance, because I think Energy Star turned out to be a trademark. But we can definitely add a link to that in the show notes, because this is actually officially visible now. It's something that people have been working on since late last year, and we'll share a link to the actual code on GitHub to run this, because it works for both closed source models and open source models. So it does give some of that visibility. Also in France, there is the Frugal LLM challenge, which also sounds similar to what you're talking about: this idea of essentially trying to pay a bit more attention to the energy efficiency aspect of this. And I'm glad you mentioned the DeepSeek thing as well, because suddenly everyone in the world is an armchair expert on William Stanley Jevons' paradox. Everybody knows! Yeah.

Dawn Nafus: Actually, if I could just add one small thing, since you mentioned the Frugal effort in France: there's a whole computer science community, sort of at arm's length from the AI development community, that's really into just saying, "look, what is the purpose of the thing that I'm building, period." And that world, you know, frugal computing, computing within limits, all of that, is really about how do we get something that somebody is going to actually value, as opposed to getting to the next score on a benchmark leaderboard somewhere.
So I think that's kind of also lurking in the background here.

Chris Adams: I'm glad you mentioned this. We'll add links to both of those, and you immediately make me think of, so we're technologists mostly, the three of us, we're talking about this, and I work in a civil society organization. Just this week, there was a big announcement, a kind of set of demands from civil society about AI, that's being shared at the AI Action Summit, this big summit where all the great and good are meeting in Paris, as you alluded to, next week, to talk about what should we do about this. It's literally called Within Bounds, and we'll share a link to that. And it does talk about this, like, well, if we're going to be using things like AI, we need to have a discussion about what they're for. And it's the first thing I've seen which actually talks about having some concrete limits on the amount of energy for this, because we've seen that if this is a constraint, it doesn't stop engineers. It doesn't stop innovation. People are able to build new things. What we should also do is share a link to, I believe, Vlad Coroamă. We did an interview with him all about Jevons paradox, I think late last year, and that's a really nice deep dive for people who want to sound knowledgeable in these conversations on LinkedIn or social media right now; it's a really useful one there as well. Okay, so we spoke a little bit about these ones here. Charles, are there any particular projects you'd like to name-check before we start to wrap up? Because I think we're coming up to the hour now, actually.

Charles Tripp: Not any in particular, but I did mention earlier, you know, we published this BUTTER-E data set and a paper along with it, as well as a larger one without energy measurements called BUTTER. Those are available online.
You can just search for them and you'll find them right away. I think, if that's of interest to anyone hearing this, there's a lot of measurements and analysis in there, including all the details of the analysis that I mentioned, where we had this journey from number of compute cycles to, like, amount of stall, in terms of what drives energy consumption.

Chris Adams: Ah, it's visible so people can see it. Oh, that's really cool. I didn't realize that. Also, while you're still here, Charles, while I have access to you: before we did this interview, you mentioned there's a whole discussion about wind turbines killing birds, and you were telling me this awesome story about how you were able to model the paths of golden eagles to essentially avoid this kind of bird strike stuff happening. Is that in the public domain? Can we link to that? That sounded super cool.

Charles Tripp: There are several papers. I'll have to dig up the links, but there are several papers we published, and some software also, to create these models. But yeah, I worked on a project where we took eagle biologists and computational fluid dynamics experts and machine learning experts, and we got together and created some models based off of real data, real telemetry tracking golden eagle flight paths in many locations, including at wind sites, and matched that up with the atmospheric conditions, the flow field, like orographic updrafts, which is where the wind hits, you know, a mountain or a hill, and some of it blows up, right? And golden eagles take advantage of this, as well as thermal updrafts caused by heating at the ground, right, causing the air to rise, to fly. Golden eagles don't really like flapping. They like gliding.
And because of that, golden eagles and other soaring birds have flight paths that are fairly easy to predict, right? Like, you may not know, oh, are they going to take a left turn here or a right turn there, but generally they're going to fly in the places where there are strong updrafts. And using actual data and knowledge from the eagle biologists, and simulations of the flow patterns, we were able to create a model that informs where wind turbines should be sited, and also how they operate, right? Like, under what conditions, what wind conditions in particular and what time of year, which also affects the eagles' behavior, should I perhaps reduce my usage of certain turbines to reduce bird strikes? And in fact, we showed that it could be done without significantly, or even at all, impacting the energy production of a wind site. You could significantly reduce the chances of colliding with a bird.

Chris Adams: And it's probably good for the birds too, as well, isn't it? Yeah. Alright, we definitely need to find some links for that. That's going to be absolute catnip for the nerdy listeners who are into this. Dawn, can I just give you the last word? I mean, actually, I should say we'll add links to you and Charles online, but if there's anything that you would draw people's attention to before we wrap up, what would you plug here?

Dawn Nafus: I actually did want to just give a shout out to the National Renewable Energy Lab, period. One of the things that is amazing about them, speaking of eagles, a different eagle, is that they have a supercomputer called Eagle. I believe they've got another one now. It is lovingly instrumented with all sorts of energy measurements; basically anything you can think to measure, I think you can do it in there. There's another data set, from another one of our co-authors, Hilary Egan, that has some jobs data.
You can dig in and explore what a real-world data center job situation looks like. So I just want to give all the credit in the world to the National Renewable Energy Lab and the stuff they do on the computing side. It's just phenomenal.

Chris Adams: Yes, I would echo that very much. I'm a big fan of NREL and their output. It's really, like, a national treasure. Folks, thank you so much for taking me through all of this work and diving in as deeply as we did, and referring to things that soar as well, actually, Charles. I hope we can do this again sometime soon, but otherwise, have a lovely day, and thank you once again for joining us. Lovely seeing you two again.

Charles Tripp: Good seeing you.

Chris Adams: Okay, ciao! Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
Feb 27, 2025 • 53min

The Week in Green Software: Transparency in Emissions Reporting

Dive into the latest buzz on emissions reporting and the innovative AI Energy Score project by Hugging Face. The discussion tackles the complexities of measuring AI's environmental impact and the importance of collaboration in establishing benchmarks. Key policy shifts, including an executive order on clean energy for data centers, spark debates about ethical considerations and local community impacts. Plus, explore a beginner's guide to energy measurement for computing and upcoming events focused on Green AI initiatives.
Feb 20, 2025 • 1h 1min

How to Tell When Energy is Green with Killian Daly

In this episode, host Chris Adams is joined by Killian Daly, Executive Director of EnergyTag, to explore the complexities of green energy tracking and carbon accounting. They discuss the challenges of accurately measuring and claiming green energy use, including the flaws in current carbon accounting methods and how EnergyTag is working to improve transparency through time-based and location-based energy tracking. Killian shares insights from his experience managing large-scale energy procurement and highlights the growing adoption of 24/7 clean energy practices by major tech companies and policymakers. They also discuss the impact of green energy policies on industries like hydrogen production and data centers, emphasizing the need for accurate, accountable energy sourcing, and we find out just how tubular Ireland can actually be!

Learn more about our people:
Chris Adams: LinkedIn | GitHub | Website
Killian Daly: LinkedIn | Website

Find out more about the GSF:
The Green Software Foundation Website
Sign up to the Green Software Foundation Newsletter

Resources:
GHG Protocol [09:15]
Environment Variables Podcast | Ep 82 Electricity Maps w/ Oliver Corradi [32:22]
Masdar Sustainable City [58:28]

If you enjoyed this episode then please either:
Follow, rate, and review on Apple Podcasts
Follow and rate on Spotify
Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:
Killian Daly: We need to think about this kind of properly and do the accounting correctly. And unfortunately, we don't do the accounting very well today.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams.
Hello, and welcome to another edition of Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. When we write software, there are some things we can control directly. For example, we might be able to code in a tight loop ourselves, or design a system that scales to zero when it's not in use. And if we're buying from a cloud vendor, like many of us do now, we're often buying digital resources, like gigabytes of RAM and disk, or maybe virtual CPUs, rather than physical servers. It's a little bit less direct, but we still know we have a lot of scope to control the impact of our decisions, and what kind of environmental consequences come about from them. However, if we look one level further down the stack, like how the energy powering our kit is sourced, our control is even more indirect. We rarely, if ever, directly choose the kind of generation that powers the data centers that our code runs in. But we know it still has an impact. So if we want to source energy responsibly, how do we do it? If you want to know this, it's a really good idea to talk to someone whose literal job for years has been buying lots and lots of clean energy, who is intimately familiar with the standards involved in doing so, and who has spent a lot of time thinking about how to make sure you can tell when the energy you're buying really is green. Fortunately, today I'm joined by just that person, Killian Daly, the Executive Director of the standards organization EnergyTag. Killian, it's really nice to have you on the pod. Thanks for coming on.

Killian Daly: Yeah, thanks. Thanks very much for having me, Chris. Great to be on the pod, and I'm an avid listener also. So it's always nice to contribute.

Chris Adams: Thank you very much.
Killian, I'm going to give you a bit of space to introduce yourself. I've just mentioned that you're involved in EnergyTag, and we'll talk a little bit about what EnergyTag does. Because I know you, and because, well, I met you maybe three years ago, I figured it might be worth just talking a little bit about our lives outside of green software and sustainability. So, we were in this accelerator with the Green Web Foundation talking about a fossil free internet, and you were talking about EnergyTag and why it's important to track the provenance of energy. I remember we were asked about our passions, and you told me about surfing, and I never ever thought about Ireland as a place where you would surf, because I didn't think it was all that warm. So can you maybe enlighten me here? Because it's not the first country I think of when I think of surfing, and when you said that I was like, "he's having a joke, right?"

Killian Daly: Yeah. Well, I do like to joke, but this is not actually one of the jokes. Well, it doesn't need to be warm to surf. You just need to have waves, I suppose. So, yeah, it's something since I was really very young. I've always gone to the west coast of Ireland. Beautiful County Clare, near the Cliffs of Moher. Maybe people know of them. And so we go every year. And my cousins, since a very young age, started surfing. We just, you know, saw these big waves, and there's other people out there surfing, bodyboarding, and we're like, "Hey, let's try that out. That looks really cool." So, yeah, since I was, I don't know, 6 or 7 years old, I've been going there every year, in summer, also in winter. Me and my cousins go at New Year's, get into the frigid cold Atlantic. And, yeah, it's magic, really.
If you have the right wetsuit, you can get through anything.

Chris Adams: So there's no such thing as bad weather, just bad clothing, and that also applies to wetsuits.

Killian Daly: Yeah. Yeah. Couldn't apply more. And obviously, in winter, you get the biggest swells, right? So actually, people probably don't know it, but Ireland has some of the biggest waves in the world. On the west coast of Ireland, you have really massive 50, 60 foot waves. You can get some sort of all-time surf there. So, yeah, it's one of our better kept secrets.

Chris Adams: I was not expecting to learn how to go totally tubular on this podcast. Wow, that's, yeah, that's...

Killian Daly: It's not for the faint of heart, but yeah, I would definitely recommend it.

Chris Adams: Actually, now that you mention that, and going back to the world of energy, now that people talk about Ireland as the Saudi Arabia of wind, and it being windy AF, then I can kind of see where you're coming from with it, actually. It does make a bit more sense. So yeah, thank you for that little segue, actually, Killian. Okay, so we've started to talk a little bit about energy. And I know that the organization you work for right now is called EnergyTag. But previously, as I understood it, you worked in other organizations, and you've been working as a kind of buyer of energy, so you know a fair amount about actually sourcing electricity and how to do that in a responsible way. And we spoke about this before, you mentioned that, "yeah, I'm used to buying significant amounts of power" in your previous life.
Could you maybe provide a bit of background there, so we can talk a little bit about context and size? Because that might be helpful for us talking about the relative size that tech giants might buy, and how much of that is applicable.

Killian Daly: Yeah, sure. So, I've been thinking about energy for a long time, even before my professional career. I studied energy and electrical engineering from when I was 18 years old and did a master's in that also. And then obviously in my working life as well, I've basically always been in the energy sector. So before EnergyTag, I was basically overseeing the global electricity portfolio, and the procurement of electricity, for a company called Air Liquide, which is a large French multinational that produces liquid air. So, oxygen, nitrogen, all the different parts of air, which are essential feedstocks into various industries, and they consume a lot of electricity. So, the portfolio my team oversaw was about 35 to 40 terawatt hours of electricity consumption.

Chris Adams: Okay.

Killian Daly: Yeah, it's a lot. It's more than my home country, Ireland. It's about the same as Google and Microsoft...

Chris Adams: Put together, yeah. Okay, so, wow.

Killian Daly: So, it's pretty big stuff. And obviously, when you're working on something like that globally, looking at various electricity markets, operating in 80 countries in these huge volumes, I suppose you kind of learn a lot about what it means to buy power.

Chris Adams: I guess if you're looking at something which is basically as much power as an entire country, then there's going to be country sized carbon emissions, depending on what you choose to power this from. And I guess that's probably why we have ways of tracking power.
I mean, tracking the carbon emissions from various things like this, through something called the GHG Protocol, which is the kind of gold standard for talking about some of that stuff. And this is something that I think you have some exposure to. I remember us sitting down one time and you telling me about how there's a thing called scope 1 and there's a thing called scope 2, and that scope 2 was actually a relatively new idea. Can you maybe explain, to someone who's heard of carbon footprinting and knows there's a thing called scopes, why anyone would care about scope 2 in the first place? And how did it come about? Because it seems like it's not intuitive for most people when they start thinking about carbon footprints and stuff like that.

Killian Daly: Yeah. I think the obvious first thing you need to take into account when you think of, like, a company's emissions is, well, what are they burning themselves on site? Do they have gas boilers burning gas? Are they burning coal to produce electricity? So that's, I think, very intuitive and obvious. But actually that is not the end of the story. And there's actually a very funny anecdote, a true anecdote, from the legendary Laurent Segalen, who does the Redefining Energy podcast and is a general energy guru. He was actually involved in the creation of a lot of the carbon accounting standards that are used today, this Greenhouse Gas Protocol standard, which is basically used by over 90 percent of companies now to report their carbon emissions. It is the Bible of how carbon accounting works, right? And so, 20 years back, he was down in Australia visiting an aluminum smelter. On site, they were explaining, "this is a very low carbon product. We hardly burn any fossil fuels on site. This is incredibly clean production."
Chris Adams: Big chunks of aluminium, right? Okay, right.

Killian Daly: Aluminum, aluminum smelting. So, like, one of the biggest metallic commodities that we have, very energy intensive. And so, he was there on site and just saw these big overhead wires coming in from yonder, from somewhere, right? And he said, hang on, what are those big cables above? And they were like, "oh, yeah, that's the electricity," obviously driving the smelter, because aluminium, it's all about electricity. That's what powers an aluminium production facility. And so he said, well, hang on, where is that coming from? They're like, "oh, no, don't worry about that. That's not our responsibility." Well, it absolutely is, right? So you need to think about where is that electricity coming from? How is that being produced? And in that case, it was coming from a very large multi gigawatt coal power plant right next door.

Chris Adams: Okay. All right. So I thought you were going to say it was maybe something clean, like a hydro power station, but no, just a big, fat, dirty, great coal fired power station was the thing generating all the power for it. And that's where...

Killian Daly: Absolutely. So, that's just a bit of an anecdote about why it's so important to think about what we call scope 2 emissions, the emissions of the electricity that I'm consuming. Because especially as we electrify the economy, right, more and more emissions are going to become scope 2 emissions. They're going to be related to someone else either burning fossil fuels to produce electricity and give it to a consumer or, ideally, using clean energy sources to generate that electricity without carbon emissions. We need to think about this properly and do the accounting correctly. And unfortunately, we don't do the accounting very well today.

Chris Adams: Alright, so previously, there wasn't even this notion of scope 2 in the standard. You might have just had direct emissions, and then maybe this kind of bucket of indirect stuff, which is really hard to measure, so you're not going to really try to measure it. Okay, so, I remember actually reading about some of this myself, and I always wondered, where do some of these figures come from? Where does even the notion of a protocol like this come from? And one of the things I realized, particularly with the GHG one, when I listened to Laurent Segalen speaking about some of this, was that it was essentially Shell, the oil company, who basically said, "we have a way of tracking our own emissions. Why not use that as a starting point for talking about how we do carbon accounting?" And then scope 2 was a new concept. That was one of the things that they were pushing for. But I suppose this speaks to the idea of who's in those rooms for those working groups, because that is going to totally change the framing of how we talk about some of this. And I guess that's probably, is this a little bit like why you started getting involved with things like EnergyTag, so you could take part in those discussions? Because if this is what we're going to use to define how we do this or how we do that, just like you have people talking about how BP had an impact on changing how we think about carbon footprints from an individual point of view, you do need people involved in that conversation to say, "actually, no, that's possibly not the best way to think about this, and there are other ways to take this into account." I mean, is this why you got involved in the EnergyTag stuff?

Killian Daly: Yeah, it's one of the main reasons. Because I used to work for one of the world's largest electricity consumers, and so I was responsible for calculating all of the electricity emissions for that company, right? Like doing the scope 2.
And so I read the Greenhouse Gas Protocol back to front. That was how all the calculations were done. That's what qualified clean and not clean, right? And I remember thinking, "this is an insanely influential document," right? It's kind of in the weeds. It's kind of staid, maybe, to some people, but I was...

Chris Adams: There's a kind of tedium around it, here.

Killian Daly: Yeah. But the more I've gotten involved in things like regulation and conversations like that, it's in the annexes, it's in the details, that the big decisions are often made. So I remember thinking back then, this is insanely influential, and some of the ways that we're allowed to claim to consume clean energy are, frankly, disconnected from reality in a way that is just not okay, right? As in, this is far too weak. And definitely, I thought, someday I'd love an opportunity to be able to say, "hang on, can we fix this, please? Can we do this differently? Can we start to respect some sort of basic realities here?" So, yeah, it was definitely one of the drivers of why I joined EnergyTag, which is obviously a nonprofit that has as its mission to clean up accounting, right? And to clean up the way we think about electricity accounting. So, yeah, obviously it's a great honor, I suppose, to be part of those ongoing discussions in the Greenhouse Gas Protocol update process.

Chris Adams: So, we spoke before about how there was a time before there was even a scope 2, right? So the bar was on the floor. Right, and then we introduced the idea that, oh, maybe we should think about the emissions from the electricity.
So that was kind of a leap forward, pushed by one person, that otherwise wouldn't have been in the standard at all, right? And I just realized, now that you mention that, we spoke about oil firms being very involved in this and being very organized in this, and I remember people talking about Shell, and I'm just realising, oh Christ, Shell's in the Green Software Foundation as well. That's something I didn't really think so much about, but they're also there too. So they are organized. Wow. So let's move on. So maybe we could talk a little bit about scope 2 here. The thing I want to get my head around is, can you talk me through some examples of where this falls down a little bit, where it might be stretching, as you put it, the physical reality? Where does it need a bit of work, or need some improvement, that you're looking to address in EnergyTag, for example?

Killian Daly: Yeah. So basically, one way of doing scope 2 accounting is looking at the energy contracts, or the electricity supply contracts, that companies have and saying, well, where are you buying your energy from? How are you contracting for your power? Right? And there's a number of fundamental issues. One of them is around the temporal correlation between when you're consuming electricity and when the electricity you're claiming to consume is being produced. And today, right, we actually allow an annual matching window between production and consumption. Put in simple terms, what that means is that you can basically be solar powered all night long, right? You can take solar energy attributes from the daytime and use them at nighttime, or you could take them from the daytime in March and use them at nighttime in November, or at any other time of year.
And this just does not make sense, right?

Chris Adams: That's not physically how the science works, for a start. Maybe I can just dive into that in a bit more detail, because you've mentioned this idea of certificates. As I understand it, if I am running a solar farm, I'm generating two separate things. I'm generating power, but I'm also generating a kind of greenness. So these are two independently sellable things, which will sometimes be bundled together. That's how I might buy green energy. But under certain rules, they can be separated. So it's the greenness that I'm buying and kind of slapping onto something else to make it green. And if it's at the same time, it's kind of okay. If it's from totally separate times of day, you get what you mentioned, where this thing running at night is claimed to be running on the greenness from a solar farm, which is stretching our imagination, and our credulity, I suppose. Okay, so that's one example of something that you wanted to get fixed. Are there any other ones, or things that you'd point people to?

Killian Daly: I think, you know, the other aspect that's pretty problematic in today's standards, so we've talked about time, and the other big one is space, right? Today we allow consumers to claim to use green energy or clean energy over vast geographical boundaries that really don't respect the physical limits of the grid. So, for example, the whole U.S. is considered to be one region, right? So you can buy green energy attributes produced in Texas and say that you're using them in New York. So you could be 100 percent powered by Texas solar in New York. Or, if you're in Europe, Europe is considered one region.
So you have really absurd cases where you can be powered by Icelandic hydro in Germany, and Iceland has never exported any electricity to anyone. There are no cables leaving Iceland. So, that just doesn't make sense. And this has real consequences, because what we're trying to do is obviously drive consumers to buy green energy. If they're doing it in this way, then they're, in some cases, pretending to buy green energy rather than actually going and buying green energy and incentivizing more production of green energy, and the clean flexibility that's needed to integrate that solar and wind at every hour of the day. So, that time and space paradigm is maybe a good way of thinking about some of the fundamental issues here. There are other ones. I don't know how far we want to go down the rabbit hole, but those are two very high level, and hopefully very understandable, examples of the problems we have with today's carbon accounting.

Chris Adams: Yeah, I think I understand why that would be something we would address, and so presumably this is the thing that EnergyTag's looking to do now. You're basically saying, well, the current system is asking you to make quite spectacular leaps of faith. And there are certain places where you do want to take leaps of faith and be super creative, but accounting might not be where you want to be super creative or super jumpy. That's not always where you want to have your innovation. So you're saying, well, let's make this more reflective of what's really happening in the world, so that we've got some kind of solid foundation to be working on.

Killian Daly: Exactly.
Killian Daly: And just maybe on that point, what we advocate for is not anything radically new, to be honest. Because the way electricity markets work today, the way electricity utilities deliver power to customers, even for, let's say, pure gray electricity on electricity markets, is based on fundamental concepts of time matching. Power markets work on a 60, 30 or 15 minute balancing period. In Australia, it's 5 minutes. In Europe, there are things called bidding zones. So that's the area over which you can buy and sell electricity. And all of this is to capture these fundamental physical limits of the power system. You have to balance it in real time, and there's only a certain amount of grid capacity. And so you need to define areas over which it's reasonable to trade power or not. So all we're saying is, make the green energy market much more like the real power market. We're actually, if anything, trying to make it a bit more common sense, whereas today, we're quite detached from some of those basic limits.

Chris Adams: Ah, I see. Okay. So in fact, there are some comparisons you could plausibly make. There's a push right now for people to treat environmental data with some of the same seriousness as financial data, and apply some of the same constraints. It sounds like something a little bit like that. So if people have to take into account the physical constraints when they're purchasing the actual power part, they should think about applying those same ideas when they're thinking about the greenness of it as well. You can't kind of cheat, even if it makes it a bit easier, for example.

Killian Daly: Yeah, well, exactly. And, ultimately, what are we trying to do here? Is the purpose so that certain consumers can say that they have no emissions, or is the purpose to set up an incentive system so that when those consumers actually do say they have no emissions, they've gone through all of the challenges of grid decarbonization? So they've bought renewables. So they've invested in storage. So, fine, you can consume solar power at nighttime if you put it in a battery during the daytime. They're thinking about demand flexibility. Are they consuming a bit less when there's less wind and sun? These are hard challenges, right? We need to do a lot more of those types of things, and a proper accounting framework will make sure that in getting to zero, you have to think about and tick all of those boxes. Whereas today, you can just be 100 percent solar powered, and obviously that's just not going to lead to the grid decarbonization in the real world that we want to see.

Chris Adams: Maybe if you're in space it might work, but mostly no. Okay.

Killian Daly: Mostly no. Yeah.

Chris Adams: Okay, so we spoke a little bit about why there are some problems with the existing process, and we've hinted at some ways you could plausibly fix this. So would you mind just talking me through some of the key things that EnergyTag is pushing for in that case? Because it doesn't sound like you're trying to do something totally wacky. It's not like you're asking for a significant change, like banning the splitting of the greenness from the power or stuff like that. It sounds like you're still working inside the current ways that people are used to buying power at the moment, right? Maybe you could tell me how it's supposed to work under the newer schemes that you're working with.

Killian Daly: Yeah. So basically, what we're advocating for is that if you're going to claim to use green energy based on how you contract for power, then, well, you have to temporally match, right? So you can only claim to use green energy produced in the same hour as your consumption.
Not in the same year. That's number 1. Number 2 is we need to think about the deliverability constraints, right, and this geographical matching issue. And what we're saying is that, for example, Europe is not a perfectly interconnected grid, and so you shouldn't be able to claim you're consuming green energy from anywhere else in Europe. You should be doing it in the same bidding zone, or at least at a...

Chris Adams: There needs to be some physically deliverable connection to make it possible. Okay.

Killian Daly: Or, fine, you can go across a border, but you have to show that the power actually did come across the border, and that you're not, you know, importing 10 times more certificates than you are real power between 2 countries, right? So we need to have those limits put in place. And another thing that we think is important is that there need to be some sort of controls on individual consumers just buying a load of certificates, for example, from very old assets, and totally relying on those to be 100 percent green. For example, if I'm in Germany, right, and I just sign a deal with a hydro power plant that has existed for 100 years, and I'm time matched, and I'm also within Germany, spatially matched, and I'm claiming to be 100 percent renewable...

Chris Adams: It's not a speedy transition if it's a hundred years old. That feels like it's stretching the definition of being an agent of that. Okay.

Killian Daly: So that's another thing in this 3 pillar framework we sometimes talk about, and that is very important. I think for an existing consumer, it is legitimate to claim a certain amount of that existing power, but that must have a limit, right? You can't just be resource shuffling, "well, I'm the one who's taking all the green energy," and everyone else is left with the fossil. That needs to be controlled also.

Chris Adams: All right. I think I follow that.
So basically: timely, it has to be more or less the same time, right? Deliverable, you need to be able to demonstrate that the power could actually be delivered to that place. And this other one was additional, like, we need to transition, so you can't look at something which is 100 years old or 50 years old and say "I'm using that, I'm fine." There is this notion of bringing new supply on stream to presumably displace or move us away from our current fossil based default, which is not great from a climate point of view, right?

Killian Daly: Exactly. And there's a really good friend of mine in the Rocky Mountain Institute, Nathan Iyer, smart guy. We've worked a lot on US federal policy topics, and he actually has a really good analogy about this stuff. BYOB, right? So, yeah, for these 3 pillars: when you're going to a party, you need to bring your beer to the party on time. You can't bring it yesterday, you need to bring it when the party is happening. You need to bring it to the party, not to another party. And it needs to also be your own beer. You can't just be taking someone else's. It's a bit simplified, but it's a good analogy, I think, for what we're trying to get at here. If we get everyone to start thinking that way and acting on those kinds of fundamental principles, obviously we're going to end up being much more effective in deeply decarbonizing our power systems.

Chris Adams: So, decarbonization of the grid communicated through the power of carbonated beverages, basically. Wow!

Killian Daly: What could be better?

Chris Adams: I think it's, well, it's topical at least. It's still talking about CO2, just on slightly different scales, actually. I quite like that, actually. I might borrow that one myself. Okay. So, there's one thing that you mentioned then.
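For readers who like to see the "timely" pillar concretely, here is a small Python sketch contrasting annual (volume) matching with hourly matching. All the numbers are invented for illustration; real implementations work against granular certificate registries of the kind EnergyTag standardizes.

```python
# Sketch of hourly vs annual matching of green energy attributes.
# All figures are hypothetical, purely to illustrate the idea.

consumption_mwh = {0: 10, 6: 10, 12: 10, 18: 10}  # demand by hour of day
solar_certs_mwh = {6: 5, 12: 30, 18: 5}           # solar attributes by hour

# Annual (volume) matching: only the totals are compared, so midday
# solar attributes can "cover" consumption at midnight.
annual_matched = min(sum(consumption_mwh.values()),
                     sum(solar_certs_mwh.values()))

# Hourly matching: an attribute only covers consumption in the same hour.
hourly_matched = sum(
    min(consumption_mwh.get(h, 0), solar_certs_mwh.get(h, 0))
    for h in consumption_mwh
)

print(annual_matched)  # 40 -> looks "100% solar powered" on paper
print(hourly_matched)  # 20 -> only half the load is genuinely hourly-matched
```

Under annual matching the toy consumer appears fully solar powered; under hourly matching the night-time gap is exposed, which is exactly the incentive to add storage or flexibility that Killian describes.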
So this notion of, we spoke a little bit before about this idea of greenness that could be split. You're still keeping that, so you're not saying there's a ban on selling power that is unbundled from the greenness. That is still a key source of flexibility. Could you maybe talk about that? Because someone who isn't familiar with it might say, "why do we even have this idea of being able to separate these in the first place? Doesn't this make things much more complicated?" I mean, I might be going down into the weeds, but is there a reason for that? Is it just because it would be such a big change, that it's really hard to get people to shift to a new way of doing things, or what's the thinking around that part?

Killian Daly: Well, basically, right, anytime you want to claim or have a contract, whether that be an unbundled or a bundled PPA contract...

Chris Adams: Power Purchase Agreement, right?

Killian Daly: Yeah, like a long term power purchase agreement, for example, right? So anytime you have a contract for a specific type of electricity, you need an accounting mechanism, or a tracking mechanism, that sits on top of the grid and allocates generation to consumption. Because, obviously, the way that the grid actually works is that electrons are just oscillating around the place. There's not really a methodology to physically trace that this individual electron started here and went there, right?
And so, much like power markets do, and they have mechanisms for contractually allocating power between different buyers and sellers, as long as it's matched in time and space, which is a fundamental premise of how our power markets work, we're basically borrowing that concept, but attaching the greenness attribute.

Chris Adams: Ah.

Killian Daly: And saying, provided that this system of detaching greenness from the power is sufficiently respecting temporal and geographical matching requirements, deliverability requirements, then that should be the basis of legitimate green claims. And that essentially creates a market mechanism for financing renewables. If you don't do that, then you cannot have a green power market, basically, right? You don't have a way of differentiating buyers who have contracted for green power from those who are not doing anything. So, for example, a few years ago in Air Liquide, we didn't look at what contracts we were sourcing. We just did this location based accounting, where you take an average of all the generation in the grid, which is another way of looking at electricity emissions, and a very valid way of doing it. But one disadvantage it has is that it basically leaves all consumers passive. They have no incentive to do anything in terms of driving electricity decarbonization. So that's why we need these mechanisms of essentially having tracking...

Chris Adams: Systems. Oh, okay, I see. So, if there's no recognition, if I'm working at a large company, why would I choose to buy something green if I can't be recognized for doing that green step?
And so the downside of the location based approach is that, yes, it gives you one single answer, but it takes away this idea that organizations which have, honestly, massive amounts of resources can influence or speed up a transition. It seems to be trying to respect that reality, or at least acknowledge that this is what we expect of organizations if they're that powerful.

Killian Daly: And one person, I know you've had Olivier Corradi from Electricity Maps on before, they've done some very good blog series on this topic. They obviously have insanely deep knowledge of grid emissions, really no one better that I've come across. And they did a very simplified explanation of this stuff. You have the location based method, which is maximizing physical accuracy, and then you have the market based method, which is trying to maximize incentives and financing. And what this 24/7 accounting framework that we're advocating is basically trying to do is make those things meet in the middle, right? Today we have a market based system that is too focused on, I would say, flexibility, on making it easy for people to say they're green, and so has led to very valid criticism. And what we're trying to do now is bring that market based mechanism back closer to the physical realities of the grid,

Chris Adams: Oh, I see.

Killian Daly: but keeping the incentive system, because if you don't have that, then, well, I don't really see the point in even doing the exercise.

Chris Adams: Okay. So there are two things that I wanted to see if I could dive into a little bit on that then.
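The location based vs market based distinction discussed above can also be sketched numerically. The figures below are made up for illustration; the GHG Protocol's Scope 2 Guidance defines the real methods (including the residual mix used here for uncontracted consumption).

```python
# Sketch: scope 2 emissions under location-based vs market-based accounting.
# All figures are hypothetical.

consumption_mwh = 1000
grid_avg_intensity = 0.4       # tCO2/MWh, average of all generation on the local grid
contracted_green_mwh = 600     # MWh covered by green attribute certificates / PPAs
residual_mix_intensity = 0.5   # tCO2/MWh assumed for uncontracted "leftover" power

# Location-based: everyone uses the grid average; contracts change nothing,
# so consumers are left passive.
location_based = consumption_mwh * grid_avg_intensity

# Market-based: contracted green MWh count as zero-carbon; the remainder
# is assigned the residual mix, so buying green power is rewarded.
market_based = (consumption_mwh - contracted_green_mwh) * residual_mix_intensity

print(location_based)  # 400.0 tCO2
print(market_based)    # 200.0 tCO2
```

The 24/7 framework Killian describes keeps the market based incentive but tightens which certificates may count, by requiring hourly and deliverable matching.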
So it sounds like this whole notion of not having this stuff tied to each other is to reflect the fact that people have all these complicated ways to purchase power in the first place. So in my world, as someone working as a cloud engineer, right, I might buy computing by the hour, but I might also buy it in advance for three years, for example, for a lower price, and that provides a bit of stability for whoever's running my server. But this is an example of me having multiple different ways of being able to buy something, and essentially, some of that unbundling is actually trying to capture the fact that there are all these complicated ways to arrange to pay for something, and this is one way that we can use to value some of the flexibility and stuff you said before. So for example, you spoke about how you can't run something on solar power at night, right? But if you had a battery, you can capture that and then use the battery, a bit like a time machine, to run at night, almost, right? But that's more expensive than just making some claims. So you need to have some way to recognize the fact that it takes a battery and a bunch of extra smarts to run something at night from that. That's what you're trying to go for with that, right? Killian Daly: Yeah, exactly. And again, it's basing things on how power markets already work: they have contractual mechanisms for allocating power between generators and consumers. I think the biggest issue with unbundling, so, selling the energy attributes and the power to different people, actually, I think the fundamental problem is the lack of time matching and deliverability requirements. That's where unbundling has gone wrong. Because it said, "we're going to take the green attribute from this energy in Norway, and we're going to allow it to be used at any time of year, anywhere in Europe." That's insane. 
That's where it starts to get completely insane. I don't have any particular problem with you producing it in one hydro plant and selling the power into a power pool, and then that being consumed in Norway in the same hour. That's literally how power markets work on a short term power market. Everyone bids into a common pool. And why not just put the attributes into the same pool? Well, they all have the same properties anyway, so it doesn't make a difference. It's the only way you're ever going to have liquidity, right? So I don't see any fundamental issue with that. The fundamental issue is with the annual matching and the... Chris Adams: the physics beyond breaking point, essentially. Killian Daly: And that's, I think, why unbundling has got such a bad name, right? And I think that's actually been fair, but I do think that it's not the bundling or unbundling that's necessarily the issue, it's kind of the... Chris Adams: like those three pillars you mentioned. Okay, gotcha. Thank you for indulging me as I went down that thing, because I didn't know the answer to that, and I've always been wondering. Okay, so, we spoke about this thing called EnergyTag. We've spoken a little bit about how it's supposed to work and how it's basically an improvement on some of the approaches before. And maybe we could talk a little bit about who's using it? Is anyone adopting it? Maybe we could go from there, because this sounds like a cool idea, but there are many cool ideas that no one is paying attention to. And I suspect that would be quite a demoralizing conversation if that was the case. So, yeah, I mean, who's using this, and are there any big name adopters you might point people to, or anything like that? Killian Daly: Yeah, so two of the leading ones that come to mind immediately, especially for software folks like yourselves, are Google and Microsoft. They have 24/7 clean energy targets by 2030. 
Basically, they're committing to buying clean power for every hour their data centers are consuming electricity, everywhere in which they're operating. So they're two of the most, I would say, advanced, ambitious corporate climate commitments in terms of scope 2 electricity procurement, at least. And they're obviously two major buyers. And they've been signing some really interesting deals as well. So there are gigawatts now already of these 24/7, or close to 24/7, PPAs signed, 80, 90 percent firmed portfolios of renewables, and that's game changing, right? That's something we've seen emerge in the last few years, where traditionally the way of buying renewables has been "I'm going to buy a solar contract, and I'm going to blend that into whatever I'm buying elsewhere." And that's fine, right? But it's only giving you maybe 20 percent of your electricity on an annual basis. Now we're seeing new contract structures that are blending together solar, wind, batteries, and getting maybe 80, 90 percent of a flattened, Chris Adams: so that's what you mean by firmed then. So firmed is this idea that, if it's not firmed, it's like I'm going to buy the same amount totally without thinking about when it's matched, but if it's firmed, then I'm taking the steps necessary so that I can make a much more credible claim that the power I'm using is coming from generation or from stored amounts of power or something like that. Ah, Killian Daly: And as I said, there are gigawatts of deals done already to date. Are there people doing this hourly matching stuff? Yes, absolutely. Check out our website. There are 30 projects there, with millions of megawatt hours of hourly matching being done. So there are 40 organizations or something doing it across 5 continents. This is not rocket science, right? This is literally taking meter data, that's very common, hourly production and gen data. 
You could do it on an Excel file with three columns if you wanted, matching those things together and seeing where we're at. So it's absolutely demonstrated, and leaders are doing it. Is everyone doing this? Is this now the status quo way of doing it? No, absolutely not. And that's what we work every day to try to change, right? So we're still, I would say, relatively in the early days of this transition, but as far as I'm concerned, it's kind of inevitable, for credibility reasons, transparency reasons, and also for pretty fundamental economic reasons. Companies going out there and committing to buy loads of energy that is unmatched to their consumption profile, they're leaving themselves open to a lot of risks. So, what if you say, okay, I'm just going to buy a load of solar that has no connection to how I actually consume electricity? You're leaving yourself open to a lot of the volatility that we're seeing in electricity markets today. A lot of super high prices in the evening, for example, when your solar contract is not delivering you anything, then what do you do? Right? You have all this gas volatility and exposure. So it's not just about decarbonization, it's also about things like electricity price hedging. So there are various fundamentals, I think, that mean that we are going to move in this direction. Chris Adams: Okay, so if I understand that final point you've made: if I want to do this kind of matched thing, for example, or if I want to be buying some power, one of the advantages of doing a longer term deal is that there's a degree of stability. So let's say, I don't know, one country decides to invade another country and gas prices go through the roof. I'm somewhat insulated from all that stuff, so that it's not gonna massively destroy, it's not gonna make it impossible to kind of pay my own bills, for example. 
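The "three columns" Killian mentions, an hour stamp, metered consumption, and contracted green generation, really are all that hourly matching needs. A minimal sketch in Python, with purely illustrative numbers, of how an hourly matched score differs from annual-style matching:

```python
# Hypothetical three-column meter data: hour, consumption (MWh), contracted
# green generation (MWh). All numbers are illustrative.
hours = [
    ("00:00", 10.0, 4.0),
    ("06:00", 12.0, 9.0),
    ("12:00", 11.0, 15.0),  # midday solar surplus
    ("18:00", 14.0, 6.0),
]

# Hourly (24/7) matching: only generation up to actual consumption counts in
# each hour; a midday surplus cannot offset an evening deficit.
matched = sum(min(cons, gen) for _, cons, gen in hours)
consumed = sum(cons for _, cons, _ in hours)
hourly_score = matched / consumed

# Annual-style matching compares totals only, so the surplus hides the deficits.
annual_score = min(sum(gen for *_, gen in hours) / consumed, 1.0)

print(f"hourly matched: {hourly_score:.0%}, annually matched: {annual_score:.0%}")
```

With numbers like these, the annual total always scores at least as high as the hour-by-hour figure, which is the gap between "a fake 100" and "a real 70" discussed above.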
And we've seen examples of that over the last few years, for example. So there's a bit of insulation from that kind of stuff. Yeah. Killian Daly: Exactly. So now we do get into contracting mechanisms here. Basically, if you sign one of these PPAs and you commit, let's say, to a 10 year fixed price for power, and you're committing to a firmed profile, let's say 90 percent matched, that has a very significant hedging value. It means that you've basically fixed a lot of your power price, so no matter what happens, if there's a massive spike in gas prices and power prices go through the roof, you're protected against that. We actually worked on a really interesting study on this a couple of years back, or 18 months ago, with Pexapark, who are PPA analysts, and they basically showed that a 10 megawatt consumer in Germany could save over 10 million euro in the best of cases, and at least millions of euro in a given year, by signing these 24/7, or close to 24/7, power purchase agreements with clean electricity assets. Because one thing that clean energy has as an advantage in an ever more uncertain world is that the costs are basically known up front. You know how much money you need to build a wind turbine or a battery up front. It's all capex heavy. And that means that renewables can basically give you a fixed price up front where, honestly, gas cannot, because most of their costs are operational. It's about buying the gas when you need it. Chris Adams: And there's a constant flow... Okay, I guess with the sun, I mean, it's not like there's a Mr Burns style blackout of the sun kind of thing, right? 
If you're relying on something that no one has control over, no one can kind of blockade the wind or blockade the sun. That's where some of the stability is coming from, right? Killian Daly: Yeah, exactly. Right. So you have those things, and you know that those fuel sources basically don't cost anything, right? So all your costs are in construction, materials, all things you basically know largely upfront, and that does enable you to provide long term contracts, typically way beyond the terms that fossil fuel generators can offer. And so, for the consumers willing to take that long term price risk, it can really offer significant hedging benefits over the alternatives. Chris Adams: Over buying that on, like, the spot market, as it were, or buying something just on the regular market. Okay. All right. So you mentioned a few large companies doing that stuff, and outside of technology, I think it's the federal government; it sounds like you said one or two things which are quite interesting. There is this idea that 100 percent is obviously really good, right? And that's what you want to head towards. But given there are some places where they're not shooting for 100 percent straight away, for example, they might be going for 50 percent or 60 percent or something like that, is this something that is kind of okay to do, or that's okay to start at? Cause I think I heard the US government had a plan for something about this by 2030 or something. Killian Daly: Yeah. So basically, we started the conversation talking about accounting. So I think the first thing you need to do is get the accounting right, so that when you say 50, it means 50, or when you say 100, it means 100. Because if you're saying 100 and it means 50, then, well, you're screwed, right? You have a bad system. 
So I think actually being at 70 percent renewable, but saying that out loud, Chris Adams: 70%. Yeah. Killian Daly: and addressing the basic fact that you're only there, that's much better than saying I'm 100 percent renewable on some annualized basis and kind of misleading people about where you're at with decarbonization. Chris Adams: So it's better to be a real 70 than a fake 100, basically, yeah? Killian Daly: Yeah. And so you have electricity suppliers, for example; there's Good Energy in the UK, Octopus Energy in the UK, most of the electricity suppliers now in the UK, in fact, are offering these hourly tariffs. And, I don't think any of... Chris Adams: it was only one or two that did that. Whoa. That's... Killian Daly: Now, I think this year it'll become more of a norm, where they will offer this alongside their hundred percent renewable tariff. And none of those hourly tariffs are gonna start off being a hundred percent renewable, but it's bringing that extra bit of transparency, which I think is great. And the likes of Good Energy, they're already offering it to thousands of customers, right? This is not just the Googles and the Microsofts with their long term targets on this. This is already being offered to thousands of customers around the world, because the electricity suppliers are basically doing all the work. They're just giving the consumer the number on some dashboard saying, this is how much matching you have. If you look at the Octopus Energy example, it's quite interesting. They have a tariff called Electric Match for some of their B2B customers, and they're basically reducing your price of power when you're more matched. So that's quite cool, yeah, they're charging you less the more your demand is matched to their generation, right? And I think that's quite a cool gamification of this. 
They're saying, basically, try to consume when there's more wind and sun in the UK, you'll be more matched and we'll cut your rates, because obviously it costs them less to deliver that in the first place. So that's the type of cool mechanism. Chris Adams: So, I swear, every single time I speak to energy people, they say, "oh yeah, the price is totally changing." Then I think one level up, when we're paying for cloud, and it's the same price all the time. Someone's making a bunch of money off us doing all the kind of carbon aware computing stuff, because if the price is going low, I would expect to see those numbers go low. This feels like something we might want to have a conversation about inside the tech industry then, if there are savings being made here, because it feels like it would be nice if those were passed on, I suppose. So, all right, let's speak, go on, Killian Daly: I think, just very importantly, there's one fundamental truth that we're going to see, and it's already the case in some parts of the world, but this is going to be an essential truth of the transition: the more renewables you have, the more volatility you're going to have in power prices. And the more flexible you can be in your consumption, the more rewarding it is going to be economically. If you can consume at the times of day when there's loads of wind and sun, power prices are going to be very low and you're going to get rewarded for that. If you can't, if you can only be base load, then that is going to cost you. Chris Adams: Ah, okay, alright. Okay. Alright, that's a useful thing to take into account. And so, we spoke before about scope 2 and stuff like that, and you spoke about this idea that you're defining this standard. 
Now, EnergyTag is a standard in its own right, but, as I understand it, it's not like you're stepping outside of this. You are still engaging with the protocols and all the stuff like that right now, yeah? Killian Daly: Basically, so yeah, EnergyTag is a nonprofit. We do a couple of different things. We're obviously focused on this area of electricity accounting, electricity markets, better green energy claims and all that. And so, yeah, one of the things that we do is we have a voluntary standard for hourly energy tracking, because one of the blocking points we have today is that the way we do this tracking with these energy certificates, it tends to be on a monthly or even an annual basis globally. And sometimes we don't have the information on the certificates to do this hourly matching. So we're trying to debottleneck that particular technical issue, and think about how we track through storage, doing some novel things there. So we have a standard for that, but that's only one of the building blocks, I would say, of this much larger question of how companies do electricity accounting, or how they do carbon accounting more generally. Our standard is there to work on that specific topic, but actually a lot, if not most, of what we do today is working on policy advocacy around the world, working on global standards and basically advocating for those to change, because ultimately it's the meta-levers, regulations, standards. Once they change, then we're just there to help technically put that all together with some voluntary standards, as long as they're needed. But it's not our aim to be the world's next Greenhouse Gas Protocol. That's really not in our wheelhouse. What we want to do is make sure that global standards and regulations are as good as possible. Chris Adams: Oh, I see. Okay, so let's go for a concrete example of this. 
So, in Europe, if you want to do a hydrogen project, which is, in some ways, a bit like an AI project, in that it's a building that uses loads and loads of power in one place, right? Really dense. If you're going to make green hydrogen, for example, you're taking water and adding loads of electricity to split it, and that's incredibly energy intensive. So if you want the green hydrogen to be green, you should probably only use green energy. And one of the things you told me about before was, yes, we won that fight, so that if people want to get any of the subsidies from the government to do this green energy thing, they need to have that three pillars style approach, right? That's an example of your strategy, yeah? Killian Daly: Yeah, so this is actually what really brought me into EnergyTag; it was a Greenhouse Gas Protocol thing, but basically I was at one of the world's largest hydrogen producers, right? And so I got put onto this topic a few years ago, which I found incredibly important and fascinating and maybe not well enough understood. When we're going to produce hydrogen using electricity, we need to really make sure that the electricity is squeaky clean, because of the efficiency issues and losses that you just inherently have with electrolysis. And so, just to give a quick example, Jesse Jenkins' lab in Princeton University, a guy called Wilson Ricks, who is a rock star of power system modeling, they modeled this, right? And they showed that in the US, if you basically use today's carbon accounting rules, this annual matching stuff, and you built out a hydrogen sector based on those rules, you would have hydrogen that is twice, maybe even three times as bad as today's fossil fuel hydrogen production. And you'd be calling it clean and subsidizing that production. Totally insane, just literally wasting money. And so it's actually really important. 
Billions of dollars of subsidy are going to go into hydrogen in Europe and in the United States. And so we worked a lot with NGOs, advanced companies and other partners to advocate for these strong requirements on green electricity sourcing for hydrogen, both in the US and also in Europe, and we won on both fronts, which has... Chris Adams: Oh, the US one as well! Killian Daly: Yeah. Yeah. And both of those are in legislation now. Chris Adams: They're in! Yay science! Killian Daly: Yeah, that's the legal way now to qualify for the tax credit in the US. In Europe, there's a phase-in period on the hourly part to 2030, so in 5 years or whatever. But anyway, projects built now have to be designed to comply with that. And so, Chris Adams: if you know it's going to be in the law in five years, you're just going to make sure you... Killian Daly: You're going to start doing it now, right? More or less. Yeah, so, obviously, this is hundreds of millions of tons of CO2 per year on the line between good and bad rules, and that's a concrete example of why these things matter, right? Accounting sounds boring sometimes. I definitely thought it was boring before I realized, "Oh my God, I'm working for a huge power consumer and this is changing everything." So yeah, it's definitely super, super important that we get this stuff right. Chris Adams: Okay, so we spoke about, it sounds like you've done the work with Air Liquide and you've essentially laid the groundwork to move from a fossil based hydrogen thing to, hopefully, a greener way of making hydrogen, which ends up being used in all these places. And now, okay, you said Google and Microsoft, about the same power usage as Air Liquide in a single year. Maybe that might have changed, but back then. So it looks like we're seeing some promising signs for that over here. 
So maybe, I mean, if we want to see that, what do we need to see at a policy level? Do you need to have governments saying, "if you want to have green energy for data centers, you need to be at least as good as the hydrogen industry"? Is it something like that you need to do? Because what you've described for the hydrogen thing sounds awesome, but I'm not aware of that in the IT sector yet. That's something that I haven't seen people doing yet. Killian Daly: That is also coming, right? So hydrogen has just been the first battleground, or the first place, I think. Interestingly, actually, on the 14th of January, just before the inauguration of Donald Trump as US president, the Biden administration issued an executive order, which hasn't yet been rescinded, basically on data centers on federal lands, and in that they do require these 3 pillars. So they do have a 3 pillar requirement on electricity sourcing, which is very interesting, right? I think that's quite a good template. And I think we definitely need to think about, okay, if you're going to start building loads of data centers in Ireland, for example: 20 to 25 percent of electricity consumption in Ireland is from data centers. That's way more than anywhere else in the world in relative terms. Yeah, there's a big conversation at the moment in Ireland about "okay, well, how do we make sure this is clean?" How do we think about procurement requirements for building a new data center? That's a piece of legislation that's being written at the moment. And how do we also require these data centers to do reporting of their emissions once they're operational? So the Irish government is also putting together a reporting framework for data centers, and the energy agency, the Sustainable Energy Authority of Ireland, SEAI, published a report a couple of weeks ago saying, you know what, they need to do this hourly reporting based on contracts bought in Ireland. 
So I think we're already seeing promising signs of legislation coming down the road in other sectors outside of hydrogen. And I think data centers is probably an obvious one. Chris Adams: So people are starting to win. Wow, I didn't realize that. I knew somewhat about that executive order, there was a bit of buzz about it, but I didn't realize that set the precedent. So, yeah: we should do what that massive industry over there is doing, because that's now the new baseline, that's where the bar should be. We should do that as well, basically. Killian Daly: Exactly, because those hydrogen rules, what the whole debate was actually about is: what is clean electricity procurement? What does that mean? What does it mean to use clean electricity? And that has been defined now in the hydrogen rules, and that can be copied and pasted to any large new load. If you want it to be clean, we already know the answer. It's in legislation. Chris Adams: It's how to tell when energy is green. Killian Daly: MIT, the IEA, the who's who of energy experts have all modeled this, and they've all found that this is the way to do it. So there's a template there, right? And if you're going to go against that, well, obviously you're sacrificing the integrity of your accounting schemes. Chris Adams: Wow! We spoke about how to tell when energy is green, and we seem to be ending on a high; I didn't realise we'd actually got to that. That's really awesome. You've really made my day, Killian. Thank you so much for coming on and diving into the minutiae of carbon accounting for electricity, but also ending it with a slightly less depressing piece of news, which I'll take in this current political climate. Killian Daly: Just to interject before I say goodbye, it's good to end on a positive note, I suppose, in this mad world we live in. 
There was a project announced recently, I think people should go check it out, in the Middle East, in the UAE, where basically for the first time they're going to deliver around the clock solar power. So 1 gigawatt of solar, all night long, because they're basically building a massive battery and a huge solar farm, and basically all year round it's going to deliver green electricity at under 70 US dollars per megawatt hour, which is extremely competitive. So I think solar and storage, what they're going to do together, is going to change the world, right? I really think that is going to happen faster than people think. They're going to start to kill gas. So, yeah, I think green energy economics, despite what politicians will want to do with their culture wars, will at the end of the day, hopefully, answer some of the questions we're trying to solve here. So, yeah, thanks so much for having me on. It's been a real pleasure. Chris Adams: Brilliant, thank you so much for that, mate, and may the fossil age end. That's so, so cool to actually see that. I totally forgot about the Masdar thing, which is the city. Yeah, and we'll share a link to that so people can read about it, because if you care about, I don't know, continued existence on this planet, then it's probably a good one to read about. Killian, this has been loads of fun, thanks a lot, mate, and next time I'm in Brussels I'll let you know, and maybe we can catch up, have a shoof or something like that. Take care. Killian Daly: Yeah. A hundred percent. Thanks. Bye. Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. 
It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode. 
Feb 13, 2025 • 28min

Backstage: Impact Framework

This episode of Backstage focuses on the Impact Framework (IF), a pioneering tool designed to Model, Measure, siMulate, and Monitor the environmental impacts of software. By simplifying the process of calculating and sharing the carbon footprint of software, IF empowers developers to integrate sustainability into their workflows effortlessly. Recently achieving Graduated Project status within the Green Software Foundation, this framework has set a benchmark for sustainable practices in tech. Today, we’re joined by Navveen Balani, Srinivasan Rakhunathan, the project leads and Joseph Cook, the Head of R&D at GSF and Product Owner for Impact Framework, to discuss the journey of the project, its innovative features, and how it’s enabling developers and organizations to make meaningful contributions toward a greener future.Learn more about our people:Navveen Balani: LinkedInSrini Rakhunathan: LinkedInJoseph Cook: LinkedInFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterResources:Impact Framework | Green Software Foundation [00:00]The SCI Open Ontology | Green Software Foundation [04:27]SCI for AI - Addressing the challenges of measuring Artificial intelligence carbon emissions | Green Software Foundation [06:57]SCI Guidance [12:07]CarbonHack [13:03]Impact Framework Github Page [17:58]IF Explorer [20:18]IF Community Google Group [23:42]Events:Kickstarting 2025: A Community-Driven Sustainable Year (February 13 at 5:00 pm CET · Utrecht): [24:21] Advocating for Digital Sustainability (February 19 at 6:00 PM GMT · Hybrid · Brighton): [25:10]Day 0: MeetUp Community GSF Spain (February 20 at 6:00 PM CET · Online): [25:33]Digging Deeper into Digital Sustainability (February 20 at 6:00 pm AEDT· Melbourne): [25:59]Practical Advice for Responsible AI (February 27 at 6:00 pm GMT · London): [26:27]GSF Oslo - February Meetup (February 27 at 5:00 pm CET · Oslo): [26:46]If you enjoyed this episode then please 
either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github and LinkedIn!TRANSCRIPT BELOW:Chris Skipper: Hello, and welcome to Environment Variables, where we bring you the latest news from the world of sustainable software development. I'm the producer of this podcast, Chris Skipper, and today we're excited to bring you another episode of Backstage, where we peel back the curtain at the GSF and explore the stories, challenges and triumphs of the people shaping the future of green software. We're no longer gatekeeping what it takes to set new standards and norms for sustainability in tech. This episode focuses on the Impact Framework, also known as IF, a pioneering tool designed to model, measure, simulate, and monitor the environmental impacts of software. By simplifying the process of calculating and sharing the carbon footprint of software, IF empowers developers to integrate sustainability into their workflows effortlessly. Recently achieving graduated project status within the Green Software Foundation, this framework has set a benchmark for sustainable practices in tech. Today, we have audio snippets from Navveen Balani and Srinivasan Rakhunathan, the project leads, and Joseph Cook, the head of R&D at GSF and product owner for Impact Framework, to discuss the journey of the project, its innovative features, and how it's enabling developers and organizations to make meaningful contributions toward a greener future. And before we dive in, here's a reminder that everything we talk about will be linked in the show notes below this episode. 
So without further ado, let's dive into the first question about the Impact Framework for Navveen Balani. Navveen, the Impact Framework has been described as a tool to model, measure, simulate and monitor the environmental impacts of software. Could you provide a brief overview of how this works and the inspiration behind creating such a framework? Navveen Balani: Thank you, Chris. And thanks to all the listeners for tuning in. Let's first understand the problem we're solving with the Impact Framework. Software runs the world, but its environmental impact is often invisible. Every CPU cycle, every page load, every API call, these all contribute to energy consumption, carbon emissions, and water usage. Yet, without the right tools, measuring and managing this impact remains a challenge. This is where the Impact Framework comes in. It's an open source tool designed to transform raw system metrics, like CPU usage or page views, into tangible environmental insights, helping organizations take action. Built on a plugin based architecture, it allows users to integrate, customize, and extend measurement capabilities, ensuring scalability and adaptability. More importantly, the Impact Framework helps realize the software carbon intensity specification, making sustainability reporting transparent, auditable, and verifiable. Every calculation, assumption, and methodology is documented in a manifest file, ensuring that impact assessments are replicable and open for collaboration. At its core, the Impact Framework is built on a simple yet powerful idea. If we can observe it, we can measure its impact. 
And once we can measure it, we can drive real change, reducing emissions, optimizing resource use and building truly sustainable software.

Chris Skipper: What were some of the most significant technical or organizational challenges you faced during the development of the Impact Framework, and how did you and the team overcome them?

Naveen Balani: The Impact Framework wasn't just built, it evolved. It was shaped by real world challenges, lessons learned, and the need for a scalable, transparent way to measure software's environmental footprint. The foundation of the Impact Framework was laid through previous projects and ideas, starting with SCI Open Data, which tackled the lack of reliable emissions data, and SCI Guide, which helped organizations navigate different datasets and methodologies. Another critical component was the SCI Open Ontology, which defines relationships between architecture components, establishing clear boundaries for calculating measurements.

Alongside these foundational efforts, real world use cases from member organizations applying software carbon intensity measurement played a crucial role. These practical implementations tested SCI in diverse environments, refining methodologies and ensuring that SCI calculations were not just theoretical, but applicable and scalable across industries. But data alone wasn't enough. We needed to scale measurement across thousands of observations. Sustainability assessments had to be continuous, automated, and seamlessly integrated into software development. This led to key innovations like aggregation, which enables organizations to condense vast amounts of data into meaningful, structured insights, rolling up emissions data across software components to provide a holistic, system-wide view.

Technology, however, was just one piece of the puzzle. Adoption was equally critical. To accelerate real world impact, we opened up the Impact Framework to our annual Carbon Hackathon event.
There, teams worldwide built projects that pushed its capabilities. This was a turning point, validating its flexibility and refining it through community-driven development. At its core, the Impact Framework is built on transparency. Unlike black box solutions, every input, assumption, and calculation is fully recorded in a manifest file, making assessments auditable and verifiable. This commitment to openness has been crucial in building trust and driving adoption.

Chris Skipper: Looking ahead, what are the next steps for the Impact Framework? Are there specific new features or partnerships on the roadmap that you're particularly excited about?

Naveen Balani: That's a great question, Chris. Looking ahead, the Impact Framework is entering an exciting new phase with a major focus on expanding measurement capabilities for AI. Right now, we're working on the SCI for AI specification, which extends software carbon intensity to both classical AI and generative AI workloads. Measuring AI's environmental impact comes with a new level of complexity. AI isn't just another software workload. The environmental footprint varies significantly depending on whether you're training a model from scratch, fine-tuning a large language model, or simply using an AI API like ChatGPT or Gemini. Each scenario has different compute demands, memory requirements, and energy consumption patterns, making standardized measurement both challenging and essential.

Through the Impact Framework, we aim to tackle this by developing new plugins and contributions that enable precise measurement of AI-related energy use, hardware efficiency, and emissions across training, fine-tuning, and inference workloads. These capabilities will collectively evolve through community participation, with researchers, developers, and organizations contributing to refining methodologies, expanding datasets, and ensuring that AI measurement remains transparent, auditable, and standardized.
This collaborative approach will allow organizations to quantify, compare, and optimize their AI workloads, making sustainability a key consideration in AI deployment. Beyond AI, we are also exploring new partnerships to further enhance the Impact Framework's adaptability. Collaboration with cloud providers, software vendors, and sustainability researchers will be crucial in ensuring that the framework evolves alongside industry needs. Our goal is to make environmental impact measurement not just an option, but a fundamental part of software and AI development at scale.

Chris Skipper: Moving on, we have some questions for Srini. Srini, IF emphasizes composability and the ability to create and use plugins. Could you explain how this innovative approach has enabled more accurate and flexible environmental impact calculations for different types of software environments?

Srini Rakhunathan: Absolutely. The Impact Framework's emphasis on composability and the use of plugins is actually a game changer for environmental impact calculations. The framework is highly modular, allowing users to create and integrate various plugins. What it means is you can tailor the framework to fit the specific needs of your software, and it doesn't matter what type of software you have, whether it's cloud based, on-prem or hybrid. What is also advantageous is that the plugin ecosystem covers a wide range of tasks. For example, it has something around data collection, it can do impact calculation, it can do reporting. It can also do very specific tasks like math functions and aggregation functions. What this means is you can mix and match plugins to create a mashed-up pipeline that reflects your environment, whether you are running your software on web, cloud, or mobile, it doesn't really matter. As long as you know what your software boundaries are, you will be able to combine these plugins and create your own pipeline, if you will.
And that pipeline will help you create your calculation pipeline, which can either run one time, or run as a batch, or run based on certain triggers. What it also means, and if you notice, there are also manifest files, and we will be talking more about them later in this conversation, is that the manifest files ensure that you have a repeatable way of calculation. I mean, you mash up these different plugins and you create a pipeline and you embed it in a manifest file, and it's repeatable. So what I think is this framework's capability of composability and plugins can help you make very accurate impact calculations.

Chris Skipper: How have collaborations with organizations like Accenture and Microsoft, as well as the open source community, contributed to the success of the Impact Framework? Are there any standout moments or partnerships you'd like to highlight?

Srini Rakhunathan: Thanks, Chris. That's a great question. So the cornerstone of the success of Impact Framework has been collaborations. And this has been ongoing from the time this project was conceptualized. Bear in mind that when Naveen, who's also here with us, and I, along with Joseph and Asim, started thinking about the project, the initial vision of the project was very different. So we started off with something called SCI Guide, where we wanted to collate datasets across the open source community to help calculate emissions from software.
And we built the SCI Guide, and that transitioned into something called CarbonQL, which is a primitive version of what we see today in the Impact Framework, which is more like: how do we make sure that it is easier for users or developers to calculate emissions from software? The learnings that Naveen, Joseph, Asim and I went through to come up with the initial version of Impact Framework, and the amount of work that the team has put together to get it to graduation state, is amazing, and it speaks volumes about the collaboration that has gone into the building of the tool.

One particular highlight I want to call out is that every year, GSF organizes what is called the CarbonHack. And in 2024, the CarbonHack focused on getting the open source community to come and build tools on top of Impact Framework, either extensions of the tool, or content, or newer areas where the Impact Framework can be used. And you would be amazed at the amount of contributions that came in, and newer use cases were also identified that looked at calculating impacts not just for carbon, but for water and other resources. That, I believe, was a standout moment for the tool.

Chris Skipper: The IF documentation highlights the use of a manifest file and a CLI tool to calculate environmental impacts. Could you walk us through how these tools work and how they lower the barriers for developers to adopt sustainable practices?

Srini Rakhunathan: Definitely, we can talk about both the CLI tool and the manifest file. These are actually cornerstone capabilities built within the Impact Framework, and they help us to calculate the environmental impacts.
What happens is, the manifest file contains a list of the software's infrastructure boundary, encoded as YAML. It's in the standard YAML format, and it contains every component that is part of the software, whether it's front end, middle tier, back end, database, or API, everything, encoded as: what's the hardware used, what's the utilization, what's the telemetry involved. So much so that it can be used as an input to the Impact Framework CLI tool that calculates emissions. The use of the file enables transparency and rerunability. That means it allows anyone to re-execute the manifest file, and everyone will come up with the same calculations.

The second piece that we spoke about, the CLI tool, is a command line tool, which means it can be run on any environment. It processes the manifest file and computes the environmental impacts. So the way it works is developers pass the path to the manifest file to the CLI tool, and it'll take care of the calculations. The tool has capabilities to do phased execution, and that allows efficient and flexible use of the framework.

Chris Skipper: And finally, what lessons have you learned from working on this project that might benefit other teams looking to build tools or frameworks for sustainability in tech?

Srini Rakhunathan: Thanks for asking this question. At an overall level, I would like to respond by focusing on lessons learned from two aspects. The first is the execution model, and the second is the technical design. In the execution model space, this project is a good example of how open source collaboration works. The team used GitHub extensively, and most of the meetings were asynchronous. The engineers and the product managers and everyone who worked on the project worked through GitHub, and collaborated extensively using the open source tools available, which is a great model for scale.
The second aspect we should look at from an execution model, and which is a success story here, is how the team used customer feedback as input to make the product better. There were constant sessions with many customers, with whom the team worked to understand the requirements for building a tool that could help them calculate emissions, and that feedback went into the process and into the backlog to make the tool better.

The second part of the lessons learned is on technical design. And here I would want to call out the whole concept of building a plugin ecosystem and making it composable: you deliver a set of plugins to the community, like a base framework, and then you allow extensibility. That's a great model, which can help tools that use sustainability as a calculation engine. And then the second piece, which is equally important: as you do this, you also make sure that you have extensive, good documentation that can help anyone who's coming on board understand the framework and run with building a new plugin as soon as possible. If you go to the IF GitHub site, you will find a link to the docs page. And if you read through the docs, they're very self-explanatory, and will allow anyone who's interested in building a plugin to do that in the fastest possible time. So these are, in my mind, the lessons learned, both from an execution model and the technical design aspect.

Chris Skipper: Moving on, we now have some questions for Joseph. Joseph, the Impact Framework recently achieved the status of a graduated project under the GSF.
What does this milestone mean for the project, and what were some of the key factors that led to its graduation?

Joseph Cook: The Impact Framework graduation was a huge milestone because it represents the moment when the project is considered sufficiently mature that it no longer needs to be incubated and instead it can largely be handed over to the community. We consider the software to be feature rich and stable enough that people can integrate it into their systems, and in order to graduate, the project had to meet a quite stringent set of requirements, including demonstrating that Impact Framework had real world users, and that we had addressed community requests and bug reports, and that we had suitably comprehensive test coverage, and that the documentation and the onboarding materials were all fit for purpose. Now that milestone has passed, development activity is going to be much more ad hoc and driven by the community, rather than following a development roadmap that's defined by the Green Software Foundation. Our efforts at the GSF will now be in driving adoption instead.

Chris Skipper: How does the Impact Framework engage with the broader tech community to encourage adoption? Can you tell us what steps the GSF is taking to include the community as part of the IF development?

Joseph Cook: Impact Framework is used by all kinds of organizations, but it also has a thriving open source community. And most of the discussion with the community happens on GitHub, either through issues or on the discussion board. But we also have a Google group where we share updates and collect feedback. Open source development on Impact Framework is really fundamental. It's really baked into the very core of the project.
Instead of trying to ship Impact Framework with all the built-in features to connect to the thousands of different services and systems that people want to measure, we instead focused on making it really easy to build plugins, and then encouraged an open source community to develop, where people create their own plugins for all the features that they care about, and share them with each other on our Explorer website, which is like a free marketplace for Impact Framework plugins. This model actually makes the Impact Framework much more robust and much more stable, because we have a much greater diversity of voices influencing what Impact Framework can do and what it can connect to. It decentralizes the development of the project without compromising the core software, and it also means that our small development team doesn't shoulder the burden of maintaining a huge code base with lots of different brittle connectors to third party APIs and services. And going forward, we want to keep this community thriving and see thousands more Impact Framework plugins listed on the Explorer.

Chris Skipper: How do you see the Impact Framework setting new benchmarks for environmental responsibility in tech? Are there specific metrics or practices that you believe will influence industry standards?

Joseph Cook: Impact Framework is a lightweight piece of software for processing what we call manifest files. These are YAML files that follow a simple format that captures the architecture of the system that you're studying, all the observations that you've made about that system, and all of the operations that are applied to your data. I like to refer to these files as executable audits, because they mean that you don't just report emissions numbers anymore, you actually show your working too. And this enables the community to fork and modify your manifests and challenge you. And through iteration, you can come to crowdsourced consensus over your environmental reports.
We would love to see this radical transparency become the gold standard for environmental impact reporting for software. Not only that, but manifests can be the basis for experimentation or forecasting, and help decision makers to assess the environmental benefits of implementing some change. Imagine you're challenged about why you chose some specific action. Your manifests are your evidence. And we think this combination of transparency, reproducibility, composability, and openness is a unique selling point for Impact Framework, and it could transform the way projects and organizations report their emissions and introspect their own operations.

Chris Skipper: For listeners who are interested in getting involved with the Impact Framework, what are the ways they can contribute or support the project? Are there specific skills or areas where the community can make the most impact?

Joseph Cook: If you would like to get involved in Impact Framework, there are many ways to do so. If you're a developer, you can head to the GitHub, where we have plenty of open issues, including some specific good first issues to help people get started. If you want to build plugins, then you can download our template and use that to bootstrap your way in, and then submit your plugin to the Explorer using a simple typeform on our website. We always appreciate updates to the documentation too, and if you're interested in integrating Impact Framework into your systems, you can always reach out to research@greensoftware.foundation to discuss it with us directly. We're always happy to help. If you just want to test the water, or you have general questions about Impact Framework, you can start discussions on our GitHub discussion board or communicate via our Google group, IF-community@greensoftware.foundation.

Chris Skipper: Awesome. So I'd like to thank Naveen, Srini, and Joseph for their contributions to this episode.
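For readers who want a concrete picture of the manifest-and-CLI workflow the guests describe, here is a minimal sketch. This is illustrative only: the plugin name, package path, and field values are assumptions based on the public Impact Framework documentation, not details given in this episode; real plugin names are listed on the IF Explorer.

```yaml
# A minimal (hypothetical) Impact Framework manifest.
# Plugins are declared once under `initialize`, then referenced
# in each component's pipeline under `tree`.
name: demo-web-app
description: Illustrative manifest for a single server component
initialize:
  plugins:
    # Assumed example plugin converting CPU utilization into energy.
    cpu-to-energy:
      method: TeadsCurve
      path: '@grnsft/if-unofficial-plugins'
tree:
  children:
    web-server:
      pipeline:
        compute:
          - cpu-to-energy
      inputs:
        - timestamp: '2025-02-13T00:00:00Z'
          duration: 3600          # observation window in seconds
          cpu/utilization: 45     # percent, from your own telemetry
```

Running it would look something like `if-run --manifest demo.yml --output results.yml`: the CLI executes the pipeline over each observation and writes an enriched copy of the manifest, so inputs, plugins, and outputs travel together in one auditable, re-executable file, which is the "executable audit" idea Joseph describes above.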
Before we finish off this episode, I have a few events that need announcing.

Starting us off, we have an event that will be happening today, the date of the publication of this episode, February the 13th, 2025, at 5pm CET in Utrecht, Netherlands. Any Netherlands-based listeners, you're invited to a Green Software Community Meetup today from 5pm until 8pm at Werkspoorkathedraal. Join us for a free in-person event to kickstart a more sustainable year in tech. You'll hear insightful talks about reducing your software's energy footprint, scaling down for greener computing, and building a grassroots digital sustainability movement. This is a great opportunity to connect with like-minded professionals, share ideas, and be part of a growing Dutch community that's dedicated to building a greener tech future. Food and drinks are provided free of charge.

Next up is an event in Brighton in the UK, happening on February the 19th from 6pm to 8pm at Runway East, which features Senior Digital and Sustainability Manager for OVO, Mark Buss, speaking about the challenges of advocating for digital sustainability within his company. The talk will also be live streamed, so we will have a link in the show notes below for that.

Next up, for any Spanish listeners, we have the first ever meetup of the Green Software Community in Spain, which will be happening online at 6pm on February the 20th. Día Zero, Comunidad Meetup Green Software Foundation España, will be a chance for you to discuss how to collaborate with other people passionate about climate change and green software. And we'll have a link to that in the show notes below too.

Next up, down under in Australia, on February the 20th at 6pm AEDT in Melbourne, we have Digging Deeper into Digital Sustainability: How to design and build tech solutions. This will be happening at ChargeFox.
Katherine Buzza will be talking about the impact that software is having on the world's carbon emissions, and how to align your career in tech with the decarbonized future we can all play a role in creating.

Next up, another UK event, on February the 27th at 6pm GMT in London. Practical Advice for Responsible AI will be held in person at the Adaptavist offices, with talks about Green AI from Charles Humble and AI governance from Jovita Tam. Click the link below to find out more.

And finally on our events list, GSF Oslo will be having its February meetup on the 27th of February, in person at the Accenture offices from 5pm until 8pm. Come along to find out how leveraging data and technology can drive sustainability initiatives and enhance security measures, and dive into green AI, with talks from Abhishek Dewangan and Johnny Mauland. Details in the podcast notes below.

So that's the end of this episode about the Impact Framework project at the GSF. I hope you enjoyed the podcast. To listen to more podcasts about the Green Software Foundation, please visit podcast.greensoftware.foundation, and we'll see you on the next episode. Bye for now!
Jan 23, 2025 • 13min

Backstage: Carbon Aware SDK

In this episode, we go behind the scenes of the Carbon Aware SDK, a groundbreaking tool enabling developers to reduce software emissions by running workloads where and when energy is greenest. Featuring insights from Vaughan Knight, chair and project lead of the SDK, the episode dives into its origins, real-world applications, challenges, and milestones, including early contributions from UBS and Microsoft and its recent 1.7 release with NPM and Java libraries. Learn about how the SDK supports Software Carbon Intensity (SCI) metrics, practical examples of carbon-aware workload scheduling, and the roadmap for expanding developer resources and geolocation-based solutions.

Learn more about our people: Vaughan Knight: LinkedIn

Find out more about the GSF: The Green Software Foundation Website | Sign up to the Green Software Foundation Newsletter

Resources: Carbon Aware SDK

If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts | Follow and rate on Spotify | Watch our videos on The Green Software Foundation YouTube Channel! | Connect with us on Twitter, Github and LinkedIn!
Jan 16, 2025 • 45min

Deep Green Technologies

In this episode of Environment Variables, host Chris Adams sits down with Mark Bjornsgaard of Deep Green to explore a transformative approach to data center design and sustainability. Mark shares insights into how Deep Green reimagines traditional data centers by co-locating them in urban areas to provide heat reuse for facilities like swimming pools, district heating systems, and industrial processes. They discuss the challenges of planning and policy, the rise of high-density computing driven by AI, and the potential for data centers to become integral components of community infrastructure. Tune in to learn about the intersection of digital innovation and environmental responsibility, and how new business models can turn waste into opportunity.

Learn more about our people: Chris Adams: LinkedIn | GitHub | Website; Mark Bjornsgaard: LinkedIn | Website

Find out more about the GSF: The Green Software Foundation Website | Sign up to the Green Software Foundation Newsletter

Resources: Mark Bjornsgaard on LinkedIn: Dell's OCP Solutions Propel AI Innovation [07:52] | Civo [37:31] | Real Time Cloud | GSF

If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts | Follow and rate on Spotify | Watch our videos on The Green Software Foundation YouTube Channel! | Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Mark Bjornsgaard: The government does need to legislate. There is just not enough structure and there's not enough impetus for people to do the right thing. But also, and particularly in the UK, planning is a huge, huge hurdle. I never really understood that until we'd been working with Deep Green on, you know, building data centers. It is breathtaking how Kafkaesque the planning system in the UK is. It's just beyond insane.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation.
In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams.

Okay, Mark, a few years back, when people were asked what a data center was, if they knew what one was at all, they might talk about some kind of room or cupboard full of a few machines, maybe in a rack inside an unused room inside a building, for example. But these days, in the 2020s, people are more likely to talk about a warehouse full of hyperscale data servers in a building which is maybe the size of a football field or larger, the kind of thing run by massive firms like Google, Microsoft and Amazon, for example. Now, as I understand it, you work with data centers too, but they can take a rather different shape and interact rather differently with the built environment. So for those who've never heard of Deep Green, could you give a brief introduction to your approach to building data centers, and how that shapes the way they work with the surrounding area, for example, communities?

Mark Bjornsgaard: Yeah. So, as you say, most data centers are built in the middle of nowhere, and the vast majority are built without heat reuse. So the vast majority simply eject the heat that comes out of the computers. Data centers, we know, use two to four percent of the world's electricity supply, and computers themselves are incredibly efficient electric heaters. So 97 percent of the electrons that go into a computer come out as heat.
So you've got us as a species, in a climate emergency, taking two to four percent of the world's electricity supply, converting it into heat, and then ejecting it into the atmosphere, which 10 years ago might have sounded kind of plausible, or even sort of necessary. But in a world, as I said, in a climate emergency, that doesn't look so clever. So the difference between Deep Green and most other data centers is we are building the data center where the heat can be reused. It's very hard to transport heat, but relatively easy to transport electrons, so you take the data center to where the heat's required. So that's what we do. We build smaller data centers, co-located where heat's required. Now that might be a laundry, it might be a distillery, it might be food production, it might be antibiotic production, it might be a swimming pool, but more often than not, it's what's called a district heating system: these large centralized heat networks that, through super insulated pipes, supply heat to large areas of different cities. We're not very good at heat networks in the UK specifically, but the government is certainly planning for us to get a lot better at them in the years to come. So that's where we're anchored. You don't build them in the middle of nowhere, you build them where they're required.

There's a further caveat, and a kind of context to this, I suppose, if you'd like. Up until the point where AI started to become part of our everyday lives, those normal data centers weren't on very much. They're only on 20, 30 percent of the time, and they don't actually generate very good waste heat. So you can certainly forgive the great and the good of the data center industry for not necessarily trying too hard to reuse heat in the old world.
But in the world that's coming, where we've got these incredibly dense racks of NVIDIA and other chips, you know, utilising a huge amount more energy than data centers had previously, those racks are on 70, 80 percent of the time, and they're generating an enormous amount of heat, and the heat's relatively high grade. It's not high grade heat as classified in industry, but it's good low grade heat. So at this point, the ability to reuse heat becomes a real thing. And that's why we exist.

Chris Adams: Ah, I see. Okay, so there's a couple of things I'd like to unpack if I may. So the first thing you said was, okay, if data centers were going to be built in a kind of hyperscale way, you're looking for cheap land, and that's why they're often miles away, and probably near things like a grid connection or fiber connection, all right? So that was one of the previous approaches, but the downside of that is that, well, you might have all this heat, but no one's able to use it, so you just vent it into the sky, so it's basically wasted in that way. Another way you could do this is you can actually build these where they interact more, where they're more complementary to the urban fabric, as it were, and then you can use that. But the thing that's been stopping that before is that essentially the data centers might have generated some heat, but it wasn't enough heat. So, you said low grade, and when you talk about low grade heat, that's like maybe 40 degrees, 50 degrees? Like, maybe you could expand on that, what that might mean, because I think for people who've never heard of the world of heat reuse, they don't know what high grade heat or low grade heat might be, or what some of these uses might be, for example.

Mark Bjornsgaard: Yes. Yeah. No.
So, as you say, low grade heat in industrial settings can be as high as a couple of hundred degrees. So when you say a data center is going to be producing heat at 45, 50, 55 degrees, that doesn't sound very warm at all. That said, 30 percent of all of the economy, 30 percent of all of industry, can use that very low grade heat. So for example, a swimming pool very reliably loses a degree of temperature every hour, and it only needs to be 30 degrees. So if you're trying to push heat from one side of a heat exchanger into the other, and you've got pool temperature water at 25 degrees on one side of that heat exchanger, and you've got our heat at 55 on the other side, then heat flows the right way.

When it comes to district heating systems and heat networks, with the old ones, again, it was quite difficult to plug data centers into them, because those old heat networks were quite high heat. They needed heat at 80, 90 degrees. So if you were a data center and you said, I'll give you heat at 35 degrees, it really wasn't that useful. Now, fifth generation district heating systems, the ones that we're building in the UK and the ones that are beginning to be built elsewhere in the world, can use very much lower temperature heat, because the buildings themselves are better insulated. So the whole industrial ecology starts to make sense, because lots more offtakers can use this relatively low grade heat.

Chris Adams: Ah, I see. And you also said one other thing about, this is kind of one of the flip sides of massively more dense compute. Here's one thing we've spoken about before. People talk about, okay, there is worry about data centers, or AI data centers basically, being massively more dense. Like, for example, I think I saw you share a link on LinkedIn which kind of blew my mind.
Like, some of these new racks from Dell can have like half a megawatt of...

Mark Bjornsgaard: Half a megawatt per rack.

Chris Adams: And I couldn't really picture what that was. I know it's around 30 times, more than 30 times, what you might have for an enterprise data center rack. So that's quite a lot of energy there. But what does half a megawatt even look like for most people? Because it's really hard to...

Mark Bjornsgaard: It's really, yeah, it is, it's sort of so vague, it's very hard to get your head around, isn't it? So I always like to think of it in terms of the boiler on your wall at home. That's going to be about 10 to 20 kilowatts, right? Your boiler at home. So that one Dell rack produces about 50 times the amount of heat, on the basis that 97 percent of the electrons that go into it come out as heat. That 500 kilowatt rack is producing anywhere between 30, 40, 50 times more heat than the boiler on the wall of your house. An unfathomable amount of heat. Then if you look at it in the context of a normal data center: if you go into a conventional data center now, you might have rack densities of between 7 and 12 kilowatts a rack. So you're talking about, again, 20, 30 times the density of compute in a single space. Now for us, we love that, because we have the opposite problem of every other data center. We're space constrained, not power constrained.
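As a back-of-envelope check, the figures quoted here (half a megawatt per rack, 97 percent of electricity emerging as heat, a 10 to 20 kilowatt home boiler, 7 to 12 kilowatts for a conventional enterprise rack) can be sketched like this:

```python
# Back-of-envelope arithmetic for a dense AI rack, using only
# the rough figures quoted in the conversation above.

RACK_POWER_KW = 500             # "half a megawatt per rack"
HEAT_FRACTION = 0.97            # "97 percent of the electrons ... come out as heat"
BOILER_KW = (10, 20)            # typical home boiler output range
CONVENTIONAL_RACK_KW = (7, 12)  # typical enterprise rack density range

# Heat delivered by one rack, assuming nearly all power ends up as heat.
rack_heat_kw = RACK_POWER_KW * HEAT_FRACTION

# How many home boilers that heat output is equivalent to.
boiler_ratio = [rack_heat_kw / b for b in BOILER_KW]

# How many times denser than a conventional rack.
density_ratio = [RACK_POWER_KW / r for r in CONVENTIONAL_RACK_KW]

print(f"Heat from one rack: {rack_heat_kw:.0f} kW")
print(f"Equivalent home boilers: {min(boiler_ratio):.1f} to {max(boiler_ratio):.1f}")
print(f"Density vs conventional rack: {min(density_ratio):.1f}x to {max(density_ratio):.1f}x")
```

The ratios come out around 24 to 48 boilers' worth of heat and roughly 40 to 70 times a conventional rack's density, consistent with the "30, 40, 50 times" ballpark given above.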
So if we can go to a swimming pool and we can heat a very large swimming pool with only two racks of gear, like a megawatt, that for us is amazing, because we spend much less money on building a data center: fencing, security, containers, fire suppressant systems, all the other gubbins that you'd have around a data center. When you compress them and squidge them down, you make them much easier to deploy in the fabric of our communities and society. And then you get these really crazy kind of stats. I was in a data center in Sacramento a couple of weeks ago, and you've got this massive data hall. It's meant to be one and a half megawatts, and it is one and a half megawatts of power, but the whole hall is empty. There are just three or four racks at the end of the hall, because those racks are 130 kilowatts a rack. And so they've built a data center, the physical shell of the data center, for those rack densities, but they don't need all of that space. So actually, what's going on at the moment in the data center industry, we believe, is this sort of giant misallocation of capital, where people are building data centers in the old way when they actually should be building them for the world that's emerging, which is these really high density racks that look nothing like conventional data centers.

Chris Adams: Okay, that's interesting, and I'd like to come back to some of the things you said there about what the implications of massively more dense compute might actually be. But you also said a few interesting things about this idea of community involvement. Because one thing that I've never heard anyone else in the data center industry, or even the wider tech industry, talk about was this idea of borrowing the idea of a social license to operate.
This is an idea that people talk about in, say, fossil fuels and oil majors and stuff like that. And you said, well, this is one way that we can actually keep that social license to operate: by offering a much, much more equitable deal with the communities we're trying to integrate with, rather than having this kind of standoffish approach. Maybe you could talk a little bit more about that, because I don't really hear people saying that much about data centers. They usually say, "well, you should be grateful, because without us, you wouldn't have your cat pics," and so on. It does feel like that's missing a huge part of why people might push back against data centers, or, you know, whatever the deal is when someone comes in and says, "hey, can we build a bunch of digital infrastructure in your part of the world," for example.

Mark Bjornsgaard: Yeah, I mean, as you say, we talk a lot about a social license to operate, because we believe that in the future you will get more and more pushback from communities around having data centers in their backyard, because you've got these huge sheds which are hogging and clogging transmission grids. These transmission grids tend to be built with public money, and then a commercial enterprise dumps down there and says, "well, I want 100 megawatts," and then suddenly you realize that half the streets in the area can't put in heat pumps because there's no more grid capacity in the substations, or they can't have electric cars. So we think that social license to operate will be increasingly important in the future, no doubt. But the other sort of flip side of this is that data centers don't really employ anyone, right?
I think the data center industry is a bit naughty when it says, "oh, you know, we're going to build a data center, we're going to employ 4,000 people." That's actually not true. You might employ 4,000 people while it's being built, but the reality is, once a data center is up and running, the number of people who have to be employed in the actual vicinity is very low. But if you build a data center and then you say, "I'm going to reuse the heat with an aquaculture park or a distillery or a laundry," suddenly you produce genuine net new jobs in a local area. So not only is the environmental bit of the social license talk very important, we think increasingly data centers are going to be looked on as having to be good citizens in terms of, you know, employment and doing the right thing with the community, and we've already seen a lot of this, right? We've had moratoriums on data centers in the Netherlands and in Ireland and Singapore. We think we're in this sort of grace period in the transition. In the next three to five years, the number of electrons is going to become very constrained. We're not actually yet in the bottleneck, but in the next three to five years we're going to start going into that period of time where there just genuinely are not enough electrons to go around. And we are going to have to make genuine choices about what we do with scarce electrons. And at that point, we believe, if you're a data center and you're not doing the right thing, then at the very least your operations are going to be severely curtailed.
Stroke, you're going to be in the midst of a full scale culture war, which you just don't want to go anywhere near. Right?

Chris Adams: Okay, so you said a couple of things which I think might be worth exploring or diving into there. One of the key things I'm getting from you is that, yes, you might be able to force some changes through quickly, but we need this transition itself to be sustainable, and if you push through some changes now, you'll end up with so much pushback that you won't be able to sustain that rate of change as we move away from fossil fuels to a society based on electrification in many cases.

Mark Bjornsgaard: That's exactly it. Yeah, exactly. So, yeah, we are energy and software folk, and we're venture capitalists by trade. We don't take the data center industry at sort of face value. What we see is 70 percent of the UK's total energy budget being the heating of spaces. So we're looking at it from the other end of the telescope. We're saying, well, what's the fastest, quickest way of heating all our shops and offices and factories? And the reality is, the quickest, fastest way of doing that is using computers as electric heaters. The fact that they happen to be there as data centers is almost, you know, just a happy circumstance for us. We're solving what we see as the meta problem, if you like, and just seeing what tools and capabilities we have to be able to solve that problem.

Chris Adams: Okay, all right, so this is actually one thing that you...
Because I think this is the thing that some of us forget about when we just think about IT: there are other transitions, other changes, that need to take place. And before you came on to this, I remember I saw you did a talk about these four wicked problems related to climate. And I wonder if you might expand on some of that, because I think it's quite useful context to help people who are thinking about their role as a technologist. Like, why would you even care about heat reuse, and why would you care about anything other than just the efficiency of your code directly, rather than this kind of wider, more systemic view, for example?

Mark Bjornsgaard: Yeah, of course. We all see our worlds in terms of what's in front of us, and that's completely understandable. As you say, we frame heat reuse and the electrification of heat in the context of what we think of as four wicked problems. And these wicked problems make up roughly about 50 percent of the entire transition. So if we solve these four problems, then somewhere around 50 percent of the challenge of the transition will have taken place. And those problems are: the heating of spaces, so all of our homes and offices; the industrial use of heat, so all industrial processes that need to be decarbonized and electrified; and then controlled environment agriculture and what's going on with how we grow stuff. The sustainability movement is rapidly casting its eye across agriculture and realizing that how we feed 8 billion people on this planet is actually a problem: some 70 to 80 percent of all of our food is intensively farmed and based on fossil fuels. And then the fourth wicked problem is carbon sequestration. So how do you actually sequester carbon out of the atmosphere? That is also a problem around heating.
If you take those four wicked problems, they can all be somewhat or completely solved with data center heat, with low grade heat. And so we're sitting there saying, well, look, if those data centers are going to be built anyway, if we already need to spend between 10 and 20 percent of our country's entire electricity budget on data centers, then all logic says you build those data centers where you can use the electron twice. The electron can do its funky thing in the data center, we can have all that utility, and then, so long as you've done it in the right way, like we're doing it, you can just pass on 97 percent of that electron in the form of heat, for it to then be used in those four wicked problems. So to us, there's a sort of beautiful, immutable logic there, particularly in a world where you haven't got enough electrons. If you had bountiful, you know, fusion, fission, whichever the good nuclear bit is, if you had a bountiful electricity supply, then you might not be that bothered. But the reality is, in the next 10, 20 years, we're going to be so constrained by the amount of electricity that we have, we're going to have to get really good at being as efficient as we can.

Chris Adams: And I suppose, I mean, I'm calling you from Germany, where almost all of our heating is still coming from combustion, burning gas and stuff like that, for example, which is expensive. And even when you look at the UK, gas is, I think, the largest source of heating by quite a long way. And these are things which are volatile, and where you're exposed to all kinds of changes in prices. And this is one thing that we probably do need to move away from.
So that seems to be one of the approaches that you're looking at here, I suppose. This is one thing I should ask you about then, because we spoke a little bit about this being a thing that is valued, and this is a shift in the role that digital infrastructure plays in wider society. We've also spoken about how, in the UK, there is this goal to have as close as possible to a fossil free grid by 2030, which basically means getting rid of a bunch of this heating from burning fossil fuels, right? Now that's a really ambitious goal, and as someone who grew up in the UK, I'm like, "wow, this is really cool. I'm really impressed by that kind of ambition." And it's also one thing we've seen where a number of larger providers have basically said, "well, this 2030 goal, it was a nice idea, but the moon has moved," to quote Microsoft president Brad Smith, saying, "oh yeah, we're not pushing for 2030 anymore." And I kind of feel like, if there is this goal of 2030 in the UK, for example, and we have very similar goals in other parts of the world, what needs to happen at policy level to actually make this possible for data centers, for digital infrastructure? Because right now, I'm not aware of how policy supports or values this different way of thinking about the role that digital infrastructure plays. But we have seen, with the new government in the UK, they do seem to be very keen on having a massive rollout of infrastructure. So what's the deal here? How do we square this circle, basically?

Mark Bjornsgaard: The declaration of data centers as critical infrastructure isn't quite as good news as it looks.
That is predicated on regulatory capture: if you declare data centers as critical infrastructure, you can then basically ride roughshod over any local objections. So the fact that the Labour government announced that isn't necessarily a good thing. It's probably the opposite. In Europe, we've got the EED, the European Energy Efficiency Directive I think it is, which effectively says that, certainly in Germany, by 2028 you won't be able to build a new data center without reusing 20 percent of the heat. So there is already some sort of regulatory framework out there that's saying, "you've got to do the right thing. You've got to use green electrons. You've got to reuse the heat." So that's good. The reality is, as we all know, governments probably have to use carrot and stick, a little bit more stick and a little bit more carrot. Those people who are being good citizens and reusing heat should get some brownie points and should get some economic benefit from that, and those who aren't should increasingly be penalised. Now, you'd expect us to say that, because obviously we're on what we think of as the right side of history. So I think the short answer is the government does need to legislate. There is just not enough structure and there's not enough impetus for people to do the right thing. But also, particularly in the UK, planning is a huge, huge hurdle. I never really understood that until we'd been working with Deep Green, you know, building data centers. It is breathtaking how Kafka-esque the planning system in the UK is. It's just beyond insane. It's crazy.
So you've got regulations like: your lease from a council on a district heating system means that you only got that lease because you said you'd use green energy. If you put a data center within the environment of your district heating system, because we've got generators that kick in for redundancy and resiliency, that then means you're in contravention of your lease. So instead of somebody just going, "yeah, that's a shit idea, let's not do that, put a cross through that," it's an unfathomably complicated year-long process. For one pool we're trying to qualify, we've had to resubmit planning seven times. So this is beyond rank stupidity; it's just a madness in this country, in the UK at least. We hate success in this country. We just hate success. This will be the third business that we develop in the UK and then scale in the US, because in this country we just can't get out of our own way. It's really sad. And, you know, everyone says, "oh, we'll try and change." It's very simple: you either want people to do this or you don't. No amount of meetings or nice coffees or platitudes or strongly worded emails changes that. Do you know what I mean? It's very fucking simple. Can I build a data center or not? If I can't, then I can't. So this country is very difficult to do this in, and I suspect a lot of Europe is too. We need government to get out of its own way and clear a path for us.

Chris Adams: So you said a couple of things that I think maybe we could go into in a bit more detail before we move on.
Because you said one of the things was regulation like the Energy Efficiency Directive, which is ideally one of the drivers of transparency for people operating digital infrastructure. You know, to comply with this, you need to be able to list information like the carbon intensity of the power, how clean the power is, how much of it is coming from, say, fossil fuels, how much water you're using, and things like this. And presumably these are some of the metrics that you might be able to look good on, as it were, or this way of building infrastructure might look a bit better on, for example. Like, if you're reusing some of the heat, does that have an implication for how much water might be used, for example, and things like that?

Mark Bjornsgaard: Yes. And you've got to be very careful that it's not whack-a-mole, that you don't drop your PUE but then raise something else. If you use evaporative cooling, you might drop your PUE, the Power Usage Effectiveness of your data center, but then you massively increase the amount of water you use. So there is a balance to be struck across all of these metrics. That's why there isn't one perfect measure, if you like.
Certainly in our case, we don't use any water. With the way that we cool, direct-to-chip cooling and the other types of cooling we use, we don't use any water. And, you know, as far as I understand, and I'm not a techie expert in this area, using water is really a question of just how much margin you're prepared to sacrifice. It is perfectly possible to cool a data center without using any water. It's just that you make a small amount more money on each data center if you use water. The great and good of the data center industry could always be good environmental citizens: they could choose to use no water and just make a little bit less money.

Chris Adams: Ah, so you said something quite interesting there. So you're using essentially liquid cooling. As I understand it, liquid cooling in cars is way more efficient than air cooling, which is why we've moved over. Presumably it's the same kind of idea here, so that would result in a more efficient system that you'd be looking at using here. Okay, and that helps me understand how that might actually fit into heating a swimming pool or something like that. So you've got an efficient way to move the heat from one place to another, and, like, the whole point is people use water for heat storage and stuff like that, so it makes total sense; I can see why you'd have a nice chunky kind of sink, I suppose. And if these are the things that you're doing, then I suppose there's a chance to be more transparent with the kind of figures you're using for this. So this might be, okay, that's interesting.
All right, so if I could, I'd like to ask you a little bit about this AI question, because the approach you're describing here, of having lots of distributed, smaller data centers built into the kind of fabric around us, seems quite a bit different to the massive, centralized, gigascale data center paradigm that people talk about. I've always assumed that you need massive centralized data centers to do some of the AI workload stuff, because you need to have these things networked with each other. The way you're describing it sounds like that might not be the case; things not being in the same building might not be the showstopper that people initially thought it was. Could you maybe talk a little bit about this? Because it suggests a kind of post-cloud way of thinking about computing, for example. And I want to ask, do you actually need a mega cluster? Or is there an alternative that you're suggesting here?

Mark Bjornsgaard: The truth is, at the moment you need the mega clusters. When we think of training large language models, at the moment those mega clusters generally need to be all in one place. The trouble is, as data centers grow bigger and bigger, and as you build gigawatt data center campuses and even larger, when we think of the trillion dollar cluster, the amount of compute we're going to need to kind of enable artificial general intelligence, I think we're going to need something like 100 gigawatts of power, right? A 100 gigawatt data center. Now, when you start to build data centers at these sizes, you actually start to have a distributed problem anyway, because each node running a version of the model is physically so far away from the other nodes.
You've got a distribution problem almost by default, by size, if that makes any sense. So we've certainly got to be better at networking the architectures around large language models. And there isn't very much academic research on this; there is a bit. We're doing a lot of work with NVIDIA and Nokia around this. The Chinese, we think, are doing a lot more work around this than other people, which is in itself interesting as we see a race to AGI emerging. So certainly the networking between data centers is going to become increasingly important. In the last six months, you've seen Microsoft spending billions laying massive fiber pipes between its AI data centers, because even a 100 megawatt data center needs to be physically clustered with other 100 megawatt data centers. But that's all in the world of training, where the models are learning, and that's great, and that's going to go on. The world that will emerge is obviously mostly going to be inference. When you think of a world of AI in 10 years' time, actually 90 percent is going to be inference and 10 percent is going to be training. So at Deep Green, we're not necessarily trying to win the large language model, massive cluster game. What we're building is the compute substrate for the future, where there will need to be thousands of megawatts of smaller data centers, smaller cluster sizes, much closer to where we all live and work. This compute substrate will be required in the future.

Chris Adams: Okay. All right. So basically, what I think you're saying, or what I'm taking away from that, is that there's almost a typology of different kinds of digital infrastructure that you might think about.
So rather than one model being inherently better than the other, you probably need different setups depending on the different kinds of roles they play. And you can see people talking a little bit about this with the whole idea of edge computing. But it sounds like for certain things there may be a world where you do have big box, Walmart-style, out-of-town data centers doing certain things, and you may have to accept that you're not able to use some of the waste heat, or you may need to co-locate things to use it, to have some kind of clusters. And in China you can see some examples of people co-locating energy generation with industry and things like that. But then there's this other end of the scale, which is a more distributed thing, and that's something that you're looking at building: the kind of data centers that might actually integrate with, say, cities, where they're closer to where the compute is actually being used. But you're trying to go for a more integrated approach by making as many of the waste outputs something that can be reused by other people. Because presumably there's a cost to heating a swimming pool, it's non-zero, and if you've got the heat coming from what you're using, then that's an economic benefit, something that you might write into, like, community benefit agreements and things like that.

Mark Bjornsgaard: Yeah.
If you think about some of the inference use cases that are already emerging, whether that's you interfacing or chatting, maybe your kids are talking to a chatbot and trying to learn, and they've got some rendering visualization which takes a lot of GPU compute. It is better that those GPUs are co-located, or located somewhere closer to where the user is, particularly in the US, but not just the US: across Europe and other large land masses, you want the compute to be physically closer to people, to where they're living and working. So that is very important. But of course that world is just emerging. That said, there's already a lot of refinement training: there are already a lot of people taking the outputs of the very large language models, applying their own data to them, and then refining and training them. And then there's a whole bunch of other use cases around medical science and fluid dynamics and all the other stuff that the robots are going to do for us. That world is now, as we know, emerging fast. That's the world that we're really building for: smaller compute clusters, much closer to where people live and work. And then, as you say, you start to change the economics of how society works. You know, in the UK, we're spending 1.5 billion pounds heating our swimming pools every year. Really, we shouldn't be spending anywhere near that, because those pools should be being heated by recaptured heat, if we allow ourselves to build the data center infrastructure in the right way. The interesting thing about the UK particularly, and other countries, is that there's lots of fiber in the ground. So when we first started building data centers, we talked about them following the fiber. Now, data centers don't really need to do that.
There's plenty of fiber around. You can pretty much build a data center wherever you like. Now people are saying they're following the power, but the third phase of data center development, we see, is people following the heat. So first of all, you went to where the fiber is, then you went to where the power is, and that's the era we're in now, but very quickly you're going to build data centers where the heat's required.

Chris Adams: I see, where there's presumably someone, like an offtaker, who would use that and then be in favor of something being set up in their neighborhood or as part of their project. Okay, so you said one thing that was, I think, quite interesting there: there's loads of fiber, more fiber than we thought, all this kind of dark fiber from 20 years ago, the last boom and bust, that people might reuse some of. And some of this could feel a little bit academic, a little bit like, "okay, what's happening in the future?" But as I understand it, if I'm a developer, I might think, "oh, this is kind of cool. I like the idea of actually being able to run my applications somewhere like this, in this kind of environment, because I think it's maybe more interesting. And if I can have the same convenience and the same kind of experience as a developer deploying code, then why not try this out?" Is it something that people can use? Like, if I'm used to deploying things into virtual private servers or Kubernetes, is there something like that? How do I actually try out or use some of this stuff, for example?

Mark Bjornsgaard: Yes, because we are just a dumb data center operator.
We are making the capacity of our data centers available, the physical space in our data centers, for people like Amazon and Microsoft and Google and loads of other people to come and put their kit in. And the minute you put your kit in our data center, it will be doing something useful with the heat. As you say, there are a few cloud providers who are already partnering with us. Our main partner, who has been incredibly supportive of us for years, is a platform called Civo. So yeah, again, a UK business paying UK tax. If you as a developer want a cloud service that is every bit as good as AWS or Google or Azure, and you want it to be green, then just go to Civo. Civo are using our data centers. So you as a developer shouldn't have to make any compromises at all, right? You shouldn't have to worry about any of this stuff. This should all be abstracted away, and in time it will be, where you can just be assured that when you're running code, it's being run in the most sustainable way possible. Now, part of the problem with the large clouds is that their ESG reporting, their sustainability reporting, is pretty shonky, stroke, complete bullshit. So part of the problem is that a lot of cloud services at the moment aren't really taking this very seriously. And it is certainly very hard, as a developer or as an end user of a cloud platform, to know how green or not your cloud is. The reality is, any cloud platform that claims to be green just by using green electrons is ignoring 90 percent of the problem, right? 90 percent of the carbon in a data center is in the kit itself: what's called scope three, the carbon that has been used to manufacture the computers themselves.
So however much you jump up and down and say, "I'm doing really well because I'm buying green electricity," that's pretty much, I mean, it's not... Chris Adams: 10 percent rather than the other 90. Mark Bjornsgaard: Exactly. So really, as we all get better at this, as reporting becomes better and as people start to come down on greenwashing, as developers, as a whole community, we will have much, much better visibility about how green our clouds really are. But the reality is a green cloud comes down to the carbon in the compute and what you're doing to mitigate, reduce and remove that carbon. Chris Adams: Okay. Alright, so there's one project that we work on in the Green Software Foundation that may be relevant for this, a project called the Realtime Cloud Project, where there is an effort to basically work out the carbon intensity, on a kind of per-hour basis, for every single cloud region that we have. It would be wonderful to have groups like Civo share something like this, because the whole effort is to have some standardized datasets, some standardized numbers that you can trust and you can optimize for. And if what you've described is basically saying that running stuff inside this infrastructure is essentially fungible with running it in other infrastructure, but you're able to reflect that in a lower carbon intensity, or lower embodied energy, or lower water usage, or any of the other metrics that are available, then that feels like a useful thing to allow people to do. And it sounds like that is something people can do today, rather than this being a conversation about 2026 or 2027, by the sounds of things. Mark Bjornsgaard: Well, to be clear, we're still bringing our capacity online now.
So we're about a year in since raising the money from Octopus: designing, building, and now getting shovels in the ground and actually getting the first wave of data centers built. We've deliberately not said anything about this, because we didn't want to be part of the problem; we want to be very much part of the solution. Whatever we report next year, we'll be holding our hands up saying, this is as good as it gets at the moment, and we're going to improve it. But I think it's incumbent on all of us to be very transparent about that. No one's trying to be perfect. No one's going to get shot down for not being perfect. I think it's much more about the attitude you bring to it as a business, rather than saying, "this is the law and I'm telling you it's like this," when we all know that's not true. It's much better to be more tentative about it and say, "look, we don't know everything, but we think our scope three is this, and we are removing it using these removals." And if somebody says, "I don't like those removals, I think they're nonsense," then you say, "well, okay, but we are paying, you know, $250 a tonne for that carbon, so they're not complete bullshit." You know what I mean? In this next phase, it's all about hopefully not giving each other too hard a time, but actually getting a bit more transparency and clarity on where we are, because only then can we start chipping away at it, right? Chris Adams: Yeah. And in the UK, we have very clear targets, at the very least for 2030, to get there, for example. Mark Bjornsgaard: Quite, which is incredibly short. Chris Adams: It's almost tomorrow, isn't it? Yeah. Mark Bjornsgaard: I'm so old that the years pass like days these days, but yeah, five years doesn't feel very long at all, frankly.
Yeah. Chris Adams: I can definitely sympathize with that, because we are a non-profit focusing on a fossil-free internet by 2030, so that is very acute for us as well. All right, Mark, I've really enjoyed chatting with you, and I've learned a bunch from this wander through the world of digital infrastructure. We're just coming to the end of our time, so I want to ask: are there any projects or things you want to point people's attention to? Or if people want to find out more about the work you're doing, where should they be looking, for example? Mark Bjornsgaard: Yeah. If you're a developer, go to Civo. They're amazing people and it's an amazing platform, as I said, and the fastest, quickest way of supporting us is by using Civo. Or buy Hewlett Packard Enterprise GreenLake AI: whenever you buy HPE kit in the UK, and hopefully the US, you will have the option to land it in a Deep Green data center now. So increasingly, developers and businesses can make green choices just by searching out our partners. You'll almost certainly never come to us directly; you're going to be consuming cloud services via a third party. But asking your cloud service providers to land that kit in our data centers is the fastest, quickest way of helping us. Yeah. Chris Adams: Brilliant. Well, in that case, I'll speak to other friends to see if there's a way to filter cloud providers by "heats swimming pools" as one of the features when I'm looking for my cloud computing in future. Mark, this has been fun. I really enjoyed it. Thank you so much for making the time, especially given getting hit with COVID last week and everything. So once again, thank you for this. Take care of yourself and have a lovely week. All right, Mark. Mark Bjornsgaard: Thanks very much for having me. Thank you. Chris Adams: Hey everyone, thanks for listening.
Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
Jan 9, 2025 • 35min

Finding Signal Amongst the Noise in Carbon Aware Software

In this episode of Environment Variables, host Chris Adams is joined by Tammy Sukprasert, a PhD student at the University of Massachusetts Amherst, to dive deep into her research on carbon-aware computing. Tammy explores the concept of shifting computing workloads across time and space to reduce carbon emissions, focusing on the benefits and limitations of this approach. She explains how moving workloads to cleaner regions or delaying them until cleaner energy sources are available can help cut emissions, but also discusses the challenges that come with real-world constraints like server capacity and latency. Together they discuss the findings from her recent papers, including the differences between average and marginal carbon intensity signals and how they impact decision-making. The conversation highlights the complexity of achieving carbon savings and the need for better metrics and strategies in the world of software development.Learn more about our people:Chris Adams: LinkedIn | GitHub | WebsiteThanathorn (Tammy) Sukprasert: LinkedIn | GitHub | Google ScholarFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterNews:On the Limitations of Carbon-Aware Temporal and Spatial Workload Shifting in the Cloud | Proceedings of the Nineteenth European Conference on Computer Systems [03:25]On the Implications of Choosing Average versus Marginal Carbon Intensity Signals on Carbon-aware Optimizations | Proceedings of the 15th ACM International Conference on Future and Sustainable Energy Systems [22:12] Resources:Tammy's GitHub [19:00]CarbonScaler: Leveraging Cloud Workload Elasticity for Optimizing Carbon-Efficiency | Proceedings of the ACM on Measurement and Analysis of Computing Systems [33:19]If you enjoyed this episode then please either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github 
and LinkedIn!TRANSCRIPTION BELOW:Tammy Sukprasert: With that one hour job with perfect knowledge of one year, we can reduce the carbon emission of the whole world by 37%. Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Hello, and welcome to Environment Variables, where we bring you the latest insights and updates from the world of sustainable software development. I'm your host, Chris Adams. One of the oft-repeated quotes when people talk about sustainability in software is that if you can't measure it, then you can't manage it. And when it comes to working out the carbon footprint of a software application, a significant portion of that footprint comes from what we refer to as the carbon intensity of the electricity in use, i.e., how green it is. And there are various steps you can take: using the same application and the same code, you can make it greener by running it where the grid is greener. So if you were to choose to run it in Iceland, that's one example. Or you can choose to run the application at different times when the grid is greener, like when the sun is in the sky and your solar panels are working away. But how much greener can it get? And what else do we need to think about when trying to adopt ways or ideas like this?
Enter our guest for this episode, Tammy Sukprasert, a PhD student at the Laboratory of Advanced Software Systems and Sustainable Computing Lab at the University of Massachusetts Amherst. Tammy recently authored the paper On the Limitations of Carbon-Aware Temporal and Spatial Workload Shifting in the Cloud, which examines how shifting computing workloads across time and space can help cut emissions. Tammy, we're going to spend a bit of time talking about why you chose to work in this field, but to begin with, can I give you a bit of space to introduce yourself and what you do? Tammy Sukprasert: Hi, Chris. Thanks for having me here. I'm Tammy Sukprasert, a PhD student from the University of Massachusetts Amherst. I work on cloud and edge computing with a specific focus on decarbonizing computing. I'm currently calling you from Amherst, Massachusetts, and it's nice out here. Chris Adams: Cool. That's nice. It's snowing in Berlin, so I'm a little bit jealous, actually. Hi folks, if you are new to this podcast, my name is Chris Adams. I am the Director of Technology and Policy at the Green Web Foundation, one of the chairs of the Green Software Foundation Policy Working Group, and also the host of this podcast. Now, before we dive into the conversation with Tammy, a quick reminder: we will try to link to all the papers and all the projects on GitHub, and there will be extensive show notes as well as a transcript if there's anything you particularly missed. And I think that's pretty much it. Tammy, are you sitting comfortably? Tammy Sukprasert: Yep. Nice. Chris Adams: In that case, I guess I'll begin. All right.
We've linked to this in the show notes, but the paper title, On the Limitations of Carbon-Aware Temporal and Spatial Workload Shifting in the Cloud, does kind of give a clue about what this research might actually be about. But for those who are new to this idea, would you mind bringing listeners up to speed on what workloads are and what workload shifting is, when we talk about carbon-aware computing? Tammy Sukprasert: Sure. To understand what workload shifting is, we need to have some idea of why we can shift the workload in the first place. Carbon intensity is based on the contributions of the different energy sources in the electric grid, right? At different points in time, the demand changes, so there are different contributions from different sources. That's why there's variation in carbon emissions: there will be a high carbon period and a low carbon period. And because of that, instead of running the workload during the high carbon period, you can actually schedule the workload into a lower carbon period or a lower carbon region. For some workloads, you can delay the start time. The workload could be machine learning or some batch jobs, and instead of running right away when it was dispatched, during the high carbon period, you can delay the start time and run it during the low carbon period. And at the same time, there is another type of workload that you can move or shift around. That could be a web request or an inference request.
And instead of running your workload in your own region, you can look into other locations that have lower carbon intensity and migrate it there. Chris Adams: So let's say I'm using maybe a chatbot, something like ChatGPT, and I am in, say, Germany. Maybe it's dark, it's not very windy, and most of the power is coming from coal being burned on the grid. Rather than my request being served in Germany, it could plausibly be, say, forwarded to somewhere else in the world, as long as it's fast enough. So it might get forwarded to, say, Denmark, which is super windy instead, and that would mean it would be slightly greener, for example. That's what you were referring to when you spoke about the inference. And then the other thing you mentioned was a machine learning job or a video encoding thing. That's something that I might not be seeing myself, but it probably needs to happen within a few days or so. So it's important, but it's not urgent, and because there's a bit of flexibility, I can choose when to do it, to minimize the environmental impact of the extra demand being put onto the grid. I think that's what you're saying there, right? Tammy Sukprasert: Right. So basically you align your job schedule with the low carbon periods. Yeah, that's the key idea of the shifting. Chris Adams: Gotcha. And you spoke about one case where I'm doing something through time, the temporal thing: I either bring it forward or wait till later. And then there's a spatial idea, which is me just moving it somewhere else. It might be happening at the same time, but in Denmark, for example, or Iceland, rather than in Germany. Yeah? Tammy Sukprasert: Yes, that's correct. Chris Adams: Okay, cool. So, okay.
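The two strategies Tammy describes, picking a cleaner hour and picking a cleaner region, can be sketched in a few lines of Python. Everything here is hypothetical: the forecast numbers, the region names, and the `best_slot` helper are invented for illustration; real intensity data would come from a provider such as Electricity Maps.

```python
# Hypothetical hourly carbon-intensity forecasts (gCO2/kWh) per region.
# Real values would come from a data provider such as Electricity Maps.
forecasts = {
    "germany": [520, 480, 350, 610],
    "denmark": [140, 90, 110, 200],
    "sweden":  [30, 28, 25, 27],
}

def best_slot(forecasts, deadline_hours):
    """Pick the (region, hour) pair with the lowest forecast intensity
    among the slots that still meet the job's deadline."""
    candidates = [
        (intensity, region, hour)
        for region, series in forecasts.items()
        for hour, intensity in enumerate(series[:deadline_hours])
    ]
    intensity, region, hour = min(candidates)
    return region, hour, intensity

# A delay-tolerant batch job that must start within the next 3 hours:
print(best_slot(forecasts, deadline_hours=3))  # -> ('sweden', 2, 25)
```

Restricting `forecasts` to a single region turns the same helper into pure temporal shifting; restricting `deadline_hours` to 1 turns it into pure spatial shifting.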
We've got a good idea about what some of this might be. And a question I might ask is, why is this interesting to you? How did you end up wanting to research this in the first place? Tammy Sukprasert: Yeah. So there are many works that look into the benefits of carbon reduction from time shifting or spatial shifting, but in limited settings, i.e., a small number of regions or specific types of jobs. People only look into spatial shifting, or only into temporal shifting, or maybe they only look into a small number of regions. But we were wondering, what if we look into both spatial and temporal, with the big picture of the whole world? So instead of looking into a few regions, we look into the 123 regions that we have in our dataset, and we want to see the broad impact of temporal and spatial shifting as a whole. Chris Adams: I see. Okay. So thanks, Tammy. So for this research paper, as I understand it, you decided to see what kind of savings you really can achieve with things like carbon-aware computing, and a little bit about what kind of conditions might be necessary for these savings to be possible. So would you mind expanding on some of this? We can start simple first, and then we can work our way up. So, what was the ideal scenario for the savings? And we can go from there. Tammy Sukprasert: All right. So with the current state of the world, the average carbon intensity is about 368 grams per kilowatt hour. And to achieve as much savings as possible in terms of carbon reduction, you would want to migrate your workload to Sweden, which is the region with the lowest carbon intensity in our dataset.
And migrating all the workload to Sweden, you can actually achieve a 96 percent carbon reduction for the whole world. Chris Adams: Okay, so what you're talking about there is you've basically gone from an average figure for the carbon intensity of electricity to much, much cleaner electricity. And that's the ideal scenario, that's what you've essentially done: you've moved all of the computing jobs to the cleanest possible electricity. So where do we go from here, then? Are there other constraints and things we know we need to take into account when doing this? Tammy Sukprasert: Great. So of course, Sweden cannot take all the workloads in the world, right? So we were like, okay, instead of just moving everything to Sweden, what if we have capacity constraints? So we look into the scenario where every region in the world has an idle capacity of 50%. We're trying to be generous here, because we want to understand the impact of the idle capacity on carbon reduction, right? So with every region having 50 percent idle capacity to absorb jobs from other regions, not everyone can migrate to Sweden; some regions have to migrate somewhere else. So, with that, the savings drop from 96 percent global reduction to 51 percent. Chris Adams: Okay. Tammy Sukprasert: If not everyone can go to Sweden, yeah. Chris Adams: All right. That's still not bad. And when you're talking about capacity, you've used the word region here, and by region, I think that's a cloud region, like say AWS West or something like that. That's what you're referring to there. And there's maybe a certain amount of reserve capacity they have to hold back. And that's what you're referring to there.
So the idea that maybe different cloud data centers have a bunch of spare capacity, and that's what they'd be using to absorb everything, right? So, okay. Well, we never actually talked about latency constraints. Tammy Sukprasert: As well, right. So let's say, for example, a web request: you need some service level objective, or SLO, to be respected, right? And so we look into that as well. So the scenario gets more and more realistic, right? From 96%, you add a capacity constraint, and now the saving drops from 96% to 51%. And we also look into a more realistic case where we think about web requests that have some latency constraint, where some service level objective has to be respected. And so on top of the capacity constraints, where we achieve 51%, we added a 50 millisecond latency constraint, and that further reduced the carbon savings to 31%. So in the real-life scenario, we are really far from the 96% that we want to aim for, right. Chris Adams: So if I understand that correctly, the speed of light is fast, but it's not infinite, and therefore there are certain parts of the world where you definitely need to get a response back in time. That's why you've introduced this 50 millisecond budget: your ping, your request, has to come back within that time budget. And that places a second constraint. So even with these two constraints, this is essentially saying: these are the carbon emissions that can be reduced by moving things to the various regions that are available, based on the capacity of all these other places, like Sweden and then the next cleanest one and the next cleanest one. That's what you're referring to there. All right. Okay. I think I understand that part. And honestly, 31 percent still sounds pretty good.
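To make the capacity and latency constraints concrete, here is a toy greedy placement in Python. All the region names, numbers, and the `place` helper are invented for illustration; the paper's actual simulation is far more detailed.

```python
import copy

# Hypothetical regions: carbon intensity (gCO2/kWh), spare job slots,
# and round-trip latency (ms) from the requesting user. Numbers invented.
REGIONS = {
    "sweden":  {"intensity": 30,  "capacity": 2, "latency_ms": 80},
    "denmark": {"intensity": 120, "capacity": 3, "latency_ms": 40},
    "germany": {"intensity": 450, "capacity": 5, "latency_ms": 10},
}

def place(jobs, regions, latency_budget_ms=None):
    """Greedily place each job on the cleanest region that still has spare
    capacity and, if a budget is given, meets the latency constraint."""
    placements = []
    for _ in range(jobs):
        feasible = [
            (info["intensity"], name)
            for name, info in regions.items()
            if info["capacity"] > 0
            and (latency_budget_ms is None
                 or info["latency_ms"] <= latency_budget_ms)
        ]
        if not feasible:
            break  # out of capacity: job stays unplaced
        _, best = min(feasible)
        regions[best]["capacity"] -= 1
        placements.append(best)
    return placements

# A delay-tolerant batch job can chase the cleanest grid; a 50 ms web
# request has to settle for the cleanest *nearby* region instead.
print(place(1, copy.deepcopy(REGIONS)))                        # ['sweden']
print(place(1, copy.deepcopy(REGIONS), latency_budget_ms=50))  # ['denmark']
# Capacity spillover: once the cleanest region fills up, jobs overflow
# into progressively dirtier regions, which is the 96% -> 51% effect.
print(place(4, copy.deepcopy(REGIONS)))  # ['sweden', 'sweden', 'denmark', 'denmark']
```

Each added constraint shrinks the feasible set, so the average intensity of the chosen regions can only go up, which is exactly the widening gap Tammy describes next.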
But if we look at the figures, for what, 2%, if we're looking at maybe a hundred million tons of CO2 each year, 30 percent of that is 30 million tons. That's not bad. That's more than Google, for example. So, okay. That is interesting, then. So this is one of the high-level findings, assuming you could do this in these kinds of decreasingly idealized scenarios. And eventually we get to a point where, okay, this is actually something that you might plausibly try adopting, or might be advocating for in certain regions, for example. Tammy Sukprasert: Right. Yeah. The point that we're trying to make is that as you add more constraints, the gap between the ideal case of 96% and your achievable goal widens. So that's what we're trying to show in this paper. Chris Adams: Okay, cool. And when you're talking about the regions here, these are largely the regions that are inside ElectricityMaps. Was it the ElectricityMaps dataset, or was it the list of all the regions for the biggest cloud hyperscalers? I wasn't quite sure when we were looking at this, cause there's a list of them, right? Tammy Sukprasert: Right. So we used a dataset from ElectricityMaps. Shout out to ElectricityMaps, thank you for the dataset. The dataset has 123 regions worldwide, right? From that dataset, we filtered the regions that overlap with cloud regions, and look exclusively at the results for the cloud regions. Chris Adams: Ah, I see. So you created a way to make these comparisons basically by saying, maybe there's one data center which we see in the cloud, like say Amazon AWS West, which a lot of people refer to as, like, Oregon West 1. And because we know that the carbon intensity dataset from ElectricityMaps says, yes, this is Oregon, you've been able to look at the numbers in that way, right?
That's what some of this is referring to. Tammy Sukprasert: Yeah, so we did a mapping between the ElectricityMaps data and the locations of the cloud regions. Chris Adams: Okay. All right. And when we're looking at those numbers, you mentioned this figure of 96%. Was that looking at just location, or was that looking at anything to do with time as well? Because I wasn't quite sure about that part. Tammy Sukprasert: So the 96 percent is just spatial shifting. We have a separate result for temporal shifting, where every region in the world can schedule their workload based on one-year-ahead data. So everyone in the world can schedule their workload if they know about... Chris Adams: Perfect forward knowledge. Yeah. Tammy Sukprasert: Yeah, perfect knowledge for one year ahead. And with that, we look at the extreme case, the most ideal case, where the workload is a unit job, a one-hour job, to understand the best-case scenario for temporal shifting, right? So with that one-hour job, with perfect knowledge of one year, we can reduce the carbon emissions of the whole world by 37%. Chris Adams: That's just temporal, not looking at location as well, right? Tammy Sukprasert: Yes. So we have the results for temporal shifting: if we give every region perfect knowledge of their carbon intensity a year ahead to plan their workload, what is going to be the best scheduling scenario? With everyone having perfect knowledge for a year, you can reduce the carbon emissions of the whole world by 37%. Chris Adams: Ah, okay. So you're looking at around maybe 30 percent when looking at purely locational, and then looking at purely time, it's relatively similar, basically. But these are relying on a kind of visibility that people don't really have a lot of the time. But, okay.
So the next question I'd ask is, is it possible to look at time and space together, to get an idea of what the savings might be from that? Tammy Sukprasert: Yeah. So we also look into that in our paper. If you look at spatial and temporal shifting combined, the result actually shows that spatial shifting dominates the carbon reduction. This is simply because when you move the workload to the lowest-carbon region possible in your dataset to achieve the savings, that region is already low in carbon intensity, so time shifting doesn't make much of a difference. Chris Adams: Ah, I see. Okay. So basically, the clean regions tend to be clean most of the time anyway, rather than spiking up and down, for example. That's what it seems like you're suggesting, right? Tammy Sukprasert: Right. It still varies, but the variation between the high carbon period and low carbon period is relatively small. Chris Adams: Okay, well, that kind of makes sense. I didn't really think about it until you framed it that way, but Iceland is usually green because it's running on geothermal, which is pretty steady. And even when you look at, say, Sweden, there's wind and everything like that, but there's lots of hydro and stuff like that. So again, it's not nearly as spiky as, say, Germany, where we're the land of coal and solar. We have lots of coal, which is high carbon intensity, and lots of solar, which is very low intensity. And flicking back and forth between these things means that we might have big swings, but on average, it's not particularly low compared to Iceland or Sweden, for example. Huh. Tammy Sukprasert: Correct. Yeah. Chris Adams: Oh, right. Wow. In retrospect, it kind of seems obvious, but things are only obvious when you look at them like that.
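The flat-versus-spiky point can be illustrated with two invented 24-hour intensity profiles. The numbers below are made up for the sketch, not measurements of any real grid.

```python
# Two invented hourly intensity profiles (gCO2/kWh): a flat hydro/geothermal
# grid versus a spiky coal-plus-solar grid, like the ones discussed above.
flat = [30 + (hour % 3) for hour in range(24)]                   # 30-32 all day
spiky = [200 if 6 <= hour <= 18 else 700 for hour in range(24)]  # solar by day

def temporal_saving(profile):
    """Fraction of carbon saved by moving a one-hour job from an average
    hour of the day to the cleanest hour of the day."""
    average = sum(profile) / len(profile)
    return 1 - min(profile) / average

print(f"flat grid:  {temporal_saving(flat):.0%}")   # little to gain
print(f"spiky grid: {temporal_saving(spiky):.0%}")  # time shifting pays off
```

On the flat profile, time shifting saves only a few percent, so once spatial shifting has already moved a job to a steady, clean grid, adding temporal shifting on top contributes almost nothing, which is the dominance effect Tammy describes.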
And one thing you shared with me before we spoke was that, if people wanted to explore some of these calculations, is this online somewhere? Is there a GitHub repo or something where you can poke around at some of these things? Tammy Sukprasert: Yeah. All the simulations in this paper are open source. So please check my lab's website, my lab's GitHub, for the simulations. Chris Adams: Okay, cool. I think I've got the link here. So there's literally a repo called decarbonization potential. That's the one you're referring to, right? On GitHub. Tammy Sukprasert: Yes, that's correct. Chris Adams: Brilliant. Okay. We'll definitely add that in the show notes, so for people who want to explore this themselves, it's right there. Okay. So that was one of the first pieces of research: essentially, there are some savings that can be made. It's around the 30 percent mark in a kind of perfect world with location, and about the same with temporal. And if I understood it correctly, combining the two doesn't deliver massively more savings than that, right? It's still never more than half, this kind of intervention that you could possibly make, right? Tammy Sukprasert: Right, yeah, combining the two doesn't give you double the benefits, because the benefits are dominated by the spatial migration, not so much the temporal, when you combine them together. Chris Adams: Okay. Thank you. I'm really glad you spoke about this, because we now have some of the numbers to talk about the fact that we still need to do other things. You can't just leave your code alone and make no changes; that might get you some of the way. If you're looking at temporal shifting, it'll get you 37 percent of the way in a perfect world, but you still need to make some other changes if you want to reduce the environmental footprint further.
Brilliant. Okay. Thank you for that. So we talked about some of the savings in your previous paper, the fact that it's maybe around the 30 percent figure: if you can move everything through space, you get around 30-ish percent savings, and if you have perfect forward knowledge for the year, it's maybe slightly higher than 30 percent, but in the same kind of ballpark. And if you were to move all of your computing jobs through both time and space, you can't just double this number; it's still going to be meaningful, more than 30 percent, but probably less than 50 percent. So that's one of the figures that we have. We'll share a link to the GitHub repo for people who are curious about this and want to see, if they know what jobs they ran last year, what kind of savings they could have achieved. So that's one thing. And we've spoken so far about some constraints, but there are a few more we need to take into account. So far, we've been talking about how many spare servers we have, data center capacity. But there are other constraints a little bit further down the stack, as it were. There may be a limited amount of green energy, at which point, when you have more demand than that, you might need some other forms of generation to come on stream. And this is something that I think you explored in one of your other papers. So maybe we could talk about that: let us know the name, and then we'll see where we go from there. Tammy Sukprasert: Right, so this paper, titled On the Implications of Choosing Average versus Marginal Carbon Intensity Signals on Carbon-Aware Optimizations, is basically average versus marginal for carbon-aware optimizations, right.
So this paper came from the fact that people have been suggesting, let's shift the workload through time, let's shift the workload to different locations, but we never actually agreed on which carbon intensity signal to use for carbon-aware optimization. So as the title suggests, there are two types of carbon intensity signals that are mainly used, namely the average carbon intensity signal and the marginal carbon intensity signal. For the average carbon intensity signal, just think of it as a snapshot of the grid at that point in time, right? And the way it's calculated is as the average of the generators' carbon intensities, weighted by their production. Chris Adams: Okay. So let me just check, to make sure I'm keeping up with you. There are two ways you can measure carbon intensity, how green electricity is. And this first one, the average one, is basically saying: well, I've got maybe two coal-fired power generators and one wind farm, so I'll apply double the weighting to the coal versus the wind farm. That's a simplified version, but that's essentially how you work out an average figure, right? Tammy Sukprasert: Right, but the marginal carbon intensity signal is different. The way it's calculated is the carbon intensity with respect to the change in demand. So let's say, as you just said, you have two coal plants and one wind farm, but the next unit of demand is going to be served by a gas generator. Then the marginal carbon intensity signal is the carbon intensity of that gas generator. Chris Adams: I see. Okay. So rather than looking at the average, it's almost like the consequences of me doing a particular thing. That's what we're looking at there, right? Tammy Sukprasert: That's correct. Chris Adams: Okay. And now we've got this. I hope, if you're listening and you're struggling, this is really hard. So, thank you for staying with us so far.
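Here is a tiny numeric sketch of the difference, reusing Chris's two-coal-plants-and-a-wind-farm example. All plant sizes and intensity figures are invented for illustration.

```python
# Invented grid snapshot: (source, generation in MW, intensity in gCO2/kWh).
mix = [("coal", 800, 820), ("coal", 800, 820), ("wind", 800, 10)]

def average_intensity(mix):
    """Generation-weighted average intensity of everything running right now."""
    total_mw = sum(mw for _, mw, _ in mix)
    return sum(mw * intensity for _, mw, intensity in mix) / total_mw

# The marginal signal instead asks which plant would serve the *next* unit
# of demand. Here we simply assume a gas plant sits on the margin.
marginal = 490  # gCO2/kWh, the invented intensity of the marginal gas plant

print(average_intensity(mix))  # 550.0: coal gets double the weighting of wind
print(marginal)                # 490: what one extra kWh of demand would cost
```

The two numbers answer different questions: the average describes the grid as it stands, while the marginal describes the consequence of adding load, which is why a scheduler following one signal can look wrong from the other's perspective.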
So this is what we were looking into. And, as I understand it, this incentivizes different actions: depending on the signal, you might choose to move things to a different region, or choose to run a computing job at a different time. That's been my understanding. Is this what you looked into then?

Tammy Sukprasert: Right. So the paper looks into the fact that if you follow one signal as a scheduling signal, you might end up with more carbon emissions from the perspective of the other signal. Yeah. So it turns out you cannot just follow one signal and hope that you will also do well from the other signal's perspective.

Chris Adams: Ah, okay. All right. So this adds another layer of complexity then. So if I understand it, I could be following one signal, and that gives me some idea here, but there are certain places where they can be different. Some places might be the same, but there are certain parts of the world where I might have quite radically different signals between these two. That's what I think I'm hearing.

Tammy Sukprasert: Right. Because the two carbon intensity signals are calculated so differently, within one region the signals are generally not correlated. So when you schedule for one signal, let's say, for example, I use the marginal carbon intensity signal as a scheduling signal and I place a workload in a low-carbon period based on marginal, then within the same time period someone else looking from the perspective of the average carbon intensity signal will be like, "Hey, I wouldn't place my workload here, because it's a high-carbon period right now." So it leads to some conflicting decision making.

Chris Adams: And presumably, when you were doing this research, were there particular parts of the world where you saw wild spreads between these two signals?
Like, there are some places where it's quite safe, right?

Tammy Sukprasert: So in the paper, we looked into Arizona and Virginia for this kind of conflicting scheduling. Arizona has a fluctuating average carbon intensity signal but a really flat marginal one, and vice versa for Virginia. So let's just take Arizona, for example. If you want to schedule based on the marginal carbon intensity signal, you wouldn't do anything, because it's flat. You can just place a workload wherever you want. But if you want to schedule the workload based on the average signal, you'll be like, I would place my workload at this particular time slot, because it has the lowest carbon intensity during the day.

Chris Adams: Ah, I see. Okay. So this suggests that you're going to need to be really explicit about which kind of signal you're following, and there are certain parts of the world where you're more exposed to the differences between them. That's what I think I'm hearing there. Wow, that sounds, yeah... Sustainability in software does not get easy. Okay. So that's one of the things we were looking at here. And it sounds like you've spent quite a lot of time looking into this whole field. So presumably, when people are taking their first steps in trying to work out the environmental impact of software, is there an order of things you might start with? Because this feels like relatively advanced, complicated calculation, and is it possible to look at the environmental impact of software without this straight away? Like, can you add this a little bit later, perhaps?
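The Arizona-style conflict Tammy describes, a flat marginal signal alongside a fluctuating average one, can be made concrete with a toy scheduler. The hourly numbers below are invented to mimic that shape; they are not the paper's actual data.

```python
# Hourly carbon intensity over a toy 6-hour window (gCO2/kWh).
# Invented numbers mimicking the Arizona-style case: the marginal
# signal is flat while the average signal fluctuates during the day.
average_signal  = [700, 400, 300, 500, 650, 720]
marginal_signal = [450, 450, 450, 450, 450, 450]

def best_hour(signal):
    """Carbon-aware scheduling: pick the hour with the lowest intensity."""
    return min(range(len(signal)), key=lambda h: signal[h])

avg_choice = best_hour(average_signal)   # the average signal's clear minimum
mgl_choice = best_hour(marginal_signal)  # flat signal: every hour ties,
                                         # so min() just returns the first

print(avg_choice, mgl_choice)  # 2 0
```

A scheduler following the marginal signal here has no reason to move the job at all, while one following the average signal strongly prefers hour 2, which is exactly the kind of conflicting decision making discussed above.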
Maybe there are some rules of thumb or approaches you might suggest, as a researcher who has looked into this and tried to understand the environmental footprint of some software. You might say, "well, okay, you might want to just look at the total amount of energy used, or the total amount of resources used, first, before you look at this carbon-aware stuff. And if you can look at carbon-aware, then maybe look at location first," or something like that. Because this feels exciting, but it also feels like it gets complicated very, very quickly.

Tammy Sukprasert: So when I started working on carbon intensity signals, I found that the average carbon intensity signal is easier to understand, simply because you just look at the overall picture of the grid and you take the weighted average of the energy sources, right? But marginal carbon intensity was an interesting concept for me. You look into the carbon emissions based on the change in demand, but I was having a hard time understanding this, because in a practical sense, I feel it's going to be challenging to understand which power plant is actually serving my compute workload. It's not transparent enough.

Chris Adams: I see. So there's almost like a counterfactual you're comparing it against. I think we spoke about this, like there's a power stack, right? Yes, I've stopped pulling power from the grid, for example, but how do I know that no one else has started pulling power from the grid at the same time? Is that what you're getting at there?

Tammy Sukprasert: Right. For me, the concept of marginal carbon intensity is actually good: you're responsible for the carbon emissions that you triggered, right? But in reality, you don't know which power source is serving your demand, and whether next time it will be served by the same source.
So for example, I plug in my laptop, and maybe my laptop is powered by coal. But let's say, Chris, you unplug your laptop, right? Now the demand decreases. Is my laptop's power still fulfilled by coal? I don't have that visibility. So...

Chris Adams: Ah, I see. Okay. All right, that makes a bit more sense. And I think I follow the reasoning behind why you might start with one before starting with the other, because I think I agree with you on that. I found the average a bit easier to get my head around too. And marginal does sound really cool, but I don't think I'd be very confident explaining it to other people. My experience seems to echo yours, actually. I'm glad you said that, because I did wonder if it was just me, and that does make it a bit easier for me too. I feel a bit better about myself now, actually. Thanks for that, Tammy. Okay. So this has basically been your day job for the last few months, diving into the world of carbon signals and things like that. Is this some of the continued research you're doing, or are you looking into other fields now, beyond working out the potential of carbon-aware computing?

Tammy Sukprasert: So I'm still working on carbon-aware computing. Currently I'm working on a web service that harnesses renewable energy, and I have to think about how we should handle the workload when there is no renewable energy available.

Chris Adams: Okay. All right. So one thing this does seem to suggest is that if we're just looking at carbon here, that's not showing us the whole picture. And even when we just look at carbon, we can end up with difficult or conflicting signals.
So it may be that we, as software engineers, need to expand the way we think about the next layer down, and ask: are there other things we should take into account beyond just looking at marginal or average? Maybe there's another way of thinking about the grid, and how our interactions as software engineers work with it, and the impact that can have.

Tammy Sukprasert: Right. So I think we need to move beyond a static signal, and instead maybe look into other characteristics to take into consideration when doing carbon-aware optimization. Maybe in future we would agree on some other signal that captures both the long-term impact of the grid, like the average carbon intensity signal, and the instantaneous change in carbon intensity, like the marginal one. So yeah, apart from optimizing for carbon efficiency, as a community I think everyone should keep in mind that we need a better metric to capture these carbon emissions.

Chris Adams: Okay. Thank you for that, Tammy. This was a ride for me. Every single time I come to trying to understand the environmental footprint of software, just when I think I understand it, there's a whole other layer to it. And you've really opened my eyes. Tammy, if people are interested in this field, are there any other projects or work that you've read about recently that you'd like to draw people's attention to?

Tammy Sukprasert: Yeah, I think you should look at Carbon Scaler. I think that's one of the things I'd

Chris Adams: Oh.

Tammy Sukprasert: recommend people to check out.

Chris Adams: Okay, we'll have to share a link to that, because that's totally new to me. I'm not aware of that one, actually.

Tammy Sukprasert: So yeah, it's a system that reacts to the available carbon intensity, and you scale the workload based on that. So you don't have to shift the workload.

Chris Adams: Okay. All right.
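The idea Tammy attributes to Carbon Scaler, scaling the amount of work up or down with the grid's carbon intensity instead of moving work in time or space, can be sketched as a simple scaling policy. The thresholds and replica counts here are hypothetical illustrations, not Carbon Scaler's actual algorithm.

```python
def target_replicas(carbon_intensity, max_replicas=10, min_replicas=1,
                    low=200, high=600):
    """Scale a throttleable batch workload inversely with grid carbon
    intensity, instead of shifting it in time or space.

    Below `low` gCO2/kWh run at full capacity; above `high`, throttle
    to the minimum; in between, interpolate linearly. All thresholds
    are illustrative, not Carbon Scaler's real policy.
    """
    if carbon_intensity <= low:
        return max_replicas
    if carbon_intensity >= high:
        return min_replicas
    frac = (high - carbon_intensity) / (high - low)
    return max(min_replicas,
               round(min_replicas + frac * (max_replicas - min_replicas)))

print(target_replicas(150))  # 10: clean grid, run at full speed
print(target_replicas(300))  # 8: moderately clean, partial capacity
print(target_replicas(700))  # 1: dirty grid, throttle right down
```

The appeal of this approach is that the job never has to leave its region or be deferred wholesale; it just slows down when the grid is dirty and catches up when it is clean.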
And if people want to find out more about the work that you're doing, where should they look? Is there a website, or are you on LinkedIn? What's the best place to direct people's attention if they want to follow up and read some of the work that you've been publishing and talking about here today?

Tammy Sukprasert: Yeah, so I'm on LinkedIn. You can search my name, Tammy Sukprasert, or T. Sukprasert, for the link, yeah.

Chris Adams: Brilliant. All right. Well, Tammy, thank you so much for giving us some of your time and sharing what you've learned. It's been absolutely fascinating, and we finally have some numbers about what we can achieve with carbon-aware computing. At least we have some numbers now to work with. So thank you once again, and I hope you have a lovely week. Cheers, Tammy.

Tammy Sukprasert: Chris, cheers.

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode!
