
Environment Variables


Apr 24, 2025 • 35min

The Economics of AI

Chris Adams sits down in person with Max Schulze, founder of the Sustainable Digital Infrastructure Alliance (SDIA), to explore the economics of AI, digital infrastructure, and green software. They unpack the EU's Energy Efficiency Directive and its implications for data centers, the importance of measuring and reporting digital resource use, and why current conversations around AI and cloud infrastructure often miss the mark without reliable data. Max also introduces the concept of "digital resources" as a clearer way to understand and allocate environmental impact in cloud computing. The conversation highlights the need for public, transparent reporting to drive better policy and purchasing decisions in digital sustainability.

Learn more about our people:
Chris Adams: LinkedIn | GitHub | Website
Max Schulze: LinkedIn | Website

Find out more about the GSF:
The Green Software Foundation Website
Sign up to the Green Software Foundation Newsletter

Resources:
Energy Efficiency Directive [02:02]
German Datacenter Association [13:47]
Real Time Cloud | Green Software Foundation [22:10]
Sustainable Digital Infrastructure Alliance [33:04]
Shaping a Responsible Digital Future | Leitmotiv [33:12]

If you enjoyed this episode then please either:
Follow, rate, and review on Apple Podcasts
Follow and rate on Spotify
Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Max Schulze: The measurement piece is key. Having transparency and understanding always helps. What gets measured gets fixed. It's very simple, but the step that comes after that, I think we're currently jumping the gun on that because we haven't measured a lot of stuff.

Chris Adams: Hello and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect
candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams.

Hello and welcome to another edition of Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. We're doing something a bit different today, because a friend and frequent guest of the pod, Max Schulze, is actually turning up in Berlin in person, where I'm recording today. So I figured it'd be nice to catch up with Max, see what he's up to, and, yeah, just catch up really. So Max, we've been on this podcast a few times together, but not everyone has listened to every single word we've ever shared. So maybe if I give you some space to introduce yourself, I'll do the same myself, and then we'll move on from there. Okay. Sounds good. All right then, Max, what brings you here? Can you introduce yourself today?

Max Schulze: Yeah. I think the first question: why am I in Berlin? There's a lot going on in Europe in terms of policies around tech. In the EU, there's the Cloud and AI Development Act. There's a lot of questions now about datacenters, and I think you and I can both be very grateful for the invention of AI, because everything we ever talked about, now everybody's talking about 10x, which is quite nice. Like, everybody's thinking about it now. Yep. My general introduction: my name is Max. For everybody who doesn't know me, I'm the founder of the SDIA, the Sustainable Digital Infrastructure Alliance. And in the past we've done a lot of research on software, on datacenters, on energy use, on efficiency, on philosophical questions around sustainability. I think the outcome that we generated that is probably the most well known is the Energy Efficiency Directive, which is forcing datacenters in Europe to be more transparent now. Unfortunately, the data will not be public, which is a loss.
But at least a lot of digital infrastructure now needs to, yeah, be more transparent on their resource use. And the other thing that I think we got quite well known for is our explanation model: the way we think about the connection between infrastructure, digital resources, which is a term that we came up with, and how that all interrelates to software. Because there's this misconception that we are building datacenters for the sake of datacenters. But we are, of course, building them in response to software, and software needs resources. And these resources need to be made somewhere.

Chris Adams: Ah, I see.

Max Schulze: And that's, I think, what we were well known for.

Chris Adams: Okay. Those two things I might jump into a little bit later on in a bit more detail. So, if you're new to this podcast, my name is Chris Adams. I am the policy chair in the Green Software Foundation's Policy Working Group, and I'm also the director of technology and policy in the confusingly but similarly named Green Web Foundation. Alright. Max, you spoke about two things that, if I can, I'd like to dive into in a little bit more detail. So, first of all, you spoke about this law called the Energy Efficiency Directive, which, as I understand it, is essentially intended to compel every datacenter above a certain size to start recording information, in many ways sustainability-adjacent information, with the idea being that it should be published eventually. Could we just talk a little bit about that first, and maybe some of your role there, and then we'll talk a little bit about the digital resource thing that you mentioned.

Max Schulze: Yeah. On the Energy Efficiency Directive, even one step up: Europe has this ambition to conserve resources at every point, and critical raw materials are now also part of that. Normally, a law like this actually sets thresholds. Like, it is supposed to say, "a building shall not consume more power than X."
And with datacenters, what they realized is, actually, we can't set those thresholds, because we don't know reliably how many resources you have consumed. So we can't say "this should be the limit." Therefore, the first step was to say, well, first of all, everybody needs to report into a register. And what's interesting about that is it's not just the number that in datacenter land everybody likes to talk about, which is PUE, power usage effectiveness, so how much overhead do I generate with cooling and other things on top of the IT. For the first time it also has water in there. It has IT utilization ranges in there. It even has, which I think is very funny, the amount of traffic that goes in and out of a datacenter, which is a bit like, I don't know what we're trying to measure with this, but you know, sometimes you gotta leave the funny things in there to humor everybody. And it goes really far in terms of metrics, really trying to see what resources go into a datacenter, how efficiently they are being used, and to a certain degree also what comes out of it. Maybe traffic. Yeah.

Chris Adams: Ah, I see. Okay. Alright, so it's essentially trying to bring the datacenter industry in line with some other sectors, where they already have this notion of, okay, we know they should be this efficient. And we've had a lack of information in the datacenter industry, which made it difficult to do that. Now, I'm speaking to you in Berlin, and I don't normally sound like I'm in Berlin, but I am in Berlin, and you definitely sound like you are from Germany, even though you're not necessarily living in Germany.

Max Schulze: I'm German.

Chris Adams: Oh yeah.
Maybe it might be worth just briefly touching on how this law manifests in various countries. Because, and this might be a bit inside baseball, I've learned from you that Germany was one of the countries that was really pushing quite hard for this energy efficiency law in the first place, and they were one of the first countries to actually write it into their own national law. Maybe we could touch a little bit on that before we start talking about the world of digital resources and things like that.

Max Schulze: Yeah. Even funnier, and this is how you always know in Europe that a certain country is really interested in something: they actually implemented it before the directive was even finalized. For everybody who doesn't know European policy: the EU makes directives, and then every country actually has to, it's called, transpose it into national law. So just because the EU, it's a very confusing thing, makes something, doesn't mean it's law. It just means that the countries should now implement it, but they don't have to, and they can still change it. So what Germany, for example, did: in the directive it's not mandatory to have heat recovery, that is, reusing the waste heat that comes out of the datacenter, and the EU also did not set thresholds. But of course Germany was like, "no, we have to be harsher than this." So they actually said, for datacenters above a certain size, they need to be powered by renewable energy, and heat recovery is mandatory above a certain size. And of course the industry is not pleased. So I think we will see a revision of this, but it was a very ambitious, very strong, "let's manage how they build these things."

Chris Adams: I see. Okay. There is a, I think, is there a German phrase? Trust is nice, control is better. Yes. Well, something like that. Yeah. Yeah. Okay. All right.
So if I just put my programmer hat on: when I think of a directive, it's a little bit like maybe an abstract class, right? Yes. And then if I'm Germany, I'm making it concrete, I've implemented that class in my German law, basically. Yes.

Max Schulze: Interfaces and implementations. Okay.

Chris Adams: Alright. You've explained it into nerd for me. That makes a bit more sense. Thank you for that. Alright, so that's the EED. You essentially were there to, to use another German phrase, watch the sausage get made. Yeah. So you've seen how that's turned out, and now we have a law in Germany where essentially you've got datacenters regulated in a meaningful way for the first time, for example. Yeah. And we're dealing with all the kind of fallout from all that. And we also spoke a little bit about this idea of digital resources. This is one other thing that you've spent quite a lot of intellectual effort and time on, helping people develop some of this language, which we've used ourselves in some of our own reports when we talk to policy makers or people who don't build datacenters themselves. 'Cause a lot of the time people don't necessarily know how a datacenter relates to software, and how that relates to, maybe, them using a smartphone. Maybe you could talk a little about what a digital resource is in this context and why it's even useful to have this language.

Max Schulze: Yeah, and let me try to also connect it to the conversation about the EED. I think when, as a developer, you hear transparency and okay, they have to report data, what you're thinking is, "oh, they're gonna have an API where I can pull this information, also, let's say, from the inside of the datacenter." Now in Germany, and this is also funny for everybody listening, one way to fulfill that, because the law was not specific, is that datacenters are now hanging a piece of paper, I'm not kidding, on their fence with this information, right?
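Chris's directive-as-abstract-class analogy from a moment ago maps neatly onto code. A purely illustrative sketch, where every class, method, and field name is invented for this example rather than taken from the actual legal texts:

```python
from abc import ABC, abstractmethod

# The EU directive: an "abstract class" that names the obligations
# but leaves the concrete rules to each member state.
class EnergyEfficiencyDirective(ABC):
    @abstractmethod
    def report_to_register(self, datacenter: str) -> dict:
        """Each member state must implement some form of reporting."""

# Germany's transposition: a concrete implementation that, as Max
# describes, goes beyond the directive's minimum requirements.
class GermanEnergyEfficiencyAct(EnergyEfficiencyDirective):
    def report_to_register(self, datacenter: str) -> dict:
        return {
            "datacenter": datacenter,
            "pue_reported": True,
            "heat_recovery_required": True,    # stricter than the directive
            "renewable_power_required": True,  # stricter than the directive
        }

law = GermanEnergyEfficiencyAct()
print(law.report_to_register("example-dc-berlin"))
```

As Max puts it, "interfaces and implementations": the directive cannot be instantiated on its own; only a national transposition can.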
So this is them reporting this. And of course, I'm also a software engineer, so we as technical people, what we need is for the datacenter to have an API that basically assigns the environmental impact of the entire datacenter to something. And that something has always bothered me, that we say, oh, it's the server, or it's the, I don't know, the rack or the cluster. But ultimately, what does software consume? Software consumes basically three things. We call it compute, network, and storage, but in more philosophical terms, it's the ability to store, process, and transfer data. And that is the resource that software consumes. Software does not consume a datacenter or a server. It consumes these three things. And a server makes those things: it actually turns energy and a lot of raw materials into digital resources. The datacenter, in turn, provides the shell in which the server can perform that function. Right? The factory building is the datacenter. The machine that makes the t-shirts is the server. And the t-shirt is what people wear. Right?

Chris Adams: Ah, I see. Okay. So that actually helps when I think about, say, cloud computing. Like, when I'm purchasing cloud computing, I'm paying for compute. I'm not really that bothered about whether it's an Intel server or something like that. And to a degree, a lot of that is abstracted away from me anyway. There's good sides to that and downsides to that. But essentially that seems to be that idea of, like, cloud compute, and there being, maybe for want of a better term, primitives you build services with. That's essentially some of the language that you've been repurposing for people who aren't cloud engineers, essentially, to understand how modern software gets built these days. Right.

Max Schulze: And I think that's also the real innovation of cloud, right? You gotta give them credit for that. They disaggregated these things.
When AWS was first launched, it was S3 for storage, EC2 for compute, and VPC for networks, right? So they basically said, whatever you need, we will give it to you at scale, in infinite pools of however much you need and want, and you pay for it only by the hour. Whereas before, you had to rent a server, and the server always came with everything. It came with network, it came with storage, and you had to build the disaggregation yourself. But as a developer, fundamentally, sometimes you just want compute. Now we have LLMs. I definitely just want compute. Then you realize, oh, I also need a lot of storage to train an LLM. Then you want some more storage. And then you're like, okay, well, I need a massive network inside that. And you can buy each of these pieces by itself because of cloud. That is really what it is about.

Chris Adams: Oh, I see. Okay. And this is why it can be a bit difficult when you're trying to work out the environmental footprint of something. Because if we are trying to measure, say, a server, but the resources are actually cloud, and there's all these different ways you can provide that cloud, then obviously it's gonna be complicated when you try to measure this stuff.

Max Schulze: Yeah. Think about a gigabyte of storage on S3. There may be hundreds of servers behind it, providing redundancy, providing the control layer, doing monitoring, right? In a way, that gigabyte of storage is not like a disc inside a server somewhere. It is a system that enables that gigabyte. And trying to say the gigabyte needs to come from somewhere is the much more interesting conversation than going from the server up. Ah. It's misleading otherwise.

Chris Adams: Alright. Okay. So I'm gonna try and use an analogy from, say, the energy sector, just to kinda help me understand this, because I think there's quite a few key ideas inside this.
So in the same way that I'm buying maybe units of electricity, like kilowatt hours, I'm not really buying an entire power station, or even a small generator, when I'm paying for something. There's all these different ways it can be provided, but really what I care about is the resources. And this is the kind of key thing that you've been explaining to policy makers, or people who are trying to understand how they should be thinking about datacenters and what they're good for, right? Yes. Okay. Alright, cool. So you are in Berlin and it's surprisingly sunny today, which is really nice. We've made it through the kind of depressing German winter, and we've actually crossed paths quite a few times in the last few weeks, because you've been bouncing between where you live in Haarlem, Netherlands, and Brussels and Berlin quite a lot. And I like trains, and I imagine you like trains, but that's not the only reason you are zipping around here. Are there any projects related to digital sustainability that have been taking up your time, that you're allowed to talk about these days?

Max Schulze: Yeah, there's a lot. There's too many, actually, which is a bit overwhelming. We are doing a lot of work still on software, also related to AI, and I don't think it's so interesting to go into that. I think everybody from this podcast knows that there's an environmental impact. We now have a lot of tools to measure it, so my work is really focused on how do I get policy makers to act. And one project that just recently came out, and now that the elections are over in Germany we can also talk about it: we basically wrote a 200-page monster, call it the German datacenter, not a strategy yet, it's an assessment. And there's a lot of, like, how much power are they gonna use? That's not from us. But what we were able to do for the first time is to really explain the layers.
So there's a lot of misconception, say, that building a datacenter creates jobs. But I think everybody in software knows, and actually all of you should be more offended when datacenters claim that they are creating jobs, because it is always the software that runs there that is actually creating the benefit, right? A datacenter building is just an empty building. And what we've been able to explain is to really say, okay, I build a datacenter, then there is somebody bringing servers and running IT infrastructure, maybe a hoster. That hoster in turn provides services to, let's say, an agency. That agency creates a website. And that's a really complex system of actors that each add value. And what we've shown is that a datacenter, per megawatt, depending on who's building it, can be three to six jobs. And a megawatt is already a very large datacenter; it can be 10,000 servers. If you compare that to the people on top, like if you go to that agency, that can go up to 300 to 600 jobs per megawatt. And the value creation is really in the software and not anywhere else. And we believe that the German government, and all sorts of regions, and this applies to any region around the world, should really think like, "okay, I will build this datacenter, but how do I create that ecosystem around it?" You know, Amsterdam is always a good example. You have Adyen, you have booking.com, you have really big tech companies, and you're like, "I'm sure they're using a Dutch datacenter." Of course not. They're running on AWS in Ireland. So you don't get the ecosystem benefit. But your policy makers think they do; you don't connect the dots, so to say.

Chris Adams: Ah, okay. So if I understand this: the federal German government, third or fourth largest economy in the world. Yes. They need to figure out what to do with the fact there's lots and lots of demand for digital infrastructure.
They're not quite sure what to do with it, and they also know they have binding climate goals. So they're trying to work out how to square that circle. And most countries right now do wanna have some notion of being able to economically grow. So they're trying to understand, okay, what role do these play? And a lot of the time there has been a bit of a misunderstanding between what the datacenter provides and where the jobs actually come from. And so you've essentially done, for the first time, some really quite rigorous and open research into, "okay, how are jobs and economic opportunity created when you do this? And what happens if you have the datacenter in one place, but the jobs, the agencies, or the startups in another place?" Because there seems to be this idea that if you just have a datacenter, you automatically get all the startups and all the jobs and everything in the same place. And that sounds like it might not always be the case without deliberate decisions, right?

Max Schulze: Yes. Without really designing it that way. And it becomes even more obvious when you look at hyperscale and cloud providers, where you see these massive companies with massive profits. Let's say they go to a region, they come to Berlin, and they tell Berlin, and Amazon in Spain actually sent out a really big press release like this, "we're gonna add 3% to your GDP. We're going to create millions of jobs." And of course every software engineer knows that just building a datacenter for a cloud provider does not do that. And what they're also trying to distract from, which we've shown in the report by going through their financial records, is that they pay property tax, so they pay local tax, which in Germany is very low, but they of course don't pay any corporate income tax in these regions.
So the region thinks, "oh, I'm gonna get 10% of the revenue that a company like Microsoft makes." That's not true. And in return, the company asks for energy infrastructure, which is a socialized cost, meaning taxpayers pay for it. They ask for land, which is not always available, or scarce. And then they don't really give much back. And I'm not saying we shouldn't build datacenters, but you have to be really mindful that you need the job creation. The tax creation is something that comes from above this, on top of the datacenter stack. Yeah. And you need to be deliberate in bringing that all together; everything else is just an illusion in that sense.

Chris Adams: Oh, I see. Okay. So this helps me understand why you place so much emphasis on helping people understand this whole stack of resources being created, and where some of the value might actually be. 'Cause it's a little bit like, let's imagine you're looking at, say, generating power, and you're opening a power station. Creating a power station by itself isn't necessarily the thing that generates the wealth; it's maybe people being able to use it in some of the higher services further up the stack, as it were. Correct. And that's the kind of framing that you're helping people understand, so they can have a more sophisticated way of thinking about the role that datacenters play when they advance their economies, for example.

Max Schulze: I love that you're using the energy analogy, because everybody who's hearing this on the podcast will probably be like, "oh yeah, that's obvious, right?" But for digital, to a lot of people, it's not so obvious. They think that the power station is the thing, but actually it's the chemical industry next to it; that's where the value is created.

Chris Adams: I see. Okay. Alright. That's actually quite helpful.
So one of the pieces of work you did was actually providing new ways to think about how digital infrastructure ends up being useful for, maybe, a country, for example. But one thing that I think you spoke about in some of this report was the role that software can actually play in blunting some of the expected growth in demand for electricity and things like that. And obviously that's gonna have climate implications, for example. Can we talk a little bit about how designing software in a more thoughtful way can blunt some of this expected growth, so we can actually hit some of the goals that we had? 'Cause this is something that I know you spend a fair amount of time thinking about and writing about as well.

Max Schulze: Yeah, I think it's really difficult. The measurement piece is key, but having transparency and understanding always helps. What gets measured gets fixed. It's very simple. But the step that comes after that, I think we're currently jumping the gun on that, because we haven't measured a lot of stuff. We don't have a public database of, say, this SAP system, this Zoom call, is using this much. We have very little data to work with, and we're immediately jumping to solutions like, oh, if we shift the workloads. But if we're, for example, workload shifting on cloud, unless a server is turned off, the impact is zero. Okay, zero is extreme, but it's very limited, because the cloud provider then has an incentive to fill it with some other workload. We've talked about this before: if everybody sells oil stocks because they're protesting against oil companies, it just means somebody else is gonna buy the oil stock, you know? And it ultimately brings the spot prices down. But that's a different conversation. So I think, let's not jump to that. Let's first get measurement really right.
And then, to me, it raises the question: what's the incentive for big software vendors, or companies using software, to actually measure and then also publish the results? Because, let's be honest, without public data we can't do scientific research, and even communities like the Green Software Foundation will have a hard time making reports or giving good analysis if we don't have publicly available data on certain software applications.

Chris Adams: I see. Okay. This does actually ring some bells, 'cause I remember when I was involved in some of the early things related to working out, say, software carbon intensity scores, we found that it's actually very difficult to just get the energy numbers from a lot of services, simply because, a lot of the time, if you're a company, you might not want to share this, 'cause you might consider it commercially sensitive information. There's a whole separate project called the Real Time Cloud project within the Green Software Foundation, where there's been some progress putting out, say, region-by-region figures for the carbon intensity of different places you might run cloud in, for example, and this is actually a step forward. But at best we're finding that we can maybe get the figures for the carbon intensity of the energy that's there; we don't actually have access to how much power is being used by a particular instance, for example. We're still struggling with this stuff, and this is one thing that we keep bumping up against. So I can see where you're coming from there. So, alright, this is one thing that you've been spending a bit of time thinking through: where do we go from here, then?

Max Schulze: Yeah, I think first we need to give ourselves a clap on the back, because if you look at the amount of tools that can now do measurement, commercial tools, open source tools, I think it's amazing, right? It's all there.
Dashboards, Prometheus things, reporting interfaces, you know, it's all there. Now, the next step, and as software people we like to skip that step because we think, well, everybody's now gonna do it. Well, that's not the reality. Now it's about incentives. And I think, for example, one organization we work with is called Seafit, and it's a conglomerate of government purchasers, IT purchasers, who say, "okay, we want to purchase sustainable software." And to me it's very difficult to say, and I think you have the same experience, here are the 400 things you should put in your contracts to make the software more sustainable. Instead, what we recommend is to simply say: well, please send me an annual report of all the environmental impacts created from my usage of your software. And the very important phrase we always put in at the end: please also publish it. Yeah. Right now, that's what we need to focus on. We need to focus on creating that incentive for somebody who's buying, even like Google Workspace, or, like, Notion, to really say, "hey, by the way, before I buy this, I want to see the report," right? I want to see the report for my workspace. And even for all the people listening to this: any service you use, any API you use commercially, send them just an email and say, "hey, I'm buying your product. I'm paying 50 euros a month, or 500, or 5,000 euros a month. Can I please get that report? Would you mind?" Yeah. And that creates a whole chain reaction of everybody in the company thinking, "oh my God, all our customers are asking for this. We need this. One of our largest accounts wants this figured out." And then they go to the Green Software Foundation, or go to all the open source tools. They learn about it, they implement measurement. Then they realize, "oh, our cloud providers are not giving us data." So then they're sending a letter to all the cloud providers saying, "guys, can you please provide us those numbers?"
Chris Adams: Yeah. Yes.

Max Schulze: And this is the chain reaction that all of us need to focus on and act now to trigger.

Chris Adams: Okay. So, when I first met you, you were looking at, say, how do you quantify this and how do you build some of these measurement tools? And I know there was a German project called, is it SoftAware? Which was very, you know, the German take on software awareness, that did try to figure these things out, to come up with some meaningful numbers. And now the thing it looks like you're spending some time thinking about is, okay, how do you get organizations with enough clout to essentially write in the level of disclosure that's needed for us to actually know if we're making progress or not? Right? Yeah.

Max Schulze: Correct. A little side anecdote on SoftAware: the report is also a 200-page piece. It's been finished for a year and it's not published yet, because it's still in review. So it's a bit of a pain. But fundamentally, what we concluded is, and there are other people that, while we were writing it, already built better tools than we have. And again, research-wise, this topic is, I don't wanna say solved, but all the knowledge is out there and it's totally possible. And that's also what we basically said in the report: if you can attach an environmental product declaration to the digital resource, if I can attach it to the gigabyte of S3 storage, whether that is highly redundant or less redundant, so how many physical resources went into it, how much energy went into it, how much water, then any developer building a software application can basically do that calculation themselves. If I use 400 gigabytes of storage, it's just 400 times whatever the environmental impact per gigabyte is. And that information is still not there. But it's not missing because we can't measure it. It's missing because people don't want to, like you said, they don't want to have that in public.

Chris Adams: Okay.
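Max's 400-gigabyte arithmetic is simple enough to sketch. A minimal illustration, where the per-gigabyte figures and field names are invented for the example, not real environmental product declaration data:

```python
# Hypothetical environmental product declaration (EPD) for one digital
# resource: a gigabyte of redundant object storage. All numbers are
# made up for illustration only.
epd_per_gb = {
    "energy_kwh": 0.012,      # energy allocated per GB
    "water_litres": 0.004,    # cooling water allocated per GB
    "raw_materials_g": 0.15,  # share of server materials per GB
}

def impact_of_usage(gigabytes: float, epd: dict) -> dict:
    """Max's point: with a per-unit EPD, the developer's calculation
    is just usage multiplied by the per-unit figure."""
    return {metric: gigabytes * value for metric, value in epd.items()}

# "If I use 400 gigabytes of storage, it's just 400 times
# whatever the environmental impact per gigabyte is."
print(impact_of_usage(400, epd_per_gb))
```

The point of the sketch is that once the per-unit declaration exists, the hard allocation work (hundreds of servers behind one gigabyte of S3) has already been done by the provider, and the developer's side is plain multiplication.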
So that's quite an interesting insight that you shared there. 'Cause when we first started looking at, I don't know, building digital services, there was a whole thing about saying, well, if my webpage is twice the size, it must have twice the carbon footprint. And there's been a whole debate saying, well, actually no, we shouldn't think about it that way, it doesn't scale that way. And it sounds like you're suggesting, yes, you can go down that route where you directly measure every single thing. But if you want to zoom out, to actually achieve some systemic level of change, the thing you might actually need is this kind of lower-level, per-primitive allocation of environmental footprint. Just say, well, if the thing I'm purchasing and building with is, say, gigabytes of storage, maybe I should just think of each gigabyte of storage as having this much impact, and therefore I should reduce that number, rather than worrying too much about whether, if I halve the numbers, it's going to be precisely a halving in emissions, because you're looking at a wider systemic level.

Max Schulze: First of all, I never talk about emissions, because that's already like a proxy. Again, I think if you take the example of the browser, what you just said, there it becomes very obvious. What you really want is HP, Apple, Dell, any laptop they sell, they say: you know, there's 32 gigs of memory; per gigabyte of memory, this is the environmental impact; per CPU cycle, this is the environmental impact. How easy would it be then to say, well, this browser is using 30% of the CPU, half of the memory, and then again assigning it to each tab. It becomes literally just a division and forwarding game, mathematically. But the scarcity, the fact that the vendors don't ultimately release it on that level, makes it incredibly painful for anyone to kind of reverse engineer and work backwards. Exactly.
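The "division and forwarding game" Max describes can be written out directly. A toy example, assuming (hypothetically) that a laptop vendor published per-CPU-second and per-gigabyte-of-memory impact figures; all names and numbers below are invented:

```python
# Invented vendor figures: environmental impact per unit of resource.
# In Max's ideal world these would come from the laptop maker's
# environmental product declaration.
IMPACT_PER_CPU_SECOND = 0.002  # impact units per CPU-second
IMPACT_PER_GB_MEMORY = 0.05    # impact units per GB of memory in use

def process_impact(cpu_seconds: float, memory_gb: float) -> float:
    """Division step: assign the machine's per-unit impact to one process."""
    return (cpu_seconds * IMPACT_PER_CPU_SECOND
            + memory_gb * IMPACT_PER_GB_MEMORY)

def forward_to_tabs(total: float, tab_shares: dict) -> dict:
    """Forwarding step: split the browser's impact across its tabs
    in proportion to each tab's share of resource usage."""
    return {tab: total * share for tab, share in tab_shares.items()}

# The browser uses 30% of the CPU over an hour and half of 32 GB memory.
browser_total = process_impact(cpu_seconds=0.3 * 3600, memory_gb=16)
per_tab = forward_to_tabs(browser_total, {"mail": 0.5, "docs": 0.3, "video": 0.2})
```

Nothing here is hard mathematically, which is Max's point: the blocker is not the arithmetic but the missing per-unit figures from vendors.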
You get it for the server, for the whole thing, yeah. But that server, which configuration was it? How much memory did it have? That subdivision needs to happen. But again, that's a feature that I think we need to see in the measurement game. But I would say, again, a pat on the back for all of us and everybody listening: the measurement is good enough. For AI we really see it, I think for the first time, at a scale where it doesn't really matter if we get it 40 or 60% right. It's pretty bad. Yeah. Right. And instead of now saying, oh, let's immediately move to optimizing the models, let's first create an incentive to get all the model makers, and especially those service providers and the APIs, to just give everybody these reports so that we have facts. That's really important to make policy, but also to have an incentive to get better. Chris Adams: Okay. So, have a data-informed discussion, essentially. Alright, so you need data for a data-informed discussion, basically. Max Schulze: Yes. Chris Adams: Alright. Max Schulze: To add to that, because you like analogies and I like analogies: it's about a market that is liquid with information. What I mean by that is, if I want to buy a stock of a company, I download their 400 page financial report and it gives me a lot of information about how well that company is doing. Now for software, what is the liquidity of information in the market? For environmental impact, it's zero. The only liquidity we have is features. There are so many videos for every product on how many features there are and how to use them. We can't even get the financial records of most software companies, 'cause they're private. So we have a real scarcity of information, and therefore competition in software is all about features.
And I'm trying to create information liquidity in the market so that you and I and anybody buying software can make better choices. Chris Adams: Ah, okay. And this helps me understand why, I guess, you pointed to that French open source example of something equivalent to word processing. The French equivalent to Google Docs, which is literally called Docs. Yeah. And their entire thing was, it looks very similar to the kind of tool you might use for note taking and everything like that. But because it's on an entirely open stack, it is possible to see what's happening inside it and understand, okay, well, this is how the impacts scale based on my usage here, for example. Max Schulze: But now one of our friends, Anna, from Green Coding, would say, yeah, you can just run it through my tool and then you see it, but that's still just research information. We need liquidity on the information of, okay, the Ministry of Foreign Affairs in France is using Docs. It has 4,000 documents and 3,000 active users. Now that's where I want the environmental impact data, right? I don't want a lab report. I don't wanna scale it in the lab. I want the real usage data. Chris Adams: Okay. So that feels like the next direction we might be moving in: sacrificing some of the precision for maybe higher-frequency information about things in production, essentially. So you can start getting a better idea about, okay, when this is in production, deployed for an entire department, for example, how will the changes I make scale across that? Rather than just making an assumption based on a single system, which might not be as accurate as the changes I'm seeing in the real world. Max Schulze: And you and I have two different bets on this that go in different directions. 
Your bet was very much on sustainability reporting requirements, both CSRD and even financial disclosures. And my bet is, if purchasers ask for it, then it will become public. Those are complementary, but they're bets on the exact same thing: information liquidity on environmental impact information. Chris Adams: Okay. All right. Well, Max, this has been quite fun, actually. I've gotta ask, just before we wrap up now, if people are curious and found some of the stuff you're talking about interesting, where should people be looking if they'd like to learn more? Is there a website you'd point people to, or should they just look up Max Schulze on LinkedIn, for example? Max Schulze: That's always a good idea. If you want angry white men raging about stuff, that's LinkedIn, so you can follow me there. The SDIA is now focused on really helping regional governments develop digital ecosystems. So if you're interested in that, go there. If you're interested more in the macro policy work, especially around software, we have launched a new brand that's our think tank now, which is called Leitmotiv. And I'm sure we're gonna include the link somewhere in the notes. Natürlich. Yeah. Very nice. And yeah, I urge you to check that out. We are completely independently funded now. No companies behind us. So a lot of what you read is the brutal truth and not some kind of washed lobbying positions. So maybe you'll enjoy reading it. Chris Adams: Okay then. All right, so we've got Leitmotiv, we've got the SDIA, and then just Max Schulze on LinkedIn. Those are the three places to be looking for this sort of thing. Yeah. Alright, Max, it's been lovely chatting to you in person, and I hope you have a lovely weekend and enjoy some of this sunshine now that we've made it through the Berlin winter. Thanks, Max. Max Schulze: Thanks, Chris. Chris Adams: Hey everyone. Thanks for listening. 
Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again and see you in the next episode.
Apr 17, 2025 • 1h 1min

OCP, Wooden Datacentres and Cleaning up Datacentre Diesel

Host Chris Adams is joined by special guest Karl Rabe, founder of WoodenDataCenter and co-lead of the Open Compute Project’s Data Center Facilities group, to discuss sustainable data center design and operation. They explore how colocating data centers with renewable energy sources like wind farms can reduce carbon emissions, and how using novel materials like cross-laminated timber can significantly cut the embodied carbon of data center infrastructure. Karl discusses replacing traditional diesel backup generators with cleaner alternatives like HVO, as well as designing modular, open-source hardware for increased sustainability and transparency. The conversation also covers the growing need for energy-integrated, community-friendly data centers to support the evolving demands of AI and the energy transition in a sustainable fashion.Learn more about our people:Chris Adams: LinkedIn | GitHub | WebsiteKarl Rabe: LinkedIn | WebsiteFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterResources:Windcloud [02:31]Open Compute Project [03:36]Software Carbon Intensity (SCI) Specification [35:47] Sustainability » Open Compute Project [38:48]Swiss Data Center Association [39:07]Solar Microgrids for Data Centers [47:24]How to green the world's deserts and reverse climate change | Allan Savory [53:39]Wooden DataCenter - YouTube [55:33] If you enjoyed this episode then please either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github and LinkedIn!TRANSCRIPT BELOW:Karl Rabe: That's a perfect analogy, having like a good neighbor approach, saying, "look, we are here now, we look ugly, we're a big box, you know, but we help, you know, power your homes, we reduce the cost of the energy transition, and we also heat your homes." 
Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Hello, and welcome to another edition of Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. How do you green the bits of a computing system that you can't normally control with software? We've discussed before that one option might be to shift where you run computing jobs, from one part of the world to another part of the world where the energy is greener. And we've spoken about how this is essentially a way to run the same code, doing the same thing, but with a lower carbon footprint. But even if you have two data centers with the same efficiency on the same grid, one can still be greener than the other, simply because of the energy that went into making the data center in the first place, and the materials used. So does this make a meaningful difference, and can it? I didn't know, so I asked Karl Rabe, the founder of Wooden Data Center and Windcloud, and now increasingly involved in the Open Compute Project, to come on and help me navigate these questions, as he is the first person who turned me onto the idea that there are all these options available to green the shell, the stuff around the servers, which also has an impact on the software we run. Karl, thank you so much for joining me. Can I just give you the floor to introduce yourself before we start? Karl Rabe: Thanks, Chris. This is an absolute honor, and I'll have to admit, you know, you're a big part of my carbon aware journey, and so I'm very glad that we finally get to speak. 
I'm Karl, based out of north Germany. I always say I had one proper job. I'm a technical engineer by training, and then I fell into the data center business, we can touch on it a little later, which was Windcloud, a data center thought through from the energy perspective, which is a very important idea in 2025. But we pivoted about four years ago to Wooden Data Center, we can probably touch upon that a little later too, after realizing there is this supply chain component to the data center, and there are also tools to act on it. And I'm learning and supporting and contributing, you know, as a co-lead in the data center facilities group of the OCP, where we work with the biggest organizations directly in order to shape and define the latest trends in the data center, and especially to navigate the AI buildout in a somewhat, yeah, sustainable way. Chris Adams: Okay, cool. And when you say OCP, you're referring to the Open Compute Project, the kind of project with Microsoft, Meta, various other companies, designing essentially open source server designs, right? Karl Rabe: Correct. It was initially started by then Facebook, now Meta, in order, yeah, to cut out waste in server design. It has meanwhile grown into cooling environments, data center design, chiplet design. It's a whole range of initiatives. Very interesting to look into. And happy to talk about some of those projects. Yeah. Chris Adams: All right, thanks Karl. So if you are new to this podcast, my name is Chris Adams. I am the director of technology and policy at the Green Web Foundation, a small Dutch non-profit focused on a fossil free internet by 2030. And I also work with the Green Software Foundation, the larger industry body, in their policy working group. And we are gonna talk about various projects, and we'll add links to the show notes for as many as we can think of as we discuss. 
So if there are any particular things that caught your eye, like the OCP or Wooden Data Center, if you follow the link to this podcast's website, you'll see all the links there. Alright then, Karl, are you sitting comfortably? Karl Rabe: I am sitting very well. Yeah. Chris Adams: Good stuff. Alright, then I guess we can start. So maybe I should ask you, where are you calling me from today, actually? Karl Rabe: I'm calling you today from the west coast of the North Sea shore in northern Germany. We are not a typical data center region for Germany, per se, which would be Frankfurt, you know, 'cause of the big internet hub there. But we are actually located right within a wind farm. You know, in my home, which initially was, you know, my home growing up, and turned into my home office, and eventually into what was somewhat considered the international headquarters of Wooden Data Center. Yeah, and we're very close to the North Sea and we have a lot of renewable power around. Chris Adams: Oh, I see. Okay. So near the north of Germany, near Denmark, and where Denmark has loads of wind, you've got a similar thing. Okay. So, Karl Rabe: Yeah, absolutely. Yeah. Chris Adams: Oh, I see. I get you. So, ah, alright. For people who are not familiar with the geography of Europe, or Northern Europe in particular, the north part of Germany has loads of wind turbines and loads of wind energy, but lots of the power gets used in other parts of the country. So Karl is in the windiest part of Germany, basically. Karl Rabe: That's correct, yeah. We basically have offshore conditions onshore. And it's a community owned wind farm, which is also a special setup, which makes it very easy to get, you know, the people's acceptance. We have about a megawatt per inhabitant of this small community. And so this is becoming, you know, the biggest, yeah, economic factor of the small community. Chris Adams: Wow. 
A megawatt per inhabitant, okay. So just for context, for people who are not familiar with megawatts and kilowatts, a typical house might use maybe about half a kilowatt of constant draw on average over the year. So that's a lot of power per person for that place. You're in a place of power abundance, compared to the scenario where people are wondering where the power is gonna come from. Wow, I did not know that. Karl Rabe: Yeah, that is a bit of the background, so to speak. We are now trying to go from 300 megawatts to 400 megawatts. Germany has been pushing for more renewable energy, and we still have some spots that we can, under new regulations, now build out. And the goal, or the big dream, of the organization, the company running this wind farm for us, is to produce a billion kilowatt hours per year. We're now slightly below that, and we're trying to, yeah, we need to reach probably another 25 percent more production. And, so to speak, you are absolutely right, we are in an energy abundance, and that was one of the prerequisites for Windcloud. 'Cause you know, the easiest innovation is one and one is two. We had energy, and I was aware that we also had fiber infrastructure in the north to run those wind parks. So we said, why don't we bring a load to those? That was the initial start of Windcloud.
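Chris's back-of-the-envelope comparison can be written out explicitly. This is a rough sketch using only the figures mentioned in the conversation (half a kilowatt average household draw, a megawatt of capacity per inhabitant, a billion kilowatt hours per year), which are themselves approximations, not measurements:

```python
# Rough figures from the conversation (approximations, not measurements).
avg_household_draw_kw = 0.5          # typical constant draw, per Chris
capacity_per_inhabitant_kw = 1000.0  # "about a megawatt per inhabitant"

# Installed capacity per person compared with an average household's draw.
ratio = capacity_per_inhabitant_kw / avg_household_draw_kw
print(ratio)  # 2000.0, i.e. roughly 2,000x a household's average draw

# Karl's target of a billion kWh per year, expressed as average output.
hours_per_year = 8760
avg_output_mw = 1_000_000_000 / hours_per_year / 1000
print(round(avg_output_mw))  # ~114 MW average, from 300-400 MW of capacity
```

The gap between average output (~114 MW) and installed capacity (300 to 400 MW) is also why Karl's point about curtailment matters: when the wind blows hard, production far exceeds what the grid can carry away, which is the excess a colocated flexible load can soak up.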
So, as I understand it, and now it makes sense why you were so involved in Windcloud. For context, my understanding of Windcloud is that it's essentially a company where, rather than connecting data centers via big power lines to somewhere else, where the actual generation is miles away from the data centers, the idea instead was to actually put the data centers literally inside the towers of the wind turbines themselves. So you don't need any cables, and, well, you've obviously got green energy, because it's right there, you're literally using the wind turbine. So, apart from this sounding kind of cool, can you tell me why you'd do this from a sustainability perspective in the first place? Karl Rabe: Yeah, so the way we discovered that, and this is probably the biggest reference that I can give on the software developer front, I came out of my studies in the UK. We had a really nice cohort. We were constantly bouncing ideas off of each other. I wanted to actually build small aircraft, because we have a wind farm and we have wealth with that, we actually have people building small planes in our location. They told me I needed about 5 million euros to do it, which I didn't have. So I started pivoting to a software idea. And with the software to host, I quickly discovered, you know, the amount of energy going into data centers, the amount of, you know, associated issues. And back then, 2015, 16, we were literally just discovering the energy aspect of it. We didn't discuss, you know, water and land use and all of that. We really focused on the energy, and then we said, "look, wait a second. We have all this excess of energy. We literally cannot deliver it all at this point. So we have a very high share of shutting down our wind turbines when there's just too much energy to move around. 
Why not bring the data center, as a flexible load, close to the production, and enable, you know, sustainable compute, to then send packets rather than energy, which is way easier, you know, over the global fiber grids." And that's how I got started and fell into the data industry. A big benefit, and big learnings, came from the fact that I knew nothing about data centers. And as an engineer, a lot of things were not adding up. We looked at the servers back then, and even then it said, okay, this is good, you know, to run from 15 to 32 degrees. I said, "32 degrees? Why? What is data center cooling, and why is there data center cooling? We don't have 32 degrees in the north." Though most likely we will within eight years. But the important thing was really challenging this, and we started with very little money, and we couldn't afford the proper fancy stuff that all of these data centers have, like chillers, you know, spending electric energy to cool something which really does not need cooling, in my opinion, up to now. That was the start of this, you know. And so the company, Windcloud, is still ongoing. What we had as a huge problem, and my gut feeling for this was always that we needed to find a way to compete with the Nordics. So we have renewable energy, but we need to have it cost effective. And that was something we tried two or two and a half times, I would say, always with a legal way to access the energy in a proper setting. It was always extremely difficult and extremely frustrating, also because the German energy system is very complicated. It is, you know, geared, or developed, from a centralized view, and it benefits, you know, large scale industry and large scale energy companies. To put it in other terms, you're probably familiar with the Asterix comics. 
We're that far off in the north of Germany that people were probably, you know, a bit suspicious of what we're doing there. Now we produce energy, and now we also want to use the energy, so that is not adding up for them. It's very hard, close to impossible, to access your own produced energy at scale, you know, even when it's in abundance. And that was, yeah, something we always faced, which led to other innovations. So we built the first data center, or one of the few data centers, to reuse the heat in Germany, putting it into an algae farm. And we were trying to achieve really efficient PUEs already back then, you know, whereas the industry standard is still quite high. My claim is I never had enough money to build a data center with a PUE above 1.2, or even 1.1. The first servers were cooled with, you know, a temperature regulated fan. We built it with the same guy who built, you know, a pigsty for my father. You know, we nearly didn't call it Windcloud. We nearly called it Swines and Servers, Chris Adams: Okay. Pigcloud. Yeah. Karl Rabe: Yeah, Pigcloud, but it could have been, you know, misleading. And the good thing coming out of that, going back to those struggles in getting started, is that we were forced to uncover a lot of the cooling chain and the energy distribution chain, which were not, you know, really adding up for us. And that is, you know, still one of the biggest supports for us in building efficient data centers and creating, you know, sustainable solutions. Chris Adams: Okay. Cool. Alright then. So, okay. I didn't know anything about the Schweins und Servers aspect at all, actually. I'm not sure what the German for servers would actually be in this context. Was it literally gonna be Schweins und Servers, or? Karl Rabe: Yes. Something like that. Chris Adams: Okay. Wow. 
That's, I was not expecting that. I think Windcloud sounds a bit better, to be honest. Karl Rabe: Yeah, thanks. The brand, the name is great. I'm very simple like that, you know. We had Windcloud: we take wind, we make cloud. Now we are Wooden Data Center: we build data centers out of wood. But to be fully honest, so to speak, we're called Wooden Data Center, but what we do is try to decarbonize the data center. Wood is obviously a massive component of that, but we do see really good effort in the supply chains. Happy to go into that a little later, but there are some examples, from fluids, to, we just found, you know, bio-based polycarbonate for hot and cold aisle containment. So the number of components throughout the data center that have a bio-based, ergo low carbon, alternative is ever increasing. Chris Adams: Can I come back to that a little bit later? 'Cause I just wanna touch on the, Karl Rabe: Yeah, no. Chris Adams: So the wind thing. So basically Windcloud, the big idea was putting data centers in the actual wind turbines themselves. So that gives you access to green energy straight away, because you're literally using power that otherwise couldn't be transmitted, because the pipes weren't big enough, essentially, in some cases. And, I guess, a plus point to that is, if you are using a building that's already there, you don't have to build a whole new building to put the data centers inside. So there's presumably some kind of embodied energy advantage there, because there's a load of energy that goes into making concrete and stuff that you don't have to spend, because you are using an existing building, right? Karl Rabe: Yeah. 
So to clarify on that, it is good that you touch on that, because this is literally a bit of a crossover. The company you're referring to is windCORES, which is a good friend of ours, and they are using the turbine towers. Chris Adams: Ah, Karl Rabe: They can do so because they use a little bit different type of turbine. And they're also based further south in Germany. We had the same idea, because it's also very difficult to build next to a wind farm. The big difference is that the towers used at windCORES are concrete, and they have quite a lot of space. They're about 27 meters wide. Because of what we discussed earlier, that we have offshore conditions onshore, we have steel towers, which are shorter and hence don't have those big diameters. You know, we build tall. And so we always had the challenge of still needing a data center building. And that's where our learnings and inspirations for Wooden Data Center came from. But we still tried to reuse existing infrastructure. At one point within the Windcloud journey, I was the co-owner of a former military bunker area. We wanted to place data centers within those long concrete tubes, in order to, yeah, have a security aspect and not need, you know, a lot of additional housing or even bunkering. And obviously, for dodging bullets, a lot of concrete and steel went into those facilities. Chris Adams: I see. So you're reusing some of the existing infrastructure. So rather than building totally new things, you're reusing stuff that's already had a bunch of energy and emissions spent to create it in the first place. I see. Okay. All right. 
So,Karl Rabe: And, back then, you know, also to, because it's such a short time back then, really need to emphasize that we were, we really, you know, only had a hunch and a feeling, oh yeah, sort of has CO2 associated to it and probably also the building of a data center.You know, we have, we really, it was so hard to quantify, and I think we still, carbon accounting is still, is somewhat of, not wizardry, but it's really hard to pull the right numbers. You know, only two years ago at the OCP Summit, so in a Google presentation, the range that they mentioned, you know, for steel and concrete carbon was, you know, 7 to 11 for equally both. So the range of the total uncertainty, I feel, is quite high. You know, and this is the biggest, one of the biggest and most funded, best funded organizations in the world. You know, we're still not being able to get it more concrete, you know, and that's something we really need to work with the industry and supply chains in order to be even aware to specify the problem.Chris Adams: So, can I unpack that for a second before we talk a little bit about this? And so you're basically saying even the largest companies in the world, they don't necessarily have that good access to know how, what the carbon intensity of the concrete they've used in one data center compared to another one,it can quite, it can vary quite a lot. Is that what you're saying there?Karl Rabe: So this was basically specifying the global numbers for steel and concrete. So, I do believe that we have now relatively good visibility for our own builds and projects and also what we do now moving forward. But to really try to grasp the global problem of it, that was still, you know, two years ago was still had this high uncertainty, you know, 'cause we were working with numbers,maybe they're now five years older, we don't know the complete, you know, build out of every city, every building globally. You know, it's just a lot of guesswork in that, globally. 
And so I especially believe, although we are Wooden Data Center, that the amount of innovation being put into concrete, you know, has the potential to drastically reduce that for buildings. It's definitely still a huge problem for the data quality, and the emissions, yeah, guesswork that's in there, you know. And a lot of those things are based on scenarios, you know, and those are getting ever more real. But the best example for Wooden Data Center is, there's a comparison comparing a steel and concrete building to a CLT one, Chris Adams: Yeah. Karl Rabe: and it is assuming that it's only living for 20 years, which, you know, can easily be 200 years, and that afterwards it is being reused for, you know, building chairs or tools or toys. But if you then take the CLT and burn it, then obviously you have a zero sum game for all the carbon that was stored. It's Cross-Laminated Timber, you know. Chris Adams: Yeah. So this is a special kind of essentially machined timber that provides some of the strength properties of maybe steel or stuff like that, but is made from wood, basically, right? Karl Rabe: Correct. So we need to stress the importance that this is actually a material innovation. It's a relatively young material, based on, I think, a PhD thesis from Austria. We've only had CLT, or cross-laminated timber, for about 25 years, Chris Adams: Oh, I see. Karl Rabe: or maybe now 26 years. You've probably seen those huge wooden beams in, you know, storage buildings. Chris Adams: Yeah. Karl Rabe: Those are called GLT, glue laminated timber. 
And the difference is those boards are basically glued in one direction, and they're really good for those beams or for posts. But to have ceilings, walls and roofs, those massive panels, you now have the material of cross-laminated timber. Chris Adams: Oh, okay. In both directions, right? Yeah. Karl Rabe: Correct. And those now enable full massive-wood build-outs. And so the biggest challenge is that if we say wood, then the association, which we'll probably touch on now or later, is fire. Chris Adams: Yeah. Karl Rabe: But in reality, those massive panels don't, you know, just flame up. They're fully tested and certified to glim down, which is, you know, they turn black, and then, at a thousand degrees, they slowly, you know, shrink, Chris Adams: Like smolder, right? Yeah. Karl Rabe: Yeah. And so how we design data centers basically factors in this component, and we are able to create really fire-secure data centers built out of these new wood materials, basically. Chris Adams: Okay. All right. So a lot of us typically think of data centers as things made with steel, concrete and plastic all over the place. And essentially you can introduce wood into this, and it's not gonna burn down, because you have this material, which is treated in such a way that it is actually very fire resistant. And that means you could probably replace, I mean, maybe you could talk a little bit about which bits you can replace. Would you replace a rack, or a wall, or a roof? Maybe we can talk about that, to make it a bit easier to picture what this stuff looks like. Karl Rabe: No, absolutely. 
I'm always very liberal in sending out samples to my clients, you know, but I don't have one here in my hand. That is a very good question. If we were talking through a slide deck, I'd try to show, in terms of scope one, two, and three, what we can do and what we have now. The biggest component is obviously the housing: you know, the building or the room of a data center. When you are touching on existing buildings, CLT is also ideal for build-outs of existing large storage or logistics buildings to put data centers in. We can create rooms very quickly in those. And the other huge advantage of CLT is that we get those panels pre-manufactured, and they just fit, Chris Adams: Oh, like sticking them together like Lego, rather than having to pour concrete? Karl Rabe: Yeah, a little bit. You need, like, a little bit of a leveling foundation. If you have an existing floor, still, some data centers, you know, prefer, in the greenfield, to also have a new floor. But with those panels we can create the IT room relatively quickly, and then have the build-out averaging up to 40% quicker than a traditional steel-sandwich-and-concrete, you know, data center. So it is enormously easy to work with. It's very precise to pre-design and pre-manufacture, and then very easy to handle on site. If there's a problem on site, you know, you just crank out the chainsaw and adapt and adjust. Chris Adams: Okay. Just carve it down a bit. Karl Rabe: So to speak, yeah. But once you have those assembled and secured, there's a lot of mass and a lot of volume to them, which creates very good fire protection, physical resistance and availability properties. And that is something that is now really being seen as one of the core benefits. 
You know, the speed with which we can build this out.
Chris Adams: Oh, okay.
Karl Rabe: We have introduced wooden racks, and we also see more and more attention for those.
Chris Adams: Wait, sorry, you said wooden rack? As in the big steel rack that holds the servers themselves could be made of wood as well now? So you'd have a rack holding a bunch of servers, right?
Karl Rabe: Correct. So we build those too. One of our clients sent us a server casing and asked us to think about doing the casing as well, but we're probably not a hundred percent there yet. To do that, we have an idea in the spirit of OCP, which is, you know, reduce and cut out stuff. One vision would be just a wooden board with dedicated spaces: you slide in your mainboard, connect power, connect liquid cooling, have fans on the back, and then cycle through only the boards. Nothing fancy, just base frames for a server. But right now, for the 19-inch standard and also the OCP standard, it's a combination: we reduce up to 98% of the steel in those constructions and then only have the functional parts, the steel railings you need to slide in the servers, with wooden frames around them. And we do that for the OCP format, which is very popular. We get a lot of the special requirements because we are the only ones producing a small version of the rack. OCP has a lot of advantages, but the base rack format is two meters thirty high, which is a really hyperscale, mass-density approach. It doesn't even fit through the doors of most data centers I know; they still have relatively standard two-meter-high doors, able to fit a 42U rack.
But you need very special facilities, because those racks come pre-integrated and then you roll them into place. So you need a facility that has high doors and ramps with small inclines, or no ramps at all, to be able to place a fully integrated rack. We started building OCP racks because back then only hyperscalers were really getting them, and we wanted to do more with this open compute format and be able to offer it. And the version 3 rack was a good candidate to convert to a wood-based structure.
Chris Adams: All right, we'll come to that a little later, because I actually came across some of your work when you were designing some of these on YouTube, so people can see what all this stuff looks like. But if I come back to the original question: it sounds like you can replace quite a lot in a data center. You can replace the shell of the building, literally green the shell, by replacing the concrete, and creating concrete and cement is one of the largest sources of emissions globally. So you move from a source of emissions to, is it a sink? Because CO2 and carbon gets sucked out of the sky to be turned into trees. So you've gone from a source to a sink, and you can replace not just the walls and the outer building, but also quite a lot of the actual structure itself. Just not the servers yet. So maybe I could ask you: if I'm switching from regular concrete and regular steel, do you folks have any idea what the change in quantitative terms might be? Say I had an entirely concrete, entirely steel data center and replaced all of that with wooden alternatives, for example?
Like, is it a 5% reduction? What kind of changes are we looking at for the embodied figures, for example?
Karl Rabe: So the conservative industry figures are somewhere between a minimum of 20%, from changing the production alone, up to 40%. The good thing we also have to mention is that we are an industry now: Microsoft announced such projects, and I know the other hyperscalers are looking at that. In Germany alone, two other companies have started getting into this kind of construction. That's why it's really important for us to be on the decarbonization path.
Chris Adams: Ah, I see.
Karl Rabe: We also come with our own data center concepts and philosophies, which I can talk about a little later. But coming back to the point, it is still very hard to quantify. The really positive thing about carbon accounting here is that, as a data center, we now have this negative component. I have to laugh, because an engineer immediately said, can we then just use more wood? Can we make the wall thicker? Obviously yes, you could do that, but there's a cost to it, and it also betrays the idea, you know. But the really exciting thing is that I now go from show to show, and two weeks ago I was in London, and on the flight somebody showed me a picture of an air handling unit inside a wooden enclosure. I spent an hour chasing it through the London show, because I assumed it was there, but it was at a different show. But that is the kind of thing we can really think about: enclosures. We have also started, for the OCP rack, for this AI build-out, we have created a rear door, which is, so to speak, a wooden rear door.
So the fans are traditional, and the heat exchanger obviously needs to be traditional, but it's an aluminum micro-channel heat exchanger, derived from other industries, which helps with mass production, reducing cost and reducing emissions. And that is the other thing happening in the industry: we're trying to find not data-center-specific solutions, but mass-produced industry solutions, and adapt them to the data center, which again reduces cost and time.
Chris Adams: Alright. So in the same way that cross-laminated timber and the use of wood has been in use outside the data center industry, people make, what are they called, plyscrapers? You know, skyscrapers made with wood. So the idea is that things which are being made in volume can be made more efficiently, and this is one way you're adapting them to a new domain. And it may be that if people are getting much, much better at making very efficient heat pumps, because they can cool things down as well as heat them up, that might be another thing you're looking at, saying, "well, actually that might be usable in this context as well." Okay. Alright. And if I go back to the original thing about possible savings of maybe 20%, up to possibly 40%, that's the kind of...
Karl Rabe: Yeah, that's the range that we have. The question is, did Microsoft evaluate with the IT or without the IT? For the facility itself, I think we can potentially come to a net zero approach. By first principles, I think we can at least achieve realistic reductions of, let's say, 70-85% with the tools we have: the easy steel replacements, the rack, the enclosures, the housing. And fluids are something we can address too.
There's a very interesting, no-brainer replacement for the fossil diesel in backup generators: a liquid called HVO.
Chris Adams: Yeah, let's come to that in a second, actually, because I did want to ask a little about what you can do for the fuel. So basically there are some savings available there, and this should show up in some kind of numerical description. If you had two data centers and one was using wood in strategic places, then the embodied carbon should be lower. Is there a label or a standard I can look for? Because in the Green Software Foundation we have this idea called Software Carbon Intensity, which includes the carbon intensity of the energy you use and things like that. But it also looks at the building itself. So theoretically, if you had a wooden data center and a bog-standard concrete data center, and you ran your code in the greener data center, you would probably have a better score, if you had access to the data. Do any places share this data, or have a label for anything like this?
Karl Rabe: They definitely share the data. For example, there's EcoDataCenter in Sweden, which we approached, and our whole world was shook. We came from this energy perspective, but they built sustainably from the start. So we needed to change, you know. It was a huge eye-opener. And they're also among the first to, I'm not sure if they used the LCA method, but they were quantifying the embodied carbon and certifying it to you annually as a client, which I think is the way to go. And we need to figure out how to standardize that.
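As an aside for readers: the Software Carbon Intensity score Chris mentions is a published Green Software Foundation formula, SCI = ((E × I) + M) / R. A minimal sketch of how a lower-embodied-carbon building would show up in the score; every number below is an illustrative placeholder, not data from the episode:

```python
# Toy illustration of the GSF Software Carbon Intensity formula:
#   SCI = ((E * I) + M) / R
# E = energy consumed (kWh), I = grid intensity (gCO2e/kWh),
# M = embodied carbon allocated to the workload (gCO2e),
# R = functional unit (here: requests served).
# Every number below is an illustrative placeholder.

def sci(energy_kwh: float, intensity: float,
        embodied_g: float, requests: int) -> float:
    """Return gCO2e per functional unit."""
    return (energy_kwh * intensity + embodied_g) / requests

# Same workload and grid mix; only the embodied share differs.
concrete_dc = sci(10, 300, embodied_g=2000, requests=1000)
# Assume a timber build cuts the allocated embodied share by ~30%,
# the mid-point of the 20-40% range discussed above.
wooden_dc = sci(10, 300, embodied_g=1400, requests=1000)

print(f"concrete: {concrete_dc} gCO2e/request")  # 5.0
print(f"wood:     {wooden_dc} gCO2e/request")    # 4.4
```

With identical energy use and grid intensity, only the embodied term M changes, so a timber build's embodied saving flows straight through to the score.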
I assume there's potentially a standard that we can use. I know that other data center providers are building sustainably and putting this effort forward. But we don't have a unified label yet, I'm afraid.
Chris Adams: Okay.
Karl Rabe: I know there's also, like, the Climate Neutral Data Centre Pact, and some of these initiatives specifically exclude scope three, which, you know, I know where they're coming from. In Germany especially, they're all about energy efficiency; they love to talk about just the energy and the scope two, basically. But then...
Chris Adams: Most of the...
Karl Rabe: ...you're missing out this dimension. Missing out that dimension is like being faithful to your girlfriend or wife three days out of the week, you know? It's not...
Chris Adams: You're not showing the full picture, right?
Karl Rabe: Yeah, you're not doing it at all, basically. There are building labels that could be used in construction quantifications, I'm sure, but there's not yet a data-center-specific label. There is good work in OCP on metrics and key performance indicators, and they're looking at that, and I think they're trying to build towards something like true net zero.
Chris Adams: Oh yeah. Okay.
Karl Rabe: But...
Chris Adams: So there are some initiatives going on to make this something you could plausibly see, but it's quite early right now. So, as we spoke about before, I can run my computing jobs in one data center or choose to run them somewhere else. These numbers don't show up just yet, but there is work going on. Actually, I've just realized there is an embodied carbon working group inside the OCP who have been looking at some of this stuff.
So maybe we'll share a link to that, because that's one of the logical places you'd look. Okay.
Karl Rabe: And they do really good work; a lot of good initiatives are happening there. There's also the Swiss Data Center Association; they have a label that looks at some of this, and they want to include scope three as well. So this is coming up, but it's not as easy as having an API pushing it to the software developer and saying, look, we have this figure because this was constructed with concrete or steel.
Chris Adams: Okay. So we're not there yet, but that's the direction we might be heading. Alright, we'll add some links to that. Now I'd like to pick up the other thing you mentioned, about HVO. You spoke before about Windcloud, or WindNODE, data centers relying on wind. Now, it's a really common refrain that the wind doesn't blow all the time, and it's news to some people that it's not always sunny, for example. So there'll be times when you need to get the power from somewhere, in the form of backup power. And loads of data centers, you said before, rely on fossil diesel generators. That's bad from a climate point of view, it's also quite expensive, and it's really bad from an air quality point of view as well: you can see elevated cases of asthma and all kinds of respiratory problems around data centers. But you mentioned there are options to reduce the impact, or more responsible options there.
Maybe we could talk a little bit about what's available to me if I wanted to reduce that part, for example.
Karl Rabe: Happy to go into that. It's something we've been thinking about quite heavily this year, and we've already presented on it on two occasions. So the easy option to reduce your scope one carbon for a data center, which is basically the direct burning of fossil resources, and that is mostly the testing of your backup generators, is this second-generation biodiesel, HVO100. When I looked into it, the key feature of this fuel, which is about 15% more expensive, is that it doesn't age. Fossil diesel, and especially first-generation biodiesel, and in Europe fossil diesel always has a certain share of biogenic content blended in, ages biologically, through bacteria. So it's degrading. Which is really bad, because this diesel sits there in a tank, you run it half an hour every two weeks, and you maybe change the fuel filter once or twice a year. But if you really have an issue, all of a sudden you use this diesel for four hours, and then your fuel filter clogs, and you still have a problem, right?
Chris Adams: So your backup isn't a very good backup. A backup needs to be a good backup. Yeah.
Karl Rabe: Yeah.
Chris Adams: You had one job, right?
Karl Rabe: Yeah. And so, how it's mitigated is people try to use 'pure' diesel or heating oil, which is not so prone to it but still ages. Or they're recycling, really pumping out the fuel and pumping it in again every three years, or they continuously filter it. All of this is either adding energy or cost.
And so this new form of biodiesel, which is your old frying fat treated with hydrogen, looks very clear, and it's so chemically refined that it doesn't really age. People don't really know yet how long it stays good. It's certified for 10 years; potentially it stays good longer. And it also burns cleaner.
Chris Adams: Ah, so it isn't going to cause bad air and stuff as well then?
Karl Rabe: Yeah. So for the majority of your enterprise IT, your standard data center, cutting out the whole AI discussion, that's probably the easiest thing you can do. It's a drop-in replacement: you empty your tank and put it in, or you burn your old fuel and refill with the new. It's something that easily increases the availability of your facility, and you can change today.
Chris Adams: Can I just try to summarize that? Because I don't work with data centers daily. So there's basically fossil diesel, the kind of stuff you might associate with dieselgate and all kinds of air quality issues. And then there's the other option, which is maybe around 15% more expensive, called HVO, which is essentially biodiesel that's been treated in a particular way to get rid of a lot of the gunk, so it burns more cleanly and works better as a reliable form of backup. So the backup is actually a decent backup, rather than a thing which might not be. Okay. So that's one of the things, and that's the direction we might be moving towards, and what we'd like to see more of, for the case where you need to rely on some kind of liquid fuel power.
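The scope-one arithmetic behind routine generator testing is simple enough to sketch. All figures below (burn rate, test schedule, emission factors) are rough illustrative assumptions, not numbers from the conversation:

```python
# Back-of-envelope scope 1 comparison for routine genset testing.
# Burn rate, test schedule and emission factors are all rough
# illustrative assumptions.

LITRES_PER_TEST_HOUR = 200        # assumed burn rate of one genset
TEST_HOURS_PER_YEAR = 26          # e.g. 30 minutes every two weeks
FOSSIL_KG_CO2E_PER_LITRE = 2.7    # approx. combustion factor for diesel
HVO_KG_CO2E_PER_LITRE = 0.4       # approx. lifecycle factor; the CO2
                                  # released when burning HVO is largely
                                  # biogenic, so the net figure is low

def annual_test_emissions_kg(factor: float) -> float:
    """Annual testing emissions at a given kg CO2e per litre factor."""
    return LITRES_PER_TEST_HOUR * TEST_HOURS_PER_YEAR * factor

fossil = annual_test_emissions_kg(FOSSIL_KG_CO2E_PER_LITRE)
hvo = annual_test_emissions_kg(HVO_KG_CO2E_PER_LITRE)
print(f"fossil diesel: {fossil:,.0f} kg CO2e/year")  # 14,040
print(f"HVO100:        {hvo:,.0f} kg CO2e/year")     # 2,080
```

Even under these made-up numbers, the point holds: swapping the fuel changes the testing footprint by a large multiple without touching the hardware.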
Right.
Karl Rabe: Yeah.
Chris Adams: Okay.
Karl Rabe: For most people, I think, it's just a very easy, low-hanging fruit to replace. Most engines are certified for it; nowadays all engines run on it. It has the same criteria and properties as traditional diesel. The only thing that's different is it's 4% lighter.
Chris Adams: Oh, I see.
Karl Rabe: That's the only real difference on the spec sheet.
Chris Adams: Oh, okay. Alright. So if I may, that's one of the options: you can replace fossil diesel with essentially non-fossil, cleaner, less toxic diesel for your backup. Now, I've heard various people talking about, say, hydrogen. Hydrogen can come from fossil sources; most hydrogen does come from basically cracking natural gas, or methane gas, but it can come from green sources. And that's another option you might have to generate power locally. Is that something people tend to use?
Karl Rabe: So I think the best reference for hydrogen is that it's the champagne of the energy transition. We need to put in a lot of energy to produce it, it's not easy to store, and we need a lot of facilities to actually create green hydrogen. The majority of hydrogen is not green hydrogen, but gray or blue, which is basically...
Chris Adams: Like carbon-capture hydrogen, which is still a bit questionable. Yeah.
Karl Rabe: ...all based on fossil cracking. So potentially you have the same problem. And everything that we do for our clients is under this extremely short window of time. We have to solve everything within five years, not even five years, right? And that's also something that always sparks a good discussion.
When we talk about SMRs, you know, there's the big push for nuclear over in the US, and in Europe we have voices for that too. The short answer is, there are three reasons I don't believe in it. They're not quick. They're not cheap: a year ago, two very hopeful SMR projects in the US were canceled, and half a year later it's the big thing, the big solution. Like, what changed, you know? And the third point, which is a very German perspective, is the fears and challenges around the fuel, getting it mostly, 70%, from Russia, and the waste; dumping it somewhere is still not solved. So this is not a 2030 technology, basically. That's my point. What we can do, and I'm happy to link it, there's a good article from some of the hyperscalers looking into solar combined with batteries combined with gas-based backup. Gas has the one flexibility that it can start fossil, move to biogas, and potentially also run on hydrogen. And in terms of the speed with which we are now deploying, every data center for AI is now 100, 200, 300 megawatts, and we're discussing one to five gigawatts for the large players. Every other data center is all of a sudden a hundred megawatts, which used to be a mega facility just two years back. That build-out can only really be achieved not with grids or interconnects, those are too slow; it can basically only be done with microgrids.
Chris Adams: I see. Okay.
Karl Rabe: Microgrids, you know, that are battery-backed and gas-backed. And the big advantage of this is, if we think about the data center, traditionally a data center is a data fortress, right? You don't get in, data doesn't get out.
It's like a bank, in terms of the security measures, and all of the infrastructure was handled that way. But imagine the UPS and the genset not sitting only at the data center, but technically belonging to the utility and being able to provide flexible power. Because we have, as mentioned, this underlying flexible build-out of renewable energy, and we need reliable, switch-on power, which data centers all have. If we can put those together, there's a bit of working together: finding the right location where it makes the most sense, and then allowing for SLAs with clients to bidirectionally use batteries, gas turbines...
Chris Adams: Oh, I see.
Karl Rabe: ...engine power. This would help us to transition, especially as we go to renewable shares of 60% and above; at the latest from 80%, we need those backup technologies. And that comes back to the question of hydrogen. Hydrogen is a technology so expensive that it would need to run all the time, basically. With renewable energy, we have long periods of abundant energy and only need short times of flexible generation, for which gas and batteries are virtually ideal. So we promote this idea of an energy-integrated data center, which has the electrical part supporting the grid and also takes advantage of heat reuse, especially for liquid-cooled facilities, to give heat out. And the benefit of that is not only economic. We see more and more 'not in my backyard' discussions. If a data center is energy-integrated, it's not a question, it's a must-have, and there's a reason why it needs to be there: to stabilize your town grid or your local area. And so that's what we are trying to promote.
We've had a lot of good feedback, and we hopefully will have the first data center realized with a medium-voltage UPS this year, which is a first step in moving the availability components of a data center, the batteries and the gensets, up to a higher voltage level. A lot of the cost in a data center comes from the low-voltage distribution. The power that you put in the batteries is first transformed down, then moved through the data center until it sits in the battery, and then needs to go out again. And all of those are rectification and conversion steps.
Chris Adams: So you lose power every single time you convert? Oh, okay. So it sounds like there's a shift from data center as a fortress, where you could do that before, to something where you have to be a bit more symbiotic with your local environment. Because for a start, if you don't, you won't be allowed to build it. But also it's gonna change the economics in your favor if you're prepared to play nicely and integrate, essentially be a good neighbor. That seems to be what you're suggesting.
Karl Rabe: That's a perfect analogy, having like a good-neighbor approach. Saying, "look, we're here now, we look ugly, we're always a box, you know, but we help power your homes, we reduce the cost of the energy transition, and we also heat your homes." That is then a relatively easy sell.
Chris Adams: Okay. So that points to quite a different strategy required for people whose job it is to get data centers built. They need to figure out how to honestly relate to communities and say, well, which bits can we be useful for?
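Chris's question about losing power at every conversion has a simple model behind it: the overall efficiency of a power path is the product of its per-stage efficiencies, so every step removed pays off multiplicatively. A toy sketch, with the stage counts and the 97% per-stage figure as illustrative assumptions, not measurements of any real UPS topology:

```python
import math

# The efficiency of a chained power path is the product of the
# per-stage efficiencies, so each rectification/conversion step
# removed pays off multiplicatively. Stage counts and the 97%
# per-stage figure are illustrative assumptions only.

def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """Overall efficiency of a series of conversion stages."""
    return math.prod(stage_efficiencies)

# e.g. transformer -> rectifier -> battery path -> inverter
low_voltage_path = chain_efficiency([0.97, 0.97, 0.97, 0.97])
# hypothetical medium-voltage UPS with one conversion step removed
medium_voltage_path = chain_efficiency([0.97, 0.97, 0.97])

print(f"4 stages: {low_voltage_path:.1%}")    # 88.5%
print(f"3 stages: {medium_voltage_path:.1%}")  # 91.3%
```

A few percentage points per stage sounds small, but multiplied across every watt a facility draws, around the clock, it is a material loss; that is the economic case for moving availability components to a higher voltage level.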
Rather than the approach you sometimes see, where people basically say, "well, we're not even gonna tell you who the company is, but we're gonna use all your power and use all your water." That approach's days are probably numbered; it's not a very good strategy. It makes more sense to have a much more neighborly approach, and these are maybe new skills that need to be developed inside the industry then.
Karl Rabe: Absolutely correct. You need an open collaboration approach to that, and we're trying to be a bit of an example there. And you had a good point in there which we usually don't have a lot of time to expand on, but I think a podcast is a good format for it. You asked, where do you get the ideas, or what's the guiding star? I was fortunate to be an exchange worker on a farm in Canada, and they introduced me to the idea of holistic management, which is basically a decision-making framework requiring decisions to be financially, socially, and environmentally viable. Those three bases are necessary to create sustainable, holistic decisions; they need to be short- and long-term viable. And that has been my guiding star as an entrepreneur. There are a lot of startups, especially in Germany, we had those Berlin startups who all came from business school, and all of their ideas worked in an Excel sheet, always cutting out the social perspective, you know? That's the opposite of what we are trying to do. And this framework was developed by a farmer who first applied it to grass management and cattle farming. It is wildly interesting what he's able to do.
He's basically stopping desertification and reversing its effects in subtropical, semi-arid areas. Yeah. So we'll definitely put that in the notes. It's a TED Talk from Allan Savory, who I think is still alive; he must be 90 now. It's fascinating. But that was a guiding star. And to promote our ideas, a lot of our designs we put on YouTube, but we also put the files up. The racks, you know, you can download the CAD files. And we believe in creating them with open source tools; especially in engineering, we only recently got really powerful open source tools for CAD and for single-line diagrams, so we can share the source files as well. And that is how we believe open collaboration and openness help to build trust,
Chris Adams: Ah,
Karl Rabe: to build with speed, and to really work together, you know? That's what we see mirrored in the Open Compute Project. For the challenges that we face as humanity, I believe that only this open approach, especially an open source, open hardware, open data framework, can help us.
Chris Adams: All right. Okay, we're coming up to time, and you did allude to it a few times, so I just wanted to provide a bit of space to let you talk about that before we finish up. You mentioned a few times that a bunch of designs for the racks and things are online and available. Did you say they're on YouTube, that people can see videos of this, or can they download something to mess around with themselves? Maybe you could expand on that a little, because I haven't come across that before, and...
Karl Rabe: Okay, sure, sure.
So, yeah, when we started, we designed everything and we put it out. Shamefully, we still need to do the push to GitLab and GitHub; right now we put those models on a CAD platform called GrabCAD.
Chris Adams: Mm-hmm.
Karl Rabe: And for us, it's not only our own conviction to open source this and build trust; it's also our biggest and easiest marketing tool. Create a model, publish it, put up a video. We are a bit behind; we have a lot of new and great ideas and things to share. But that's how we approach it: we come up with an idea, put it out there, and also make ourselves criticizable. We are the only ones comfortably saying, look, we have the best data centers in the world, because you can go download and fact-check our ideas, and if you have something against them, just give us feedback. And we are open to change. This way forward also helps us approach the biggest companies in the world. They really like this open approach, and they're happy to take the files and the models and work on them.
Chris Adams: So you basically have models, like, this is a model of...
Karl Rabe: Our rack, you know, this is our modular data center, these are the ideas behind it. And so that's how we are moving this forward. People can approach it, they can download it, they can see if it fits, they can make suggestions.
Chris Adams: And see if it's tall enough for the door, and all the practical things.
Karl Rabe: Yeah, all those things. And see, okay, we have a smaller data center, oh, the base design doesn't fit in this setup, or we need to change where we place the dry coolers or something like that. And that gives really good feedback and sparks discussions.
Chris Adams: Yeah, I haven't heard about that before.
All right. Well, Karl, thank you so much. This has been a lot of fun. We've come up to time, and I really enjoyed this tour through all the stuff that happens below the software stack for engineers like us. If someone does want to look at this, or learn about this, or maybe check out any of the models themselves, where should they look? Where do people find you online, or any other projects you're working on?
Karl Rabe: So the best place, technically, is LinkedIn. That's our strongest platform, to be honest; we are very active there and publish most things there. The webpage is still under construction, though people can already understand what we do from going to it. LinkedIn is great, and looking at what we do at the Open Compute Project is also often very useful. But yeah, via Google it's very easy to find us on LinkedIn and to reach us.
Chris Adams: So Karl Rabe on LinkedIn, Wooden Data Center, there aren't that many other companies called Wooden Data Center. And then for any of the Open Compute Project stuff, that's the other place to look at where you're working, 'cause you're doing the open compute modular data center stuff. Those are the ones, yeah?
Karl Rabe: Yeah. Correct.
Chris Adams: Brilliant. Karl, thank you so much for this. This has been loads of fun, and I hope our listeners have followed along as well, to see all the options available to them.
Karl Rabe: It was a pleasure. Thanks so much.
Chris Adams: Likewise, Karl. And I hope the wind turbines treat you well where you're staying. All right, take care, mate.
Karl Rabe: Yeah. Thank you. Bye-bye. Cheers.
Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts.
And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode. 
Apr 10, 2025 • 46min

GreenOps with Greenpixie

Host Chris Adams sits down with James Hall, Head of GreenOps at Greenpixie, to explore the evolving discipline of GreenOps—applying operational practices to reduce the environmental impact of cloud computing. They discuss how Greenpixie helps organizations make informed sustainability decisions using certified carbon data, the challenges of scaling cloud carbon measurement, and why transparency and relevance are just as crucial as accuracy. They also discuss using financial cost as a proxy for carbon, the need for standardization through initiatives like FOCUS, and growing interest in water usage metrics.Learn more about our people:Chris Adams: LinkedIn | GitHub | WebsiteJames Hall: LinkedIn Greenpixie: WebsiteFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterNews:The intersection of FinOps and cloud sustainability [16:01]What is FOCUS? Understand the FinOps Open Cost and Usage Specification [22:15]April 2024 Summit: Google Cloud Next Recap, Multi-cloud Billing with FOCUS, FinOps X Updates [31:31]Resources:Cloud Carbon Footprint [00:46]Greenops - Wikipedia [02:18]Software Carbon Intensity (SCI) Specification [05:12]GHG Protocol [05:20]Energy Scores for AI Models | Hugging Face [44:30]What is GreenOps - Newsletter | Greenpixie [44:42]Making Cloud Sustainability Actionable with FinOps Fueling Sustainability Goals at Mastercard in Every Stage of FinOps If you enjoyed this episode then please either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github and LinkedIn!TRANSCRIPT BELOW:James Hall: We want to get the carbon data in front of the right people so they can put climate impact as part of the decision making process. Because ultimately, data in and of itself is a catalyst for change. Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. 
In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.I'm your host, Chris Adams. Hello and welcome to Environment Variables where we explore the developing world of sustainable software development. We kicked off this podcast more than two years ago with a discussion about cloud carbon calculators and the open source tool, Cloud Carbon Footprint, and Amazon's cloud carbon calculator.And since then, the term GreenOps has become a term of art in cloud computing circles when we talk about reducing the environmental impact of cloud computing. But what is GreenOps in the first place? With me today is James Hall, the head of GreenOps at Greenpixie, the cloud carbon computing startup, to help me shed some light on what this term actually means and what it's like to use GreenOps in the trenches. James, we have spoken about this episode as a bit of an intro and I'm wondering if I can ask you a little bit about where this term came from in the first place and how you ended up as the de facto head of GreenOps in your current gig.Because I've never spoken to a head of GreenOps before, so yeah, maybe I should ask you that.James Hall: Yeah, well, I've been with Greenpixie right from the start, and we weren't really using the term GreenOps when we originally started. It was cloud sustainability. It was about, you know, changing regions to optimize cloud and right sizing. We didn't know about the FinOps industry either. When we first started, we just knew there was a cloud waste problem and we wanted to do something about it.You know, luckily when it comes to cloud, there is a big overlap between what saves costs and what saves carbon. But I think the term GreenOps has existed before we started in the industry. 
I think it, yeah, actually originally, if you go to Wikipedia, GreenOps, it's actually to do with arthropods and trilobites from a couple million years ago, funnily enough. I'm not sure when it started becoming, you know, green operations.But, yeah, it originally had a connotation of like data centers and IT and devices, and I think cloud GreenOps, where Greenpixie specializes, is more of a recent thing. Because, well, it is about how do you get the right data in front of the right people so they can start making better decisions, ultimately.And that's kind of what GreenOps means to me. So Greenpixie are a GreenOps data company. We're not here to make decisions for you. We are not a consultancy. We want to get the carbon data in front of the right people so they can put climate impact as part of the decision making process. Because ultimately, data in and of itself is a catalyst for change.You know, whether you use this data to reduce carbon or you choose to ignore it, you know, that's up to the organization. But it's all about being more informed, ignoring or, you know, changing your strategy around the carbon data.Chris Adams: Cool. Thank you for that, James. You mentioning Wikipedia and GreenOps being all about trilobites and arthropods makes me realize we definitely should add that to the show notes, and that's the thing I'll quickly just do, because I forgot to do the usual intro, folks. Yeah, my name's Chris Adams.I am the technology and policy director at the Green Web Foundation, and I'm also the chair of the policy working group inside the Green Software Foundation. All the things that James and I'll be talking about, we'll do our best to judiciously add show notes, so you too can look up the etymology of GreenOps and find out all about arthropods and trilobites and, probably, a lot more cloud computing as well actually. Okay. Thank you for that, James. 
So you spoke a little and you did a really nice job of actually introducing what Greenpixie does. 'Cause that was something I should have asked you earlier as well. So I have some experience using these tools, like Cloud Carbon Footprint and so on, to estimate the environmental impact of digital services. Right. And a lot of the time these things use billing data. So there are tools out there that already do this stuff. But one thing that I saw that sets Greenpixie apart from some other tools as well was the certification process, the fact that you folks have, I think, an ISO 14064 certification.Now, not all of us read over ISO standards for fun, so can you maybe explain why that matters, what it actually changes, or even what that certification means? 'Cause it sounds kind of impressive and exciting, but I'm not quite sure, and I know there are other standards floating around, like the Software Carbon Intensity standard, for example.Like yeah, maybe you could just provide an intro, then see how that might be different, for example.James Hall: Yeah, so ISO 14064 is a kind of set of standards and instructions on how to calculate a carbon number, essentially based on the Greenhouse Gas Protocol. So the process of getting that verification is, you know, you have official auditors who are certified to give out these certifications, and ultimately they go through all your processes, all your sources, all the inputs of your data, and kind of verify that the outputs and the inputsmake sense. You know, do they align with what the Greenhouse Gas Protocol tells you to do? And, you know, it's a year-long process as they get to know absolutely everything about your business and processes; you really gotta show them under the hood. But from a customer perspective, it proves thatthe methodology you're using is very rigorous and it gives them confidence that they can use yours. 
I think if a company that produces carbon data has an ISO badge, then you can probably be sure that when you put this data in your ESG reports or use it to make decisions, the auditors will also agree with it.'Cause the auditors on the other side, you know, your assurers from EY and PwC, they'll be using the same set of guidance basically. So it's kind of like getting ahead of the auditing process, in the same way a security ISO helps the chief security officer who needs to, you know, check a new vendor they're about to procure from.If you've got the ISO already, they know you meet their standards for security, and it saves them the job of going and looking through every single data processing agreement that you have.Chris Adams: Gotcha. Okay. So there's a few different ways that you can kind of establish trust. And so one of the options is have everything entirely open, like say Cloud Carbon Footprint or OpenCost has a bunch of stuff in the open. There's also various other approaches, like we maintain a library called CO2.js, where we try to share our methodologies there, and then one of the other options is certification. That's another source of trust. I've gotta ask, is this common? Are there other tools that have this? 'Cause when I think about some of the big cloud calculators, let's say I'm using one of the big three cloud providers.Do you know if they actually have the same certification today, or is that a thing I should be looking for, or I should be asking about, if I'm relying on the numbers that I'm seeing from providers like this?James Hall: Yeah, they actually don't. Well, technically, Azure. Azure's tool did get one in 2020, but you need to get them renewed and re-audited as part of the process. So that one's kind of becoming invalid. 
And I'm not sure AWS or Google Cloud have actually tried, to be honest, but it's quite a funny thought that, you know, arguably, because of this ISO, the data we give you on GCP and AWS is more accurate, or at least more reliable, than the data that comes directly out of the cloud providers.Chris Adams: Okay. Alright. Let's make sure we don't get sued. So I'm just gonna stop there before we go any further. But that's like one of the things that it provides. Essentially it's an external auditor who's looked through this stuff. So rather than being entirely open, that's one of the other mechanisms that you have.Okay, cool. So maybe we can talk a little bit more about open source. 'Cause I actually first found out about Greenpixie a few years ago when the Green Software Foundation sent me to Egypt, for COP 27, to try and talk to people about green software. And I won't lie, I mostly got blank looks from most people.People tend to talk about sustainability of tech or sustainability via tech, and most of the time I see people conflating the two rather than realizing that we're talking about the sustainability of the technology itself, not just what it's good for, for example. And I think one of your colleagues, Rory, was telling me a bit about how Greenpixie, when you first started out, looked at some tools like Cloud Carbon Footprint as maybe a starting point, but you've ended up having to make various changes to overcome various technical challenges when you scale the use up to, well, basically larger clients and things like that. Could you maybe talk a little bit about some of the challenges you end up facing when you're trying to implement GreenOps like this? Because it's not something that I have direct experience of myself. 
And it's also a thing that I think a lot of people do reach for some open source tools for, and they're not quite sure why you might use one over the other, or what kind of problems they have to deal with when you start processing those levels of billing and usage data and stuff like that.James Hall: I think with cloud sustainability methodologies, the two main issues are things like performance and the data volume, and then also the maintenance of it. 'Cause the very nature of cloud is, you know, huge data sets that change rapidly. You know, they get updated on the hour, and then you've also got the cloud providers always releasing new services, new instance types, things like that.So, I mean, your average enterprise with like a hundred million spend or something? Those line items of usage data, if you go down to the hour, will be billions of rows and terabytes of data. And that is not trivial to process. You know, a lot of the tooling at the moment, including Cloud Carbon Footprint, will try to, you know, use a bunch of SQL queries to truncate it, you know, make it go up to monthly.So you kind of take out the rows by, you know, a factor of 24 times 30 or whatever that is. It's about 720, something like that. And they'll remove certain fields in the usage data that are so unique that when you start removing those and truncating it, you're really reducing the size of the files, but you're also losing a lot of that granularity.'Cause ultimately this billing data is to be used by engineers and FinOps people. They use all these fields. 
So when you start removing fields because you can't handle the data, you're losing a lot of the familiarity of the data and a lot of the usability for the people who need to use it to make decisions.So one of the big challenges is how do you make a processor that can easily handle billions of line items without, you know, falling over. And with CCF, one of the issues was really the performance when you start trying to apply it to big data sets. And then on the other side is the maintenance.You know, arguably it's probably not that difficult to make a methodology at a point in time, but, you know, over the six months it takes you to create it, it's way out of date. You know, they've released a hundred new instance types across the three providers. There's a new type of storage, there are brand new services, there's new AI models out there.And so now, like, Greenpixie's main job is how do we make sure we have more coverage of all the SKUs that come out, and we can deliver the data faster, and customers have more choices of how to ingest it. So if you give customers enough choice and you give it to them quick enough and it's, you know, covering all of their services, then, you know, the lack of those three things is really what's stopping people from doing GreenOps, I think.Chris Adams: Ah, okay, so one of the things you mentioned was just the volume, the fact that you've got, you know, hours multiplied by thousands of computers. That's a lot of data. And then there's the metrics issue: if you wanna provide a simple metric, then you end up losing a lot of data.So that's one of the things you spoke about. And the other one was the models themselves: there's a natural cost associated with having to maintain these models. 
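The truncation trade-off James describes can be sketched in a few lines of Python. The field names and figures here are hypothetical, not a real cloud billing schema; the point is just that collapsing hourly rows to monthly only shrinks the data if the high-cardinality fields are dropped too.

```python
from collections import defaultdict

# Hypothetical hourly billing line items -- field names are illustrative,
# not a real cloud provider schema.
hourly = [
    {"resource_id": "i-abc123", "service": "compute", "hour": "2025-04-01T00", "usage": 1.0},
    {"resource_id": "i-abc123", "service": "compute", "hour": "2025-04-01T01", "usage": 1.0},
    {"resource_id": "i-def456", "service": "compute", "hour": "2025-04-01T00", "usage": 1.0},
    {"resource_id": "i-def456", "service": "compute", "hour": "2025-04-01T01", "usage": 1.0},
]

# Truncating to monthly cuts row counts by roughly 24 * 30 = 720, but only
# because unique fields like resource_id get dropped -- which is exactly the
# granularity engineers and FinOps people rely on to act on the numbers.
monthly = defaultdict(float)
for row in hourly:
    month = row["hour"][:7]  # "2025-04"
    monthly[(row["service"], month)] += row["usage"]

print(dict(monthly))  # {('compute', '2025-04'): 4.0} -- one row instead of four
print(24 * 30)        # the ~720x reduction factor mentioned in the episode
```

Four hourly rows collapse into one monthly row, and there is no longer any way to attribute the usage back to an individual resource.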
And as far as I'm aware, there aren't, I mean, are there any kind of open source models so that you can say, well, this is what the figures probably would be for an Amazon EC2, you know, 6XL instance, for example? That's the stuff you're talking about when you say the models are hard to keep up to date, and you have to do that internally inside the organization. Is that it?James Hall: Yes, we've got a team dedicated to doing that. But ultimately, like, there will always be assumptions in there. 'Cause some of these chip sets you actually can't even get your hands on. So, you know, if Amazon release a new instance type that uses an Intel Xeon 7850C that is not commercially available, how do you get your hands on an Intel Xeon 7850B that is commercially available? And you're like, okay, these six things are similar in terms of performance and hardware, so we're using this as the proxy for the M5 large or whatever it is. And then once you've got the power consumption of those instance types,then you can start saying, okay, this is how we're mapping instances to real life hardware. And then that's when you've gotta start being really transparent about the assumptions, because ultimately there's no right answer. All you can do is tell people, this is how we do it. Do you like it?And, you know, over the four years we've been doing this, there's been a lot of trial and error. Actually, right at the start, one of the questions was, what are my credentials? How did I end up as head of GreenOps? I wouldn't have said four years ago I had any credentials to be, you know, a head of GreenOps.So there was a while when I was the only head of GreenOps in the world, according to Sales Navigator. Why me? But I think it's like, you know, they say if you do 10,000 hours of anything, you kind of, you become good at it. 
And I wouldn't say I'm a master by any means, but I've made more mistakes and probably tried more things than anybody else over the four years.So, you know, just from the war stories, I've seen what works. I've seen what doesn't work. And I think that's the kind of experience people wanna trust, and why Greenpixie made me the head of GreenOps.Chris Adams: Okay. All right. Thanks for that, James. So maybe this is actually a nice segue to talk about a common starting point that lots of people do actually have. So over the last few years, we've seen people move from not just talking about DevOps, but talking about FinOps.This idea that you might apply some kind of financial thinking to how you purchase and consume, say, cloud services, for example. And this tends to, as far as I understand, kind of nudge people towards things like serverless or certain kinds of ways of buying it, in a way which is, I guess, very much influenced by the financial sector.And you said before that there's some overlap, but it's not total overlap; you can't just take a bunch of FinOps practices and think it's gonna actually help here. Can we explore that a bit and maybe talk a little bit about what folks get wrong when they try to map this straight across as if it's the same thing?Please.James Hall: Yeah, so one of the big issues is cost proxies, actually. Yeah, a lot of FinOps as well is, how do you fix, or how do you optimize from a cost perspective, what already exists? You know, you've already emitted it. How do you now make it cheaper? 
The first low hanging fruit that a finance guy trying to reduce their cloud spend would do is things like, you know, buying the instances up front.So you've paid for the full year and now you've been given a million hours of compute.That might cut your bill in half, but if anything it would drive your usage up, you know, you've got a million hours, you are gonna use them.Chris Adams: So you have to commit to spending that. You're like, "oh, great. I have the cost, but now I definitely need to use these." Right?James Hall: Yeah, exactly. And like, yeah, you say commitments. Like, I promise AWS I'm gonna spend $2 million, so I'm gonna do whatever it takes to spend that $2 million. If I don't spend $2 million, I'll actually have to pay the difference. So if I only do a million in compute, I'm gonna have to pay a million and get nothing for it.So I'm gonna do as much compute as humanly possible to get the most bang for my buck. And I think that's where a lot of the issues are with using cost. Like, if you tell someone something's cheap, they're not gonna use less, they're gonna be like, "this looks like a great deal." I'm guilty of it myself. I'll buy clothes I don't need 'cause they're on a clearance sale.You know? And that's kind of how cloud operates. But when you get a good methodology that really looks at the usage and the nuances between chip sets and storage tiers, you know, there is a big overlap: you know, going down from a 2X large to a large may halve your bill, and it will halve your carbon. And that's the kind of thing you need to be looking out for. You need a really nuanced methodology that really looks at the usage more than just trying to use costs.Chris Adams: Okay, so that's one place where it's not so helpful. 
And you said a little bit like there are some places where it does help, like literally just having the size of the machine is one of the things you might actually do. Now I've gotta ask, you spoke before about region shifting, something you mentioned before.Is there any incentive to do anything like that when you are looking at buying stuff in this way? Or is there any kind of, what's the word I'm after, opinion that FinOps or GreenOps has around things like that? Because as far as I can tell, there is very rarely a financial incentive to do anything like that.If anything, it usually costs more to run something in, say, Switzerland, compared to running in, say, AWS East, for example. I mean, is that something you've seen, any signs of that, where people nudge people towards the greener choice rather than just showing like a green logo on a dashboard, for example?James Hall: Well, I mean, this is where GreenOps comes into its own really, because I could tell everyone to move to France or Switzerland, but when you come to each individual cloud environment, they will have policies and approved regions and data sovereignty things, and this is why all you can do is give them the data and then let the enterprise make the decision. But ultimately, like, we are working with a retailer who had a failover for storage and compute, but they had it all failing over to one of the really dirty regions. Like, I think they were based in the UK and they failed over to Germany, but they did have Sweden as one of the options for failover, and they just weren't using it.There's no particular reason they weren't using it, but they had just chosen Germany at one point. So why not just make that failover option Sweden? You know, if it's within the limits of your policies and what you're allowed to do. 
But the region switching isn't completely trivial, unfortunately, in the cloud.So, you know, you wouldn't lift and shift your entire environment to another place, because there are performance and cost implications. But again, it's like, how do you add sustainability impact to the trade-off decision? You know, if increasing your cost 10% is worth a 90% carbon reduction for you, great.Please do it, if you know the hours of work are worth it for you. But if cost is the priority, where is the middle ground where you can be like, okay, these two regions are the same, they have the same latency, but this one's 20% less carbon. That is the reason I'm gonna move over there. So it's all about, you can do the cost benefit analysis quite easily, and many people do.But how do you enable them to do a carbon benefit analysis as well? And then once they've got all the data in front of them, just start making more informed decisions. And that's why I think the data is more important than, you know, necessarily telling them what the processes are, giving them the, here's the Ultimate Guide to GreenOps. You know, data's just a catalyst for decisions, and you just need to give them trustworthy data. And then how many use cases does trustworthy data have? You know, how long is a piece of string? I've seen many, but every time there's a new customer, there's new use cases.Chris Adams: Okay, cool. Thank you for that. So, one thing that we spoke about before in this kind of pre-call was the fact that sustainability is becoming somewhat more mainstream. And now, within the kind of FinOps Foundation, the people who are doing stuff for FinOps are starting to kind of wake up to this and trying to figure out how to incorporate some of this into the way they might kind of operate a team or a cloud or anything like that. 
I believe you told me about a thing called FOCUS, which is something like a standardization project across all the FinOps tooling, and now there's a sustainability working group, particularly inside this FOCUS group. For people who are not familiar with this, could you tell me what FOCUS is and what this sustainability working group is working on?You know, 'cause working groups are supposed to work on stuff, right?James Hall: Yeah, so exactly as you said, FOCUS is a standardization of billing data. So, you know, when you get your AWS bill, your Azure bill, they have similar data in them. But they will be completely different column names. Completely different granularities, different column sizes. And so if you're trying to make a master report where you can look at all of your cloud and all of your SaaS bills, you need to do all sorts of data transformations to try and make the columns look the same.You know, maybe AWS has a column that goes one step more granular than Azure, or you're trying to, you know, do a bill on all your compute, but Azure calls it virtual machines. AWS calls it EC2. So you either need to go and categorize them all yourself to make a, you know, a master category that lets you group by all these different things, or, you know, thankfully FOCUS have gone and done that themselves. And it started off as like a Python script you could run on your own data set to do the transformation for you, but slowly more cloud providers are adopting the FOCUS framework, which means, you know, when you're exporting your billing data, you can ask AWS: give me the original, or give me a FOCUS one. So they start giving you the data in a way where it's like, I can easily combine all my data sets. And the reason this is super interesting for carbon is because, you know, carbon is a currency in many ways, in the fact that, Chris Adams: there's a price on it in Europe. There's a price on it in the UK. 
Yeah. James Hall: There's a price on it, but also, like, the way Azure will present you their carbon data could be, you know, the equivalent of yen; AWS could be the equivalent of dollars.They're all saying CO2e, so you might think they're equivalent, but actually they're almost completely different currencies. So this effort of standardization is, how do we bring it back? Maybe, like, don't give us the CO2e, but how do we go a few steps before that point, and how do we start getting similar numbers?So when we wanna make a master report for all the cloud providers, it's apples to apples, not apples to oranges. You know, how do we standardize the data sets to make the cross cloud reporting more meaningful for FinOps people?Chris Adams: Ah, I see. Okay. So I didn't realize that the FOCUS stuff is actually listing, I guess, let's call them primitives, like, you know, compute and storage. Like, they all have different names for that stuff, but FOCUS has a kind of shared idea for what the concept of cloud compute, a virtual machine, might be, and likewise for storage.So that's the thing you're trying to attach a carbon value to in these cases, so you can make some meaningful judgment, or so you can present that information to people. James Hall: Yeah, it's about making the reports the same, but also, how do you make the source of the numbers more similar? 'Cause currently, Azure may say a hundred tons in their dashboard. AWS may say one ton in their dashboard. You know, the spend and the real carbon could be identical, but it's just the formula behind it is so vastly different that you're coming out with two different numbers.Chris Adams: I see. I think I know what you're referring to at this point here. Some places they might share a number, which is what we refer to as a location based figure. 
So that's, like, what was actually emitted on the ground, based on the power intensity from the grid in a particular part of the world.And then a market based figure might be quite a bit lower, 'cause you said, well, we've purchased all this green energy, so therefore we're gonna kind of deduct that from what the figure should be. And that's how we'd have a figure of like one versus 100. But if you compare these two together, they're gonna look totally different.And like you said, it's not apples with apples; it's apples with something totally different. Okay. That is helpful.James Hall: It gets a lot more confusing than that, 'cause it's not just market and location based. Like, you could have two location based numbers, but Azure are using the grid carbon intensity annual average from 2020 because that's what they've got approved. AWS may be using, you know, an Our World in Data 2023 number, you know, and those are just two different sources for grid intensity.And then what categories are they including? Are they including Scope 3 categories? How many of the Scope 2 categories are they including? So when you've got like a hundred different inputs that go into a CO2 number, unless all 100 are the same, you do not have a meaningful comparison between the two.Even location/market based is just one aspect of what goes into the CO2 number. And then where do they get the kilowatt hour numbers from? Is it a literal telemetry device? Or are they using a spend based proxy on their side? Because that's not completely alien to cloud providers, to ultimately rely on spend at the end of the day.So does Azure use spend, or does AWS use spend? What type of spend are they using? 
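The one-versus-a-hundred gap described here falls straight out of the arithmetic. A toy sketch, with every number invented purely for illustration, shows how identical electricity use can produce wildly different CO2e figures depending on whether renewable purchases are deducted:

```python
# Toy numbers only -- the kWh figure and coefficient are made up.
kwh = 250_000.0                     # electricity consumed
grid_intensity = 0.4                # kg CO2e per kWh, annual grid average
renewable_matched_kwh = 247_500.0   # covered by purchased green energy

# Location based: everything burned on the local grid counts.
location_based = kwh * grid_intensity

# Market based: only the residual, unmatched electricity counts.
market_based = (kwh - renewable_matched_kwh) * grid_intensity

print(location_based)  # 100000.0 kg -- the "100" in the one-vs-100 example
print(market_based)    # 1000.0 kg  -- the "1"
```

Same workload, same electricity, a 100x difference in the reported number, which is why comparing a location-based figure against a market-based one is apples to something else entirely.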
And that's where you need the transparency as well, because if you don't understand where the numbers come from, it could be the most accurate number in the world, but if they don't tell you everything that went into it, how are you meant to know?Chris Adams: I see. Okay. That's really interesting. 'Cause the Green Web Foundation, the organization I'm part of, has been following this. There's a UK government group called the Government Digital Sustainability Alliance, and they've been doing these really fascinating lunch and learns, andone thing that showed up was the UK government basically saying, look, this is the carbon footprint, you know, on a kind of per department level. Like, this is what the Ministry of Justice is, or this is what, say, the Ministry of Defense might be, for example. And that helps explain why you had figures where you had a bunch of people saying the carbon footprint of all these data centers is really high.And then there were people saying, well, compared to this, cloud looks great, because the figures for cloud are way lower. But the thing people had to caveat that with, they basically said, well, we know that this makes cloud look way more efficient here, and it looks like it's much lower carbon, but because we've only got this final kind of market based figure, we know that it's not a like for like comparison. But until we have that information, this is the best we actually have. And this is an organization which actually has legally binding targets. They have to reduce emissions by a certain figure, by a certain date. I can see why you would need this transparency, because it seems very difficult to see how you could meaningfully track your progress towards a target if you don't have access to that.Right?James Hall: Yeah. Well, I always like to use the currency conversion analogy. 
If you had a dashboard where AWS is all in dollars, and Azure or your on-premise is in yen — there are 149 yen in 1 dollar. But if you didn't know this one's in yen and this one's in dollars, you'd be like, "this one's 149 times cheaper. Why aren't we going all in on this one?" But actually it's just different currencies, and they are the same at the end of the day. Under the hood, they're the same. But, you know, the way they've turned it into an accounting exercise has kind of muddied the water, which is why I love electricity metrics more. They're almost like the, you know, non-fungible token of data centers and cloud, 'cause you can use electricity to calculate location-based, you can use it to calculate market-based, and you can use electricity to calculate water and cooling metrics and things like that. So if you can get the electricity, then you're well on your way to meaningful comparisons. Chris Adams: And that's the one that everyone guards very jealously a lot of the time, right? James Hall: Exactly. Yeah. Well, that's directly related to your cost of running the business, and that is the proprietary information. Chris Adams: I see. Okay. Alright, so we've done a bit of a deep dive into the GHG Protocol, scope 3, supply chain emissions and things like that. If I may, you referenced this idea of war stories before, and it's surprisingly hard to find people with real-world stories about making meaningful changes to cloud emissions in the world. Do you have any stories that you've come across in the last four years that you think are particularly worth sharing, or that might catch people's attention, for example? There's gotta be something that you found that you're allowed to talk about, right? James Hall: Yeah, I mean, MasterCard, one of our Lighthouse customers — they've spoken about the work we're doing with them a lot at various FinOps conferences and things like that.
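James's currency analogy and his point about electricity as the common basis can be sketched as a rough calculation. All the function names, coefficients, and figures below are hypothetical, purely to illustrate why two reported carbon numbers aren't comparable until you know which method produced them:

```python
# Hypothetical illustration: electricity (kWh) as the "common currency" of
# cloud sustainability metrics. All coefficients here are made-up examples,
# not real provider figures.

def location_based_co2(kwh: float, grid_intensity_g_per_kwh: float) -> float:
    """Location-based: energy times the local grid's carbon intensity."""
    return kwh * grid_intensity_g_per_kwh / 1000  # kg CO2e

def market_based_co2(kwh: float, grid_intensity_g_per_kwh: float,
                     renewable_fraction: float) -> float:
    """Market-based: energy covered by purchased renewables is deducted."""
    unmatched_kwh = kwh * (1 - renewable_fraction)
    return unmatched_kwh * grid_intensity_g_per_kwh / 1000  # kg CO2e

kwh = 10_000        # energy used by a workload (hypothetical)
intensity = 400     # gCO2e per kWh for the local grid (hypothetical)

loc = location_based_co2(kwh, intensity)        # 4000 kg
mkt = market_based_co2(kwh, intensity, 0.99)    # roughly 40 kg

# Same workload, same electricity, yet a ~100x gap between the two figures.
# Comparing one provider's market-based number against another's
# location-based number is comparing yen with dollars.
print(loc, mkt)
```

The point of the sketch is that the kWh figure is the stable input: once you have it, any of the downstream metrics can be recomputed consistently across providers.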
But they're very advanced in their GreenOps goals. They have quite ambitious net zero goals and they take their IT sustainability very seriously. Yeah, when we first spoke to them, ultimately the name of the game was to get the cloud measurement up to the standard of their on-premise, 'cause their on-premise was very advanced: daily electricity metrics with pre-approved CO2 carbon coefficients that you multiply the electricity with. But they were having no luck with cloud, essentially, even though they spend a lot in the cloud. And honestly, rather than going for just the double wins, which is kind of what most people wanna do — where it's like, I'm gonna use this as a mechanism to save more money — they honestly wanted to do no more harm and actually start making decisions purely for the sustainability benefits. So we went in there with the FinOps team, worked on their FinOps reporting, and combined it with their FinOps recommendations and the accountability in their tool of choice. But then they started having more use cases around how to use our carbon data, not just our electricity data from the cloud, because we have a big list of hourly carbon coefficients. They wanna use that data to start choosing where they put their on-premise data centers as well, really making the sustainability impact a huge factor in where they place their regions, which I think is a very interesting one, 'cause we had only really focused on how we help people in their public cloud. But they wanted to align their on-premise reporting with their cloud reporting and ultimately start making decisions like: okay, I know I need to put a data center in this country. Do I go AWS, Azure, or on-prem for this one? What is the sustainability impact of all three? And, you know, how do I weigh that against the cost as well?
And it's kind of like the gold standard of making sustainability a big part of the trade-off decision, 'cause they would not go somewhere, even if it saved them 50% of their cost, if it doubled their carbon. They're way beyond that point. So they're a super interesting one. And even in the public sector as well, the departments we're working with are relatively new to FinOps, and they didn't really have a proper accountability structure for their cloud bill. But when you start adding carbon data to it, you're getting a lot more eyes onto your bills and your usage. And ultimately we helped them create more of a FinOps function just with the carbon data, 'cause people typically find carbon data more interesting than spend data. But if you put them on the same dashboard, now it's all about how you market efficient usage. And I think that's one of the main use cases of GreenOps: to get more eyes on your usage, 'cause the more eyes you've got piling in, the more use cases you find. Chris Adams: Okay. Alright, so you spoke about carbon as one of the main things that people care about, right? And we're starting to develop more of an awareness that some data centers might themselves be exposed to climate risks — because they were built on a floodplain, for example. And you don't want a data center on a floodplain in the middle of a flood, right? But there's also the flip side: that's too much water, but there are cases where people worry about not enough water, for example. I mean, is that something that you've seen people talk about more of? Because there does seem to be a growing awareness about the water footprint of digital infrastructure as well now.
Is that something you're seeing people track, or even try to manage right now? James Hall: Well, we find that water metrics are very popular in the US, more so than the CO2 metrics, and I think it's because people there feel the pain of a lack of water. You know, you've got the Flint water crisis. In the UK, we've got an energy crisis stopping people from building homes. So what you really wanna do is enable the person who's trying to use this data to drive efficiency to tell as many different stories as possible. You know, the more metrics and the more choice they have of what to present to the engineers and what to present to leadership, the better outcomes they're gonna get. Water is a key one because data centers and electricity production use tons of water. And the last thing you wanna do is go to a water-scarce area and put a load of servers in there that are gonna guzzle up loads of water. One, because if that water runs out, your whole data center's gonna collapse, so you're exposing yourself to ESG risk. And also, you know, it doesn't seem like the right thing to do. There are people trying to live there who need to use that water to live, but you've got data centers sucking that water out. So can't you use this data to, again, drive different decisions? It could invoke an emotional response that helps people drive different decisions or build more efficiently. And if you're saving cost at the end of that as well, then everyone's happy. Chris Adams: So maybe this is one thing we can drill into before we move on to the next question and wrap up. People have had incentives to track cost and cash for obvious reasons. Carbon — as you're seeing, more and more laws actually have opinions about carbon footprint and being able to report it, so people are getting a bit more aware of it. Like, we've spoken about things like location-based figures and market-based figures.
And we have previous episodes where we've explored and actually helped people define those terms, but I feel comfortable using relatively technical terminology now because I think there is a growing sophistication, at least in certain pockets. Water still seems to be a really new one, and it seems to be very difficult to actually find access to meaningful numbers. Even just the idea of water in the first place: when you hear figures about water being used, that might not be the same as water going away so it can't be used. It might be returned in a way that is maybe more difficult to use, or sometimes it's cleaner, sometimes it's dirtier, for example. It seems to be poorly understood despite being quite an emotional topic. What's your experience been like when people try to engage with this, or when you try to find some of the numbers to present to people in dashboards and things? James Hall: Yeah. So, surprisingly, all the cloud providers are able to produce factors. I think it's actually a requirement that when you have a data center, you know what the power usage effectiveness is, so what the overhead electricity is, and you know what the water usage effectiveness is. So, you know, what is your cooling system, how much water does it use, how much does it withdraw, and then how much does it actually consume? The difference between withdrawal and consumption is: withdrawal is you take clean water out and you're able to put clean water back relatively quickly. Consumption is you've either poisoned the water — you know, diluted it with some kind of coolant that's not fit for human consumption — or you've now evaporated it. And there is some confusion sometimes around "it's evaporated, but it'll rain. It'll rain back down."
But, you know, a lake's evaporation and redeposition process is like a delicate balance. Maybe it evaporates 10,000 liters a day and rains 10,000 liters a day back, after like a week of it going into the clouds and coming back down the mountain nearby. If you then have a data center next to it that accelerates the evaporation by 30,000 liters a day, you really upset the delicate balance that's in there. And, you know, you talk about whether these things are sustainable. Financial sustainability is: do you have enough money and income to last a long time, or will your burn rate run out next month? And it's the same with, you know, sustainability. I think fresh water is a limiting resource in the same way a company's bank balance is their limiting resource. There's a limited amount of electricity, there's a limited amount of water out there. I think it was the CEO of Nvidia — I saw a video of him on LinkedIn that said, right now the limit to your cloud environment is how much money you can spend on it, but soon it will be how much electricity there is. You know, you could spend a trillion dollars, but if there's no more electricity to be produced, then you can't build any more data centers or solar farms. And then water's the other side of that. I think water's even worse, because we need water to even live. And you know what happens when there's no more water because the data centers have it? I think it invokes a much more emotional response. When you have good data that's backed by good sources, you can tell an excellent story of why you need to start reducing. Chris Adams: Okay, well, hopefully we can see more of those numbers, because it seems like something that is quite difficult to get access to at the moment — water in particular.
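The PUE and WUE factors James describes combine in a simple multiplicative way. Here is a rough sketch under stated assumptions — the function names are mine and every coefficient is made up for illustration; real providers publish their own PUE and WUE figures:

```python
# Hypothetical sketch: deriving facility energy and on-site water use from
# IT energy. PUE (power usage effectiveness) and WUE (water usage
# effectiveness) values below are illustrative, not real provider figures.

def facility_energy_kwh(it_energy_kwh: float, pue: float) -> float:
    """Total facility energy = IT energy x PUE (PUE is always >= 1.0)."""
    return it_energy_kwh * pue

def onsite_water_liters(it_energy_kwh: float, wue_l_per_kwh: float) -> float:
    """On-site cooling water consumed: WUE is liters per kWh of IT energy."""
    return it_energy_kwh * wue_l_per_kwh

it_energy = 100_000   # kWh of IT load over some period (hypothetical)
pue = 1.2             # 20% overhead for cooling, power distribution, etc.
wue = 1.8             # liters consumed per kWh of IT energy (hypothetical)

print(facility_energy_kwh(it_energy, pue))   # ~120,000 kWh total
print(onsite_water_liters(it_energy, wue))   # ~180,000 liters consumed

# Note: this covers on-site consumption only. Water withdrawn and returned,
# and water used off-site in electricity generation, are separate figures —
# which is exactly the withdrawal vs consumption distinction above.
```

This is why, as James says, getting hold of the electricity number is the key step: both the energy overhead and the water estimate fall out of it by multiplication.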
Alright, so we're coming to time now, and one thing we spoke about in the prep call was the GHG Protocol. We did a bit of nerding into this, and you spoke a little bit about: yes, accuracy is good, but you can't only focus on accuracy if you want someone to actually use any of the tools, or you want people to adopt stuff. And you said that in the GHG Protocol, which is like the gold standard for people working out the carbon footprint of things, there are these different pillars that matter, and if you just look at accuracy, that's not gonna be enough. So can you maybe expand on that for people who maybe aren't as familiar with the GHG Protocol as you? Because I think there's something there that's worth exploring. James Hall: Yeah. So, just as a reminder for those out there, the pillars are accuracy, yes, but also completeness, consistency, transparency, and relevance. A lot of people worry a lot about the accuracy, but, you know, just to give an example: if you had the most amazingly accurate number for your entire cloud environment — you know, 1,352 tons and 0.16 grams — but you are one engineer on one application, running a few resources, that total carbon number is completely useless to you, to be honest. Like, how do you use that number to make a decision for your tiny, you know, maybe five tons? So really you've got to balance all of these things. The transparency is important because you need to build trust in the data; people need to understand where it comes from. The relevance is, again, are you filtering on just the resources that are important to me? And the consistency touches on: AWS is one ton versus Azure is 100 tons. You can't decide which cloud provider to go into based on these numbers, because, you know, they're marking their own homework.
They've got a hundred different ways to calculate these things. And then the completeness is around: if you're only doing compute, but 90% is storage, you're missing out on loads of information. You know, you could have super accurate compute numbers for Azure, but if you've got completely different numbers for AWS and you don't know where they come from, you've not got a good GreenOps data set to drive decisions or use as a catalyst. So you really need to prioritize all five of these pillars in equal measure and treat them all as a priority, rather than just going for full accuracy. Chris Adams: Brilliant. We'll make sure to share a link to that in the show notes for anyone else who wants to dive into the world of pillars of sustainability reporting. Alright. Okay. Well, James, I think that takes us to time. So just before we wrap up, we'll do the usual things like where people can find you, but are there any particular projects catching your eye right now that you're excited about, or that you'd like to direct people's attention to? We'll share a link to the company you work for, obviously, and possibly yourself on LinkedIn or whatever it is. But is there anything else you've seen in the last couple of weeks that you find particularly exciting in the world of GreenOps or the wider sustainable software field? James Hall: Yeah, I mean, a lot of the work being done around AI sustainability is particularly interesting. I recommend people go and look at some of the Hugging Face information around which models are more electrically efficient. And from a Greenpixie side, we've got a newsletter now for people wanting to learn more about GreenOps, and in fact, we're building out a GreenOps training and certification that I'd be very interested to get a lot of people's feedback on. Chris Adams: Cool. Alright, well, thank you one more time.
If people wanna find you on LinkedIn, they would just look up James Hall, Greenpixie, presumably, right? Or something like that. James Hall: Yeah, and go to our website as well. Chris Adams: Well, James, thank you so much for taking me along on this deep dive into the world of GreenOps, cloud carbon reporting and all the rest. Hope you have a lovely day. Take care of yourself, mate. Cheers. James Hall: Thanks so much, Chris. Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. Chris Adams: To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
Apr 3, 2025 • 44min

The Week in Green Software: Data Centers, AI and the Nuclear Question

Christopher Liljenstolpe, Senior Director for Data Center Architecture and Sustainability at Cisco, shares his expertise on the energy demands of AI-driven data centers. He discusses the potential role of nuclear power in sustainable tech and the advantages of small modular reactors. The conversation also touches on the importance of efficient design for AI infrastructure and the unforeseen role of internet infrastructure during the pandemic. Chris highlights how collaboration between hardware and software sectors can drive innovation in green technology.
Mar 27, 2025 • 12min

Backstage: Green Software Patterns

In this episode, Chris Skipper takes us backstage into the Green Software Patterns Project, an open-source initiative designed to help software practitioners reduce emissions by applying vendor-neutral best practices. Guests Franziska Warncke and Liya Mathew, project leads for the initiative, discuss how organizations like AVEVA and MasterCard have successfully integrated these patterns to enhance software sustainability. They also explore the rigorous review process for new patterns, upcoming advancements such as persona-based approaches, and how developers and researchers can contribute. Learn more about our people:Chris Skipper: LinkedIn | WebsiteFranziska Warncke: LinkedInLiya Mathew: LinkedInFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterResources:Green Software Patterns | GSF [00:23]GitHub - Green Software Patterns | GSF [ 05:42] If you enjoyed this episode then please either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github and LinkedIn!TRANSCRIPT BELOW:Chris Skipper: Welcome to Environment Variables, where we bring you the latest news from the world of sustainable software development. I am the producer of the show, Chris Skipper, and today we're excited to bring you another episode of Backstage, where we uncover the stories, challenges, and innovations driving the future of green software.In this episode, we're diving into the Green Software Patterns Project, an open source initiative designed to curate and share best practices for reducing software emissions.The project provides a structured approach for software practitioners to discover, contribute, and apply vendor-neutral green software patterns that can make a tangible impact on sustainability. 
Joining us today are Franziska Warncke and Liya Mathew, the project leads for the Green Software Patterns initiative. They'll walk us through how the project works, its role in advancing sustainable software development, and what the future holds for the Green Software Patterns. Before we get started, a quick reminder that everything we discuss in this episode will be linked in the show notes below. So without further ado, let's dive into our first question about the Green Software Patterns project. My first question is for Liya. The project is designed to help software practitioners reduce emissions in their applications. What are some real-world examples of how these patterns have been successfully applied to lower carbon footprints? Liya Mathew: Thanks for the question, and yes, I am pretty sure that there are a lot of organizations as well as individuals who have greatly benefited from this project. A key factor behind the success of this project is the impact that these small actions can have in the long run. For example, AVEVA has been an excellent case of an organization that embraced these patterns. They created their own scoring system based on the patterns, which helps them measure and improve their software sustainability. Similarly, MasterCard has also adopted and used these patterns effectively. What's truly inspiring is that both AVEVA and MasterCard were willing to share their learnings with the GSF and the open source community as well. Their contributions will help others learn and benefit from their experiences, fostering a collaborative environment where everyone can work towards more sustainable software.
How do you ensure that these patterns remain actionable and practical across different industries, technologies, and software architectures? Liya Mathew: One of the core and most useful features of patterns is the ability to correlate with the Software Carbon Intensity specification. Think of it as a bridge that connects learning and measurement. When we look through the existing catalog of patterns, one essential thing that stands out is their adaptability. Many of these patterns not only align with sustainability, but also coincide with security and reliability best practices. The beauty of this approach is that we don't need to completely rewrite our software architecture to make it more sustainable. Small actions like caching static data or providing a dark mode can make a significant difference. These are simple yet effective steps that can take us a long way towards sustainability. Also, we are nearing the graduation of Patterns V1. This milestone marks a significant achievement, and we are already looking ahead to the next exciting phase: Patterns V2. In Patterns V2, we are focusing on persona-based and behavioral patterns, which will bring even more tailored and impactful solutions to our community. These new patterns will help address specific needs and behaviors, making our tools even more adaptable and effective.
During this stage, reviewers check if the pattern aligns with the GSF's mission of reducing software emissions, follows the GSF pattern template, and adheres to proper formatting rules. They also ensure that there is enough detail for the subject matter expert to evaluate the pattern. If any issues arise, the reviewer provides clear and constructive feedback directly in the pull request, and the submitter updates the pattern accordingly. Once the pattern passes the initial review, it is assigned to an appropriate SME for deeper technical review, which should take no more than a week, barring any lengthy feedback cycles. The SME checks for duplicate patterns, validates the content, and assesses the efficiency and accuracy of the pattern in reducing software emissions. They also ensure that the pattern's level of depth is appropriate. If any areas are missing or incomplete, the SME provides feedback in the pull request. If the pattern meets all the criteria, the SME removes the SME review label, adds a team consensus label, and assigns the pull request back to the initial reviewer. Then the Principles and Patterns Working Group has two weeks to comment or object to the pattern, requiring a team consensus before the PR can be approved and merged into the development branch. This thorough process ensures that each pattern is well vetted and aligned with our goals. Chris Skipper: For listeners who want to start using green software patterns in their projects, what's the best way to get involved, access the catalog, or submit a new pattern? Liya Mathew: All contributions are made via GitHub pull requests. You can start by submitting a pull request on our repository. Additionally, we would love to connect with everyone interested in contributing. Feel free to reach out to us on LinkedIn or any social media handles and express your interest in joining our project's weekly calls. Also, check if your organization is a member of the Green Software Foundation.
We warmly welcome contributions in any capacity. As mentioned earlier, we are setting our sights on a very ambitious goal for this project, and your involvement would be invaluable. Chris Skipper: Thanks to Liya for those great answers. Next, we had some questions for Franziska. The Green Software Patterns project provides a structured open source database of curated software patterns that help reduce software emissions. Could you give us an overview of how the project started and its core mission? Franziska Warncke: Great question. The Green Software Patterns project emerged from a growing recognition of the environmental impact of software and the urgent need for sustainable software engineering practices. As we've seen the tech industry expand, it became clear that while hardware efficiency has been a focal point for sustainability, software optimization was often overlooked. A group of dedicated professionals began investigating existing documentation, including resources like the AWS Well-Architected Framework, and this exploration laid the groundwork for the project. This allowed us to create a structured approach to curating the patterns that can help reduce software emissions. We developed a template that outlines how each pattern should be presented, ensuring clarity and consistency. Additionally, we categorize these patterns into three main areas: cloud, web, and AI. Chris Skipper: Building an open source knowledge base and ensuring it remains useful requires careful curation and validation. What are some of the biggest challenges your team has faced in developing and maintaining the Green Software Patterns database? Franziska Warncke: Building and maintaining an open source knowledge base like the Green Software Patterns database comes with its own set of challenges. One of the biggest hurdles we've encountered is resource constraints.
As an open source project, we often operate with limited time and personnel, which makes it really difficult to prioritize certain tasks over others. Despite this challenge, we are committed to continuous improvement, collaboration, and community engagement to ensure that the Green Software Patterns database remains a valuable resource for developers looking to adopt more sustainable practices. Chris Skipper: Looking ahead, what are some upcoming initiatives for the project? Are there any plans to expand the pattern library or introduce new methodologies for evaluating and implementing patterns? Franziska Warncke: Yes, we have some exciting initiatives on the horizon. One of our main focuses is to restructure the patterns catalog to adopt a persona-based approach. This means we want to create tailored patterns for various roles within the software industry, like developers, project managers, UX designers, and system architects. By doing this, we aim to make the patterns more relevant and accessible to a broader audience. We are also working on improving the visualization of the patterns. We recognize that user-friendly visuals are crucial for helping people understand and adopt these patterns in their own projects, which was really missing before. In addition to that, we plan to categorize the patterns based on different aspects, such as persona type, adoptability, and effectiveness. This structured approach will help users quickly find the patterns that are most relevant to their roles and their needs, making the entire experience much more streamlined. Moreover, we are actively seeking new contributors to join us, and we believe that a wide set of voices and perspectives will enrich our knowledge base and ensure that our patterns reflect a wide range of experiences. So, if anyone is interested, we'd love to hear from you.
How can developers, organizations, and researchers contribute to expanding the catalog and improving the quality of the patterns? Franziska Warncke: Yeah, the Green Software Patterns project is indeed open source and community-driven, and we welcome contributions from developers, organizations, and researchers to help expand our catalog and improve the quality of the patterns. We need people to review the existing patterns critically and provide feedback. This includes helping us categorize them for a specific persona, ensuring that each pattern is tailored to each of the various roles in the software industry. Additionally, contributors can assist by adding more information and context to the patterns, making them more comprehensive and useful. Visuals are another key area where we need help. Creating clear and engaging visuals that illustrate how to implement these patterns can significantly enhance their usability. Therefore, we are looking for experts who can contribute their skills in design and visualization to make the patterns more accessible. So if you're interested, then we would love to have you on board. Thank you. Chris Skipper: Thanks to Franziska for those wonderful answers. So we've reached the end of this special backstage episode on the Green Software Patterns project at the GSF. I hope you enjoyed the podcast. To listen to more podcasts about green software, please visit podcast.greensoftware.foundation. And we'll see you on the next episode. Bye for now.
Mar 20, 2025 • 50min

The Week in Green Software: Sustainable AI Progress

For this 100th episode of Environment Variables, guest host Anne Currie is joined by Holly Cummins, senior principal engineer at Red Hat, to discuss the intersection of AI, efficiency, and sustainable software practices. They explore the concept of "Lightswitch Ops"—designing systems that can easily be turned off and on to reduce waste—and the importance of eliminating zombie servers. They cover AI’s growing energy demands, the role of optimization in software sustainability, and Microsoft's new shift in cloud investments. They also touch on AI regulation and the evolving strategies for balancing performance, cost, and environmental impact in tech. Learn more about our people:Chris Adams: LinkedIn | GitHub | WebsiteHolly Cummins: LinkedIn | GitHub | WebsiteFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterNews:AI Action Summit: Two major AI initiatives launched | Computer Weekly [40:20]Microsoft reportedly cancels US data center leases amid oversupply concerns [44:31]Events:Data-driven grid decarbonization - Webinar | March 19, 2025The First Eco-Label for Sustainable Software - Frankfurt am Main | March 27, 2025 Resources:LightSwitchOps Why Cloud Zombies Are Destroying the Planet and How You Can Stop Them | Holly CumminsSimon Willison’s Weblog [32:56]The GoalIf you enjoyed this episode then please either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github and LinkedIn!TRANSCRIPT BELOW:Holly Cummins: Demand for AI is growing, demand for AI will grow indefinitely. But of course, that's not sustainable. Again, you know, it's not sustainable in terms of financially and so at some point there will be that correction. Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. 
In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Anne Currie: So hello and welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software. Now, today you're not hearing the dulcet tones of your usual host, Chris Adams. I am a frequent guest and guest host, Anne Currie. And my guest today is somebody I've known for quite a few years and I'm really looking forward to chatting to: Holly. So do you want to introduce yourself, Holly? Holly Cummins: So I'm Holly Cummins. I work for Red Hat. My day job is that I'm a senior principal engineer and I'm helping to develop Quarkus, which is Java middleware. And I'm looking at the ecosystem of Quarkus, which sounds really sustainability oriented, but actually the day job aspect is I'm more looking at the contributors and, you know, the extensions and that kind of thing. But one of the other things that I do end up looking a lot at is the ecosystem aspect of Quarkus in terms of sustainability. Because Quarkus is an extremely efficient Java runtime. And so when I joined the team, one of the things I asked was: we know this is really efficient — does that translate into an environmental benefit? Is it actually benefiting the ecosystem? You know, can we quantify it? And so we did that work, and we were able to validate our intuition that it did have a much lower carbon footprint, which was nice. But some of what we found actually surprised us as well, which was also good, because it's always good to be challenged in your assumptions.
And so now part of what I'm doing as well is sort of broadening that focus from, instead of measuring what we've done in the past, thinking about, well, what does a sustainable middleware architecture look like? What kind of things do we need to be providing? Anne Currie: Thank you very much indeed. That's a really good overview of what I really primarily want to be talking about today. We will be talking about a couple of articles as usual on AI, but really I want to be focused on what you're doing in your day job because I think it's really interesting and incredibly relevant. So, as I said, my name is Anne Currie. I am the CEO of a learning and development company called Strategically Green. We do workshops and training around building green software and changing your systems to align with renewables. But I'm also one of the authors of O'Reilly's new book, Building Green Software, and Holly was probably the most, the biggest single reviewer/contributor to that book, and it was in her best interest to do so because, we make, I make tons and tons of reference to a concept that you came up with. I'm very interested in the backstory to this concept, but perhaps you can tell me a little bit more about it because it is, this is something I've not said to you before, but it is, this comes up in review feedback, for me, for the book, more than any other concept in the book. Lightswitch Ops. People saying, "Oh, we've put in, we've started to do Lightswitch Ops." If anybody says "I've started to do" anything, it's always Lightswitch Ops. So tell us, what is Lightswitch Ops?
And so the first step is architect your systems so that they can tolerate being turned off and on. And then the next part is once you have that, actually turn them off and on. And, it sort of, it came about because I'm working on product development now, and I started my career as a performance engineer, but in between those two, I was a client facing consultant, which was incredibly interesting. And it was, I mean, there was, so many things that were interesting, but one of the things that I sort of kept seeing was, you know, you sort of work with clients and some of them you're like, "Oh wow, you're, you know, you're really at the top of your game" and some you think, "why are you doing it this way when this is clearly, you know, counterproductive" or that kind of thing. And one of the things that I was really shocked by was how much waste there was just everywhere. And I would see things like organizations where they would be running a batch job and the batch job would only run at the weekends, but the systems that supported it would be up 24/7. Or sometimes we see the opposite as well, where it's a test system for manual testing and people are only in the office, you know, nine to five only in one geo and the systems are up 24 hours. And the reason for this, again, it's sort of, you know, comes back to that initial thing, it's partly that we just don't think about it and, you know, that we're all a little bit lazy, but it's also that many of us have had quite negative experiences of if you turn your computer off, it will never be the same when it comes back up. I mean, I still have this with my laptop, actually, you know, I'm really reluctant to turn it off. But now we have, with laptops, we do have the model where you can close the lid and it will go to sleep and you know that it's using very little energy, but then when you bring it back up in the morning, it's the same as it was without having to have the energy penalty of keeping it on overnight. 
And I think, when you sort of look at the model of how we treat our lights in our house, nobody has ever sort of left a room and said, "I could turn the light off, but if I turn the light off, will the light ever come back on in the same form again?" Right? Like we just don't do that. We have a great deal of confidence that it's reliable to turn a light off and on and that it's low friction to do it. And so we need to get to that point with our computer systems. And you can sort of roll with the analogy a bit more as well, which is in our houses, it tends to be quite a manual thing of turning the lights off and on. You know, I turn the light on when I need it. In institutional buildings, it's usually not a manual process to turn the lights off and on. Instead, what we end up is, we end up with some kind of automation. So, like, often there's a motion sensor. So, you know, I used to have it that if I would stay in our office late at night, at some point if you sat too still because you were coding and deep in thought, the lights around you would go off and then you'd have to, like, wave your arms to make the lights go back on. And it's that, you know, it's this sort of idea of like we can detect the traffic, we can detect the activity, and not waste the energy. And again, we can do exactly this with our computer systems. So we can have it so that it's really easy to turn them off and on. And then we can go one step further and we can automate it and we can say, let's script it to turn things off at 5pm because we're only in one geo. And you know, if we turn them off at 5pm, then we're enforcing quite a strict work life balance. So... Anne Currie: Nice, nice work. Holly Cummins: Yeah. Sustainable. Sustainable pace. Yeah. Or we can do sort of, you know, more sophisticated things as well. 
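[Editor's note: the scheduled-shutdown idea Holly describes, a script that keeps office-hours environments off outside office hours, can be sketched in a few lines. This is an illustrative outline only, not anything from the episode; the hours, day range, and environment name are invented for the example.]

```python
from datetime import datetime

# LightswitchOps sketch: a cron-style reconciler that keeps a dev/test
# environment up only during weekday office hours in a single geo.
# The hours and environment names here are hypothetical.

OFFICE_HOURS = range(9, 17)   # 9am up to (but not including) 5pm
WEEKDAYS = range(0, 5)        # Monday (0) through Friday (4)

def should_be_running(now: datetime) -> bool:
    """True only during weekday office hours."""
    return now.weekday() in WEEKDAYS and now.hour in OFFICE_HOURS

def reconcile(env: str, currently_up: bool, now: datetime) -> str:
    """Decide what a periodic job should do with one environment."""
    wanted = should_be_running(now)
    if wanted and not currently_up:
        return f"start {env}"
    if not wanted and currently_up:
        return f"stop {env}"
    return "no-op"
```

Run hourly, a job like this turns the test environment off at 5pm and brings it back the next morning, which is exactly the "turn it off and trust it comes back" habit the lightswitch analogy is about.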
Or we can say, okay, well, let's just look at the traffic and if there's no traffic to this, let's turn it off. Anne Currie: Yeah, it is an interestingly simple concept because it's, when people come up with something which is like, in some ways, similar analogies, a light bulb moment of, you know, why don't people turn things off? Because, so Holly, everybody, is an unbelievably good public speaker. One of the best public speakers out there at the moment. And we first met because you came and gave talks in some tracks I was hosting on a variety of topics. Some on high performance code, code efficiency, some on being green. One of the stories you told was about your Lightswitch moment, the realization that actually this was a thing that needed to happen. And I thought it was fascinating. It was about how, I know everybody, I've been in the tech industry for a long time, so I've worked with Java a lot over the years and many years ago. And one of the issues with Java in the old days was always, it was very hard to turn things off and turn them back on again. And that was fine in the old world, but you talked about how that was no longer fine. And that was an issue with the cloud because the cloud, using the cloud well, turning things on and off and things, doing things like auto scaling is utterly key to the idea of the cloud. And therefore it had to become part of Quarkus, part of the future of Java. Am I right in that understanding? Holly Cummins: Yeah, absolutely. And the cloud sort of plays into both parts of the story, actually. So definitely we, the things that we need to be cloud native, like being able to support turning off and on again, are very well aligned to what you need to support Lightswitch Ops. 
And so the, you know, there with those two, we're pulling in the same direction. The needs of the cloud and the needs of sustainability are both driving us to make systems that, I just saw yesterday, sorry, this is a minor digression, but I was looking something up, and we used to talk a lot about the Twelve-Factor App, and you know, at the time we started talking about Twelve-Factor Apps, those characteristics were not at all universal. And then someone came up with the term, the One-Factor App, which was the application that could just tolerate being turned off and on. And sometimes even that was like too much of a stretch. And so there's the state aspect to it, but then there's also the performance aspect of it and the timeliness aspect of it. And that's really what Quarkus has been looking at, that if you want to have any kind of auto scaling or any kind of serverless architecture or anything like that, the way Java has historically worked, which is that it eats a lot of memory and it takes a long time to start up, just isn't going to work. And the sort of the thing that's interesting about that is quite often when we talk about optimizing things or becoming more efficient or becoming greener, it's all about the trade offs of like, you know, "oh, I could have the thing I really want, or I could save the world. I guess I should save the world." But sometimes what we can do is we can just find things that we were paying for, that we didn't even want anymore. And that's, I think, what Quarkus was able to do. Because a lot of the reason that Java has a big memory footprint and a lot of the reason that Java is slow to start up is it was designed for a different kind of ops. The cloud didn't exist. CI/CD didn't exist. DevOps didn't exist. And so the way you built your application was you knew you would get a release maybe once a year and deployment was like a really big deal. 
And you know, you'd all go out and you'd have a party after you successfully deployed because it was so challenging. And so you wanted to make sure that everything you did was to avoid having to do a deployment and to avoid having to talk to the ops team because they were scary. But of course, even though we had this model where releases happen very rarely, or the big releases happen very rarely, of course, the world still moves on, you know, people still had defects, people, so what you ended up with was something that was really much more optimized towards patching. So can we take the system and without actually taking, turning it off and on, because that's almost impossible, can we patch it? So everything was about trying to change the engine of the plane while the plane was flying, which is really clever engineering. If you can support that, you know, well done you. It's so dynamic. And so everything was optimized so that, you know, you could change your dependencies and things would keep working. And, you know, you could even change some fairly important characteristics of your dependencies and everything would sort of adjust and it would ripple back through the system. But because that dynamism was baked into every aspect of the architecture, it meant that everything just had a little bit of drag, and everything had a little bit of slowdown that came from that indirection. And then now you look at it in the cloud and you think, well, wait a minute. I don't need that. I don't need that indirection. I don't need to be able to patch because I have a CI/CD pipeline, and if I'm going into my production systems and SSHing in to change my binaries, something has gone horribly wrong with my process. And you know, I need to, I have all sorts of problems. So really what Quarkus was able to do was get rid of a whole bunch of reflection, get rid of a whole bunch of indirection, do more upfront at build time. 
And then that gives you much leaner behavior at runtime, which is what you want in a cloud environment. Anne Currie: Yeah. And what I love about this and love about the story of Quarkus is, it's aligned with something, non functional requirements. It's like, it's an unbelievably boring name for something which is a real pain point for companies. But it's also, in many ways, the most important thing and the most difficult thing that we do. It's like, being secure, being cost effective, being resilient. A lot of people say to me, well, you know, actually all you're doing with green is adding another non functional requirement. We know those are terrible. But I can say, no, we need to not make it another non functional requirement. It's just a good, another motivator for doing the first three well, you know. Also, scaling is about resilience. It's about cost saving, and it's about being green. And it's about, and being able to pave rather than patch, I think is, was the term. It's more secure, you know. Actually patching is much less secure than repaving, taking everything down and bringing it back up. All the modern thinking about being more secure, being faster, being cheaper, being more resilient is aligned or needs to be aligned with being green and it can be, and it should be, and it shouldn't just be about doing less. Holly Cummins: Absolutely. And, you know, especially for the security aspect, when you look at something like tree shaking, that gives you more performance by getting rid of the code that you weren't using. Of course, it makes you more secure as well because you get rid of all these code paths and all of these entry points and vulnerabilities that had no benefit to you, but were still a vulnerability.
Tell us a little bit about that because that not only is cost saving, it's a really big security improvement. So tell us about zombie, the precursor to Lightswitch Ops. Holly Cummins: Yeah, zombie servers are again, one of those things that I sort of, I noticed it when I was working with clients, but I also noticed it a lot in our own development practices that what we would do was we would have a project and we would fire up a server in great excitement and you know, we'd register something on the cloud or whatever. And then we'd get distracted and then, or then we, you know, sometimes we would develop it but fail to go to production. Sometimes we'd get distracted and not even develop it. And I looked and I think some of these costs became more visible and more obvious when we moved to the cloud, because it used to be that when you would provision a server, once it was provisioned, you'd gone through all of the pain of provisioning it and it would just sit there and you would keep it in case you needed it. But with the cloud, all of a sudden, keeping it until you needed it had a really measurable cost. And I looked and I realized, you know, I was spending, well, I wasn't personally spending, I was costing my company thousands of pounds a month on these cloud servers that I'd provisioned and forgotten about. And then I looked at how Kubernetes, the sort of the Kubernetes servers were being used and some of the profiles of the Kubernetes servers. And I realized that, again, there's, each company would have many clusters. And I was thinking, are they really using all of those clusters all of the time? And so I started to look into it and then I realized that there had been a lot of research done on it and it was shocking. So again, you know, the sort of the, I have to say I didn't coin the term zombie servers. 
I talk about it a lot, but there was a company called the Antithesis Institute. And what they did, although actually, see, now I'm struggling with the name of it because I always thought they were called the Antithesis Institute. And I think it's actually a one letter variant of that, which is much less obvious as a word, but much more distinctive. But I've, every time I talked about them, I mistyped it. And now I can't remember which one is the correct one, but in any case, it's something like the Antithesis Institute. And they did these surveys and they found that, it was something like a third of the servers that they looked at were doing no work at all. Or rather no, no useful work. So they're still consuming energy, but there's no work being done. And when they say no useful work as well, that sounds like a kind of low bar. Because when I think about my day job, quite a lot of it is doing work that isn't useful. But they had, you know, it wasn't like these servers were serving cat pictures or that kind of thing. You know, these servers were doing nothing at all. There was no traffic in, there was no traffic out. So you can really, you know, that's just ripe for automation to say, "well, wait a minute, if nothing's going in and nothing's coming out, we can shut this thing down." And then there was about a further third that had a utilization that was less than 5%. So again, you know, this thing, it's talking to the outside world every now and then, but barely. So again, you know, it's just ripe for a sort of a consolidation. But the, I mean, the interesting thing about zombies is as soon as you talk about it, usually, you know, someone in the audience, they'll turn a little bit green and they'll go, "Oh, I've just remembered that server that I provisioned." And sometimes, you know, I'm the one giving the talk and I'm like, Oh, while preparing this talk, I just realized I forgot a server, because it's so easy to do. 
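[Editor's note: the triage Holly describes, no traffic at all means a shutdown candidate, under roughly 5% utilization means a consolidation candidate, is simple enough to automate. The sketch below is illustrative only; the metric names and thresholds are invented, and real numbers would come from a monitoring system over a suitably long window.]

```python
# Zombie-server triage sketch: classify each server in a fleet snapshot.
# Metrics are hypothetical (total bytes in/out and average CPU utilization
# over an observation window, e.g. 30 days).

def classify(bytes_in: int, bytes_out: int, cpu_utilization: float) -> str:
    if bytes_in == 0 and bytes_out == 0:
        return "zombie"          # no traffic either way: shutdown candidate
    if cpu_utilization < 0.05:
        return "underutilized"   # barely talking to anyone: consolidation candidate
    return "active"

def triage(servers: dict) -> dict:
    """Map server name -> classification for a fleet snapshot."""
    return {name: classify(*metrics) for name, metrics in servers.items()}
```

Even a report this crude, run against a billing or monitoring export, is often enough to surface the forgotten servers Holly is talking about.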
And the way we're measured as well, and the way we measure our own productivity is we give a lot more value to creating than to cleaning up. Anne Currie: Yeah. And in some ways that makes sense because, you know, creating is about growth and cleaning up, you know, it's about degrowth. It's about like, you know, it's like you want to tell the story of growth, but I've heard a couple of really interesting tales on zombie servers since you started, well, yeah, since you started talking about it, you may not have invented it, but you popularized it. One was from VMware, a cost saving thing. They were, and it's a story I tell all the time about when they were moving data centers in Singapore, setting up a new data center in Singapore. They decided to do a review of all their machines to see what had to go across. And they realized that 66 percent of their machines did not need to be reproduced in the new data center. You know, they had a, and that was VMware. People who are really good at running data centers. So imagine what that's like. But moving data centers is a time when it often gets spotted. But I will say, a more, a differently disturbing story from a company that wished to remain nameless. Although I don't think they need to because I think it's just an absolutely bog standard thing. They were doing a kind of thriftathon style thing of reviewing their data center to see if there was stuff that they could save money on, and they found a machine that was running at 95, 100 percent CPU, and they thought, they thought, Oh my God, it's been hacked. It's been hacked. Somebody's mining Bitcoin on this. It's, you know, or maybe it's attacking us. Who knows? And so they went and they did some searching around internally, and they found out that it was somebody who turned on a load test, and then forgot to turn it off three years previously. 
And, I would say that obviously that came up from the cost, but it also came up from the fact that the machine could have been hacked. You know, it could be, could have been mining Bitcoin. It could have been attacking them. It could have been doing anything. They hadn't noticed because it was a machine that no one was looking at. And I thought it was an excellent example. I thought those two, excellent examples of the cost and the massive security hole that comes from machines that nobody is looking at anymore. So, you know, non functional requirements, they're really important. And... Holly Cummins: Yeah. Anne Currie: Doing better on them is also green. And also, they're very, non functional requirements are really closely tied together. Holly Cummins: Yeah. I mean, oh, I love both of those stories. And I've heard the VMware one before, but I hadn't heard the one about the hundred percent, the load test. That is fantastic. One of the reasons I like talking about zombies and I think one of the reasons people like hearing about it, I mean, it's partly the saving the world. But also I think when we look at greenness and sustainability, some of it is not a very cheerful topic, but the zombie servers almost always when you discover the cases of them, they are hilarious. I mean, they're awful, but they're hilarious. And you know, it's just this sort of stuff of, "how did this happen? How did we allow this to happen?" Sometimes it's so easy to do better. And the examples of doing bad are just something that we can all relate to. But at the same time, you know, you sort of think, oh, that shouldn't have happened. How did that happen?
The, Holly Cummins: It also means I would agree with you a lot. Yes. Oh, this is very sensible. Very sensible. Yes. Anne Currie: One of the things that we, that constantly comes up when I'm talking to people about this and when we're writing the book and when we're going out to conferences, is people need a way in. And it's often that, you know, that people think the way into building green software is to rewrite everything in C and then they go, "well, I can't do that. So that's the end. That's the only way in. And I'm not going to be able to do it. So I can't do anything at all." Operations and zombie servers is a really good way in, because you can just do it, you can, instead of having a hackathon, you can just do a thriftathon, get everybody to find a little bot running that doesn't need to be running, instantly halve your, you know, it's not uncommon for people to find ways to halve their carbon emissions and halve their hosting costs simultaneously in quite a short period of time and it'd be the first thing they do. So I quite like it because it's the first thing they do. What do you think about that? It's, is it the low hanging fruit? Holly Cummins: Yeah, absolutely, I think, yeah, it's the low hanging fruit, it's easy, it's kind of entertaining because when you find the problems you can laugh at yourself, and there's, again, there's no downside and several upsides, you know, so it's, you know, it's this double win of I got rid of something I wasn't even using, I have more space in my closet, and I don't have to pay for it. Anne Currie: Yeah, I just read a book that I really should have read years and years ago, and I don't know why I didn't, because people have been telling me to read it for years, which was The Goal. Which is, it's not about tech, but it is about tech. It's kind of the book that was the precursor to The Phoenix Project, which I think a lot of people read. And it was, it's all about TPS, the Toyota Production System. 
In a kind of like an Americanized version of it, how the Toyota Production System should be brought to America. And it was written in the 80s and it's all about work in progress and cleaning your environment and getting rid of stuff that gets in your way and just obscures everything, so you can't see what's going on. Effectively, it was a precursor to lean, which I think is really very well aligned. Green and lean, really well aligned. And, it's something that we don't think about, that cleaning up waste just makes your life much better in ways that are hard to imagine until you've done it. And zombie, cleaning zombie servers up just makes your systems more secure, cheaper, more resilient, more everything. It's a really good thing to do. Holly Cummins: Yeah. And there's sort of another way that those align as well, which I think is interesting because I think it's not necessarily intuitive. Which is, sometimes when we talk about zombie servers and server waste, people's first response is, this is terrible. The way I'm going to solve it is I'm going to put barriers in place so that getting a server is harder. And that seems really intuitive, right? Because it's like, Oh yes, we need to solve it. But of course, it has the exact opposite effect. And again it seems so counterintuitive because it seems like if you have a choice between shutting the barn door before the horses left and shutting the barn door after the horses left, you should shut the barn door before the horses left. But what happens is that if those barriers are in place, once people have a server, if they had to sweat blood to get that server, they are never giving it up. It doesn't matter how many thriftathons you do, they are going to cling to that server because it was so painful to get. So what you need to do is you need to just create these really sort of low friction systems where it's easy come, easy go. So it's really easy to get the hardware you need. 
And so you're really willing to give it up and that kind of self service model, that kind of low friction, high automation model is really well aligned again with lean. It's really well aligned with DevOps. It's really well aligned with cloud native. And so it has a whole bunch of benefits for us as users as well. If it's easier for me to get a server, that means I'm more likely to surrender it, but it also means I didn't have to suffer to get it, which is just a win for me personally. Anne Currie: It is. And there's something at the end of The Goal in the little bit at the end, which I thought was my goodness, the most amazing, a bit of a lightswitch moment for me, even though it was talking about this still about 10 years ago, but it was, it's talking about, ideas about stuff that, basically underpin the cloud, underpin modern computing, underpin factories and also warehouses and because I worked for a long time in companies that had warehouses, so you kind of see that there are enormous analogies and it was talking about how a lot of the good modern practice in this has been known since the 50s. And, it, even in places like Japan, where it's really well known, I mean, Toyota is so, the Toyota Production System is so well managed, almost everybody knows it, and everybody wants to, every company in Japan wants to be operating in that way. Still, the penetration of companies that actually achieve it is very low, it's only like 20%. I thought, it's interesting, why is that? And then I realised that you'd been kind of hinting why it was throughout. And if you look on the Toyota website, they're quite clear about it. They say the Toyota Production System is all about trial and error. Doesn't matter, you can't read a book that tells you what we did, and then say, "oh well if I do that, then I will achieve the result." They say it's all about a culture of trial and error. 
And then you achieve, then you build something which will be influenced by what we do, and influenced by what other people do, and influenced by a lot of these ideas. But fundamentally, it has to be unique to you because anything complicated is context-specific. Therefore, you are going to have to learn from it. But one of the, one of the key things for trial and error is not making it so hard to try something and so painful if you make an error that you never do any trial and error. And I think that's very aligned with what you were saying about if you make it too hard, then nobody does any trial and error. Holly Cummins: Yeah. Absolutely. Anne Currie: I wrote a new version of it, called The Cloud Native Attitude, which was all about, you know, what are people doing? You know, what's the UK enterprise version of the TPS system, and what are the fundamentals and what are people actually doing? And what I realized was that everybody was doing things that were quite different, that were specific to them, that used some of the same building blocks and were quite often in the cloud because that reduced their bottlenecks over getting hardware. Because that's always, that's a common bottleneck for everybody. So they wanted to reduce the bottleneck there of getting the access to hardware. But what they were actually doing was built trial and error wise, depending on their own specific context. And every company is different and has a different context. And, yeah, so you have to be able to, that is why failure is so, can't be a four letter word. Holly Cummins: Yeah. Technically, it's a seven letter word if you say failure, but... Anne Currie: And it should be treated that way. Yeah.  
I'm very aware that actually our brief for this was to talk about three articles on AI. Holly Cummins: I have to say, I did have a bit of a panic when I was reviewing the articles because they were very deep into the sort of the intricacies of, you know, AI policy and AI governance, which is not my specialty area. Anne Currie: No, neither is it mine. All that said, when I was reading it, I thought quite a lot about what we've just talked about. It is a new area. It's something that, as far as AI is concerned, I love AI. I have no problem with AI. I think it's fantastic. It's amazing what it can produce. And if you are not playing around on the free version of ChatGPT, then you are not keeping on top of things because it changes all the time. And it's very like managing somebody. You get out of it what you put in. If you put in, if you make a very cursory, ask it a couple of cursory questions, you'll get a couple of cursory answers. If you, you know, leaning back on Toyota again, you almost need to five whys it. You need to go, no, but why? Go a little bit deeper. Now go a little bit deeper. Now go a little bit deeper. And then you'll notice that the answers get better and better, like a person, better and better. So if you, really do, it is worth playing around with it. Holly Cummins: Just on that, I was just reading an article from Simon Willison this morning and he was talking about sort of, you know, a similar idea that, you know, you have to put a lot into it and that to get good, he was talking about it for coding assistants that, you know, to get good outputs, it's not trivial. And a lot of people will sort of try it and then be disappointed by their first result and go, "Oh, well, it's terrible" and dismiss it. But he was saying that one of the mistakes that people make is to anthropomorphize it. 
And so when they see it making mistakes that a human would never make, they go, "well, this is terrible" and they don't think about it in terms of, well, this has some weaknesses and this has some strengths and they're not the same weaknesses and strengths as a person would have. And so I can't just see this one thing that a human would never do and then dismiss it. I, you know, you need to sort of adapt how you use it for its strengths and weaknesses, which I thought was really interesting. The sort of the, you know, it's so tempting to anthropomorphize it because it is so human ish in its outputs because it's trained on human inputs, but it is not, it does not have the same strengths and weaknesses as a person. Anne Currie: Well, I would say the thing is, it can be used in lots of different ways. There are ways you can use it which, actually, it can react like a person, and therefore does need to be coached. I mean, if you ask it to do creative things, it's quite human like. And it will come up with, and it will blag, and it will, you know, it's, you just have to treat it, certainly for certain creative things. You have to go, "is that true?" Can you double check that? Is that, I appreciate your enthusiasm there, but it might not be right. Can you just double check that? In the same way that you would do for, with a very enthusiastic graduate. And you wouldn't have fired them because they said something that seemed plausible
I heard a very interesting podcast yesterday where one of the Reeds, I can never remember if it was Reed Hastings or Reed Hoffman, you know, it's like it was talking about AI, it was AI energy use.And he was saying, we're not stupid, you know, if there's, basically, there are two things that we know are coming. One is AI and one is climate change. We're not going to build, to try and create an AI industry that's requires the fossil fuel industry because that would be crazy talk, you know, we do all need to remember that climate change is coming and it is a different model for how, and, you know, if you are building an AI system that relies on fossil fuels, then you are an idiot because, the big players are not. You know, it's, I love looking at our world in data and looking at what is growing in the world?And if you look to a chart that's really interesting to look at, if you ever feel depressed about climate change is to look at the global growth in solar power in solar generated power. It's going up like it's not even exponential. It's, you know, it's, it looks vertically asymptotic.You know, it's super exponential. It's going faster than exponential, nothing else is developing that way. Except maybe AI, but AI from a from a lower point and, actually I think the AI will, and then you've got things with AI, you've got stuff like DeepSeek that's coming out of field and saying, "do you know?You just didn't need to write this so inefficiently. You could, you know, you could do this on a lot less, and it'd be a lot cheaper, and you could do things on the edge that you didn't know that you could do." So, yeah, I'm not too worried about AI. I think that DeepSeek surprised me.Holly Cummins: Yeah, I agree. I think we have been seeing this, you know, sort of enormous rise in energy consumption, but that's not sustainable, and it's not sustainable in terms of climate, but it's also not sustainable financially. 
And so financial corrections tend to come before the climate corrections. And so what we're seeing now is architectures that are designed to reduce the energy costs because they need to reduce the actual financial costs. So we get things like DeepSeek, where there's a sort of fundamental efficiency in the architecture of the model. But then we're also seeing other things as well. Like, you know, up until maybe a year ago, the way it worked was that the bigger the model, the better the results, just, you know, absolutely. And now we're starting to see things where the model gets bigger and the results get worse. And you see this with RAG systems as well, where when you do your RAG experiment and you feed in just two pages of data, it works fantastically well, and then you go, "okay, I'm going to proceed." And then you feed in like 2000 pages of data and your RAG suddenly isn't really working and it's not really giving you correct responses anymore. And so I think we're seeing an architectural shift away from the really big monolithic models to more orchestrated models. Which is kind of bad in a way, right? Because it means we as engineers have to do more work. We can't just, like, have one big monolith and say, "solve everything." But on the other hand, what do engineers love? We love engineering. So it means that there's opportunities for us. So, you know, a pattern that we're seeing a lot now is that you have your sort of orchestrator model that takes the query in and triages it. And it says, "is this something that should go out to the web? Because, actually, like, that's the best place for this news topic. Or is this something that should go to my RAG model? Is this something..." You know, and so it'll choose the right model. Those models are smaller, and so they have a much more limited scope. But, within that scope, they can give you much higher quality answers than the huge supermodel, and they cost much less to run. 
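The orchestrator pattern Holly describes can be sketched, very roughly, like this. Everything here is invented for illustration: the model names, the routing rules, and the keyword-based triage are placeholders for whatever real models and classifier a team would actually use.

```python
# Hypothetical sketch of the "orchestrator" pattern: a small triage step
# routes each query to a smaller, scoped model instead of sending
# everything to one huge monolithic model. Names and rules are illustrative.

def web_search_model(query: str) -> str:
    return f"[web] results for: {query}"

def rag_model(query: str) -> str:
    return f"[rag] answer grounded in internal docs for: {query}"

def general_model(query: str) -> str:
    return f"[general] answer for: {query}"

def orchestrate(query: str) -> str:
    """Triage the query and dispatch to the cheapest suitable model."""
    q = query.lower()
    if "news" in q or "today" in q:
        return web_search_model(query)   # fresh information: go to the web
    if "policy" in q or "internal" in q:
        return rag_model(query)          # company data: use the RAG model
    return general_model(query)          # everything else: small general model

print(orchestrate("What is our internal travel policy?"))
```

In practice the triage step would itself be a small model rather than keyword matching, but the shape is the same: each downstream model is cheaper to run than one supermodel, while staying accurate within its narrower scope.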
So you end up with a system, again, it's about the double win, where you have a system which maybe took a little bit more work to architect, but gives you better answers for a lower cost. Anne Currie: That is really interesting, and more aligned as well with how power is being developed, potentially, you know, that you really want to be doing more stuff at the edge, and you want people to be doing stuff at home on their own devices, rather than just always having to go to, as you say, supermodels. Supermodels are bad. We all disapprove of supermodels. Holly Cummins: Yeah. And in terms of, you know, that aligns with some of the sort of, you know, the privacy concerns as well, which is, you know, people want to be doing it at home, and certainly organizations want to be keeping their data in house. And so then that means that they need the more organization-local model to be keeping their dirty secrets in house. Anne Currie: Well, it is true. I mean, the thing is, it is very hard to keep things secure, and sometimes you just do want to keep some of your data in house; you don't necessarily even want to stick it on Amazon if you can avoid it. But yes, so that's been a really interesting discussion, and we have completely gone off topic and we've hardly talked at all about the AI regulation. I think we both agree that with AI regulation, it's quite soon to be doing it. It's interesting. I can see why the Americans have a tendency to take a completely different approach to the EU. If you look at their laws, and I did do some lecturing in AI ethics and legalities, American laws do tend to be like, well, something goes wrong, you get your pants sued off and you fix it. EU laws tend to be about, don't even do it. You know, as you said before, close the door before the horse has, you know, has bolted. 
And the American law is about bringing it back. But in some ways, that exemplifies why America grows much faster than Europe does. Holly Cummins: Yeah. When I was looking at some of the announcements that did come out of the AI summit, I think, yeah, I have really mixed feelings about it, because I generally feel that regulation is good, but I also agree with you that it can have a stifling effect on growth. But one thing that I think is fairly clearly positive, that did seem to be emphasized in the announcements as well, is the open source aspect. So, I mean, we have, you know, sort of open source models now, but they're not as open source as, you know, open source software, in terms of how reproducible they are, how accessible they are for people to see the innards of. But I was thinking a little bit again about the way the AI summit is making these sorts of bodies that have, like, the public-private partnerships, which isn't anything new, but, you know, we're sort of seeing quite a few governments coming together. So, like, the current AI announcement, I think, had nine governments and dozens of companies. But it reminded me a little bit of the sort of the birth of radio, when we had this resource, which was the airwaves, the frequencies, that nobody had cared about. And then now all of a sudden it was quite valuable, and there was potentially, you know, the sort of wild west of, like, okay, who can take this and exploit it commercially? And then governments stepped in and said, "actually, no, this is a resource that belongs to all of us. And so it needs to be managed," in terms of who has access to it and who can just grab it. And I feel a bit like, even though in a technical sense the data all around us isn't all of ours, you know, a lot of it is copyrighted and that kind of thing. 
But if you look at the sort of aggregate of all of the data that humanity has produced, that is a collective asset. And so it should be that how it gets used is for a collective benefit, and that regulation, and making sure that it's not just one or two organizations that have the technical potential to leverage that data, is a collectively good thing. Anne Currie: Especially at the moment, we don't want everything to be happening in the US, because maybe the US is not the friendly partner that we always thought it would be. It's, diversity... Holly Cummins: Diversity is good. Diversity of geographic interests. Anne Currie: Indeed. Yeah, it is. So yeah, but it is early days. I'm not an anti-AI person by any stretch. In fact, I love AI. I think it really is an amazing thing. And we just need to align it with the interests of the rest of humanity. Holly Cummins: Yes. Anne Currie: But it is interesting. In terms of being green, the big players are not idiots. They know that things need to be aligned. But in terms of data, they certainly will be acting in their best interests. So, yeah, I can see that. Yeah, indeed. Very interesting. So, we are now coming to time; we've done quite a lot. There won't be much to edit out from what we've talked about today. I think it's great, it's very good. But, Holly Cummins: Shall we talk about the Microsoft article, though? 
Cause that, I thought that was really interesting. Anne Currie: Oh yeah, go for it. Yes. Holly Cummins: Yeah, so one of the other articles that we have said that Microsoft was reducing its investment in data centers, which I was quite shocked to read, because it's the exact opposite of all of the news articles that we normally see, including one I saw this morning that said that, you know, the big three are looking at increasing their investment in nuclear. But I thought it was sort of interesting because I think we always tend to sort of extrapolate from the current state and extrapolate it indefinitely forward. So we say demand for AI is growing, demand for AI will grow indefinitely, but of course, that's not sustainable. Again, you know, it's not sustainable financially, and so at some point there will be that correction, and it seems like Microsoft has perhaps looked at how much they've invested in data centers and said, "oh, perhaps this was a little bit much, perhaps let's roll back that investment just a little bit, because now we have an overcapacity on data centers." Anne Currie: Well, I mean, I wonder how much of an effect DeepSeek had on that, which is that everybody was looking at it and going... The thing is, I mean, Azure, well, I can say this because it's a public story, and I have it in the book: the story of, during the pandemic, the Microsoft Teams folks looking at what they were doing and saying, "could this be more efficient?" And the answer was yes, because they'd put really no effort in whatsoever to make what they were doing efficient. Really basic efficiency stuff they hadn't done. And so there was tons of waste in that system. And the thing is, when you gallop ahead to do things, you do end up with a lot of waste. DeepSeek was a great example of, you know, this AI thing, we can do it on, like, much cheaper chips and many fewer machines. And you don't have to do it that way. 
So I'm hoping that this means that Microsoft have decided to start investing in efficiency. It's a shame, because they used to have an amazing team who were fantastic at this kind of stuff. So, as I was saying, Holly spoke at a conference I did last year about code efficiency, with Quarkus being a really good example of a more efficient platform for running Java on. The first person I had on that used to work for Azure, and he was probably the world's expert in actual practical code efficiency. He got made redundant. Yeah. Because Microsoft at the time were not interested in efficiency. So, "who cares? Pfft, go on, out." But he's now working at NVIDIA doing all the efficiency stuff there. Because some people are paying attention. Well, I think the lesson there is that maybe Microsoft were not paying that much attention to efficiency, and the idea that actually you don't need 10 data centers. It's not an easy, well, it's a very difficult change to make things really efficient, but quite often there's a lot of low-hanging fruit in efficiency. Holly Cummins: Absolutely. And you need to remember to do it as well, because I think probably it is a reasonable and correct flow to say, innovate first, optimize second. So, you know, you don't have to be looking at that efficiency as you're innovating, because that stifles the innovation, and, you know, you might be optimizing something that never becomes anything. But you have to then remember, once you've got it out there, to go back and say, "oh, look at all of this low-hanging fruit. Look how much waste there is here. Let's sort it out now that we've proven it's a success." Anne Currie: Yeah. Yeah, it is. Yes. It's like, "don't prematurely optimize" does not mean "never optimize." Holly Cummins: Yes. Yes. Anne Currie: So, my strong suspicion is that Microsoft are kind of waking up to that a little bit. 
The thing is, if you have limitless money and you just throw a whole load of money at things, then it is hard to go and optimize. As you say, it's a bit like that whole thing of going in and turning off those zombie machines. You know, you have to go and do it; you have to choose to do it. If you have limitless money, you never do it, because it's a bit boring; it's not as exciting as a new thing. Yeah, but limitless money has its downsides as well as ups. Holly Cummins: Yes. Who knew? Anne Currie: Yeah, but so I think we are at the end of our time. Is there anything else you want to say before we go? It was an excellent hour. Holly Cummins: Nope. Nope. This has been absolutely fantastic chatting to you, Anne. Anne Currie: Excellent. It's been very good talking to you, as always. And so my final thing is, if anybody who's listening to this podcast has not read Building Green Software from O'Reilly, you absolutely should, because a lot of what we just talked about was covered in the book. Reviewed by Holly. Holly Cummins: I can recommend the book. Anne Currie: I think your name is somewhere, some nice thing you said about it, somewhere on the book cover. But, so, thank you very much indeed. And just a reminder to everybody, everything we've talked about, all the links are in the show notes at the bottom of the episode. And I will see you again soon on the Environment Variables podcast. Goodbye. Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode. 
Mar 6, 2025 • 57min

AI Energy Measurement for Beginners

Host Chris Adams is joined by Charles Tripp and Dawn Nafus to explore the complexities of measuring AI's environmental impact from a novice’s starting point. They discuss their research paper, A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning, breaking down key insights on how energy efficiency in AI systems is often misunderstood. They discuss practical strategies for optimizing energy use, the challenges of accurate measurement, and the broader implications of AI’s energy demands. They also highlight initiatives like Hugging Face’s Energy Score Alliance, discuss how transparency and better metrics can drive more sustainable AI development and how they both have a commonality with eagle(s)! Learn more about our people:Chris Adams: LinkedIn | GitHub | WebsiteDawn Nafus: LinkedInCharles Tripp: LinkedInFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterNews:The paper discussed: A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning [01:21] Measuring the Energy Consumption and Efficiency of Deep Neural Networks: An Empirical Analysis and Design Recommendations [13:26]From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate | Luccioni et al [45:46]Will new models like DeepSeek reduce the direct environmental footprint of AI? | Chris Adams [46:06]Frugal AI Challenge [49:02] Within Bounds: Limiting AI's environmental impact [50:26]Events:NREL Partner Forum Agenda | 12-13 May 2025Resources:Report: Thinking about using AI? 
- Green Web Foundation | Green Web Foundation [04:06]Responsible AI | Intel [05:18] AIEnergyScore (AI Energy Score) | Hugging Face [46:39]AI Energy Score [46:57]AI Energy Score - Submission Portal - a Hugging Face Space by AIEnergyScore [48:23]AI Energy Score - GitHub [48:43] Digitalisation and the Rebound Effect - by Vlad Coroama (ICT4S School 2021) [51:11]The BUTTER Zone: An Empirical Study of Training Dynamics in Fully Connected Neural NetworksBUTTER-E - Energy Consumption Data for the BUTTER Empirical Deep Learning Dataset [51:44]OEDI: BUTTER - Empirical Deep Learning Dataset [51:49]GitHub - NREL/BUTTER-Better-Understanding-of-Training-Topologies-through-Empirical-ResultsBayesian State-Space Modeling Framework for Understanding and Predicting Golden Eagle Movements Using Telemetry Data (Conference) | OSTI.GOV [52:26]Stochastic agent-based model for predicting turbine-scale raptor movements during updraft-subsidized directional flights - ScienceDirect [52:46]Stochastic Soaring Raptor Simulator [53:58]NREL HPC Eagle Jobs Data [55:02]Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI AIAAIC | The independent, open, public interest resource detailing incidents and controversies driven by and relating to AI, algorithms and automationIf you enjoyed this episode then please either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github and LinkedIn!TRANSCRIPT BELOW:Charles Tripp: But now it's starting to be like, well, we can't build that data center because we can't get the energy to it that we need to do the things we want to do with it. we haven't taken that incremental cost into account over time, we just kind of ignored it. And now we hit like the barrier, right? Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. 
In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. If you follow a strict media diet, you switch off the Wi-Fi in your house and you throw your phone into the ocean, you might be able to avoid the constant stream of stories about AI in the tech industry. For the rest of us, though, it's basically unavoidable. So having an understanding of the environmental impact of AI is increasingly important if you want to be a responsible practitioner navigating the world of AI, generative AI, machine learning models, DeepSeek, and the rest. Earlier this year, I had a paper shared with me with the intriguing title A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning. And it turned out to be one of the most useful resources I've since come across for making sense of the environmental footprint of AI. So I was over the moon when I found out that two of the authors were both willing and able to come on to discuss this subject today. So joining me today are Dawn Nafus and Charles Tripp, who worked on the paper and did all this research. And, well, instead of me introducing them, well, they're right here, I might as well let them do the honors themselves. So I'm just going to work in alphabetical order. Charles, I think you're slightly ahead of Dawn, so can I just give you the room to, like, introduce yourself? Charles Tripp: Sure. I'm a machine learning researcher and algorithms researcher, and I've been programming pretty much my whole life, since I was a little kid, and I love computers. 
I researched machine learning, and reinforcement learning in particular, at Stanford, started my own company, but kind of got burnt out on it. And then I went to the National Renewable Energy Lab, where I applied machine learning techniques to energy efficiency and renewable energy problems there. And while I was there, I started to realize that computing energy efficiency was, like, an increasingly important area of study on its own. So I had the opportunity to sort of lead an effort there to create a program of research around that topic. And it was through that work that I started working on this paper and made these connections with Dawn. And I worked there for six years and just recently changed jobs to be a machine learning engineer at Zazzle. I'm continuing to do this research. And, yeah. Chris Adams: Brilliant. Thank you, Charles. Okay, so national, that's NREL that some people refer to. Charles Tripp: That's right. It's one of the national labs. Chris Adams: Okay. Brilliant. And Dawn, I guess I should give you the space to introduce yourself, and welcome back again, actually. Dawn Nafus: Thank you. Great to be here. My name is Dawn Nafus. I'm a principal engineer now in Intel Labs. I also run the Socio-Technical Systems Lab. And I also sit on Intel's Responsible AI Advisory Council, where we look after what kinds of machine learning tools and products we want to put out the door. Chris Adams: Brilliant, thank you, Dawn. And if you're new to this podcast, I mentioned my name was Chris Adams at the beginning of the podcast. I work at the Green Web Foundation; I'm the director of technology and policy there. I'm one of the authors of a report all about the environmental impact of AI last year, so I have some background on this. I also work as the policy chair in the Green Software Foundation Policy Working Group as well. So that's another thing that I do. 
And we'll do our best to make sure that we link to every single paper and project on this, so if there are any particular things you find interesting, please do look for the show notes. Okay, Dawn, shall we start? I think you're both sitting comfortably, right? Shall I begin? Okay, good. So, Dawn, I'm really glad you actually had a chance to both work on this paper and share and let me know about it in the first place. And I can tell, when I read through it, there was quite an effort to, like, do all the research for this. So can I ask, like, what was the motivation for doing this in the first place? And, like, were there any particular people you feel really should read it? Dawn Nafus: Yeah, absolutely. We primarily wrote this for ourselves, in a way, and I'll explain what I mean by that. So, oddly, it actually started life in my role in Responsible AI, where I had recently advocated that Intel should adopt a Protect the Environment principle alongside our suite of other Responsible AI principles, right? Bias and inclusion, transparency, human oversight, all the rest of it. And so, the first thing that comes up when you advocate for a principle, and they did actually implement it, is "what are you going to do about it?" And so, we had a lot of conversation about exactly that, and really started to hone in on energy transparency, in part because, you know, from a governance perspective, that's an easy thing to at least conceptualize, right? You can get a number. Chris Adams: Mmm. Dawn Nafus: You know, it's the place where people's heads first go to. And of course it's the biggest part of, or a very large part of, the problem in the first place. Something that you can actually control at a development level. But once we started poking at it, it was, "what do we actually mean by measuring? And for what? And for whom?" 
So as an example, if we measured, say, the last training run, that'll give you a nice guesstimate for your next training run, but that's not a carbon footprint, right? A footprint is everything that you've done before that, which folks might not have kept track of, right? So, you know, we were really starting to wrestle with this. And then, in parallel, in labs, we were doing some socio-technical work on carbon awareness. And there too, we had to start with measuring, right? You had to start somewhere. And so that's exactly what the team did. And they found, interestingly, or painfully, depending on your point of view, look, this stuff ain't so simple, right? If what you're doing is running a giant training run, you stick CodeCarbon in, or whatever it is, sure, you can get absolutely a reasonable number. If you're trying to do something a little bit more granular, a little bit trickier, it turns out you actually have to know what you're looking at inside a data center, and frankly, we didn't, as machine learning people primarily. And so we hit a lot of barriers, and what we wanted to do was to say, okay, there are plenty of other people who are going to find the same stuff we did, and they shouldn't have to find out the hard way. So that was the motivation. Chris Adams: Well, I'm glad that you did, because this was actually the thing that we found as well when we were looking into this: it looks simple on the outside, and then it feels a bit like a kind of fractal of complexity, and there are various layers that you need to be thinking about. And this is one thing I really appreciated in the paper, that this was kind of broken out like that. 
And Charles, maybe this is actually one thing I can, like, hand over to you because I spoke about this kind of hierarchy of things you might do, like there'sstuff you might do at a data facility level or right all the way down to a, like, a node level, for example.Can you take me through some of the ideas there? Because I know for people who haven't read the paper yet, that seemed to be one of the key ideas behind this, that there are different places where you might make an intervention. And this is actually a key thing to take away if you're trying to kind of interrogate this for the first time.Charles Tripp: Yeah, I think it's, both interventions and measurement, or I should, it's really more estimation at any level. And it also depends on your goals and perspective. So it, like, if you are operating a data center, right? You're probably concerned with the entire data center, right? Like the cooling systems, the idle power draw, the, converting power to different levels, right?Like transformer efficiency, things like that. Maybe even the transmission line losses and all of these things. And you may not really care too much about, like, the code level, right? So the types of measurements you might take there or estimates you might make are going to be different. They're gonna be at, like, the system level.Like, how much is my cooling system using in different conditions, different operating conditions, environmental conditions? From a user's perspective, you might care a lot more about, like, how much energy, how much carbon is this job using? And that's gonna depend on those data center variables. But there's also a degree of like, well, the data center is going to be running whether or not I run my job.Right? So I really care about my jobs impact more. 
And then I might be caring about much shorter term, more local estimates, like ones that might come from measuring the power of the nodes that I'm running on, which was what we did at NREL, or much higher frequency, but less accurate, measurements that come from the hardware itself. Most modern computing hardware has a way to get these hardware estimates of the current power consumption, and you could log those. And there's also difficulties once you start doing that: the measurement itself can cause energy consumption, right? And also potentially interfere with your software and cause it to run more slowly and potentially use more energy. And so, like, there's difficulties there at that level. Yeah, but there's a whole suite of tools that are appropriate for different uses and purposes, right? Like measuring the power at the wall going into the data center may be useful at the data center or multiple data center level. It still doesn't tell you the whole story, right? Like, the losses in the transmission lines and where that power came from are still not accounted for, right? But it also doesn't give you a sense for, like, what happens if I take interventions at the user level? It's very hard to see that from that high level, right? Because there are many things running on the system, different conditions there. From the user's point of view, they might only care about, like, you know, this one key piece of my software that's running, you know, like the kernel of this deep learning network. How much energy is that taking? How much additional energy is that taking? And that's, like, a very different thing that very different measurements are appropriate for, and interventions, right? Like changing that little, you know, optimizing a little piece of code versus, like, maybe we need to change the way our cooling system works on the whole data center or the way that we schedule jobs. 
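Once you are logging the kind of hardware power estimates Charles mentions, turning those samples into an energy figure is usually just integrating power over time. This is a minimal sketch, not any specific tool's API; the sample values are made up, and real readings would come from something like RAPL or NVML.

```python
# Sketch: turning a log of power samples (watts) into an energy estimate
# via trapezoidal integration. The samples here are invented; in practice
# they would be read from hardware counters at some fixed interval.

def energy_joules(timestamps, power_watts):
    """Estimate energy as the area under the sampled power curve."""
    energy = 0.0
    for i in range(1, len(timestamps)):
        dt = timestamps[i] - timestamps[i - 1]          # seconds between samples
        avg_p = (power_watts[i] + power_watts[i - 1]) / 2  # average power in the interval
        energy += avg_p * dt
    return energy

# Four samples, one per second, at a steady 100 W over three seconds:
print(energy_joules([0, 1, 2, 3], [100, 100, 100, 100]))  # 300.0
```

This also makes the sampling trade-off Charles raises concrete: sampling faster captures short spikes more accurately, but the sampling itself costs energy and can perturb the workload being measured.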
Yeah, and the paper goes through many of these levels of granularity. Chris Adams: Yeah, so this is one thing that really struck out at me, because it started at the kind of facility level, which is looking at an entire building, where you mentioned things like, say, you know, power coming into the entire facility. And then I believe you went down to looking at, say, within that facility, there might be one or more data centers, then you're going down to things like a rack level, and then you're going down to kind of a node level, and then you're even going all the way down to, like, a particularly tight loop or the equivalent for that. And when you're looking at things like this, there are questions about what... if you would make something particularly efficient at, say, the bottom level, the node level, that doesn't necessarily have an impact higher up, for example, because that capacity might be just reallocated to someone else. For example, it might just be that there's a certain kind of minimum amount of power draw that you aren't able to have much of an impact on. I mean, like, these are some of the things I was surprised by, or not surprised by, but I really appreciated breaking some of these out, because one thing that was, I guess, counterintuitive when I was looking at this was that things you might do at one level can actually hinder steps further down, for example, and vice versa. Charles Tripp: Yeah, that's right. I mean, I think two important sort of findings are, yeah, like battle scars that we got from doing these measurements. And one data set we produced is called BUTTER-E, which is, like, a really large scale measurement of energy consumption of training and testing neural networks and how the architecture impacts it. And we were trying to get reasonable measurements while doing this. 
And one of the difficulties is that comparing measurements between runs on different systems, even if they're identically configured, can be tricky, because different systems, based on, you know, manufacturing variances, the heat, you know, like how warm is that system at that time, anything that might be happening in the background or over the network, anything that might be just a little different about its environment, can have real, measurable impacts on the energy consumed. So, like, comparing energy consumption between runs on different nodes, even with identical configurations, we had to account for biases, and they're like, oh, this node draws a little bit more power than this one at idle. And we have to, like, adjust for that in order to make a clear comparison of what the difference was. And this problem gets bigger when you have different system configurations, or even the same configuration but running in, like, a totally different data center. So that was, like, one tricky finding. And I think there are two other little ones I can mention; maybe we could go into more detail later. But another one, like you mentioned, is the overall system utilization, and how that's impacted by a particular piece of software running, a particular job running, is going to vary a lot based on what those other users of the system are doing and how that system is scheduled. So, you can definitely get in situations where, yeah, I reduced my energy consumption, but that total system is just going to, that energy is going to be used some other time, especially if the energy consumption savings I get are from shortening the amount of time I'm using a resource and then someone else uses it. But it does mean that the computing is being done more efficiently, right? Like, if everyone does that, then more computing can be done within the same amount of energy. But it's hard to quantify that. Like, what is my impact? 
It's hard to say, right? Chris Adams: I see, yeah. And Dawn, go on, I can see you nodding, so I want you to come in now. Dawn Nafus: If I can jump in a bit, I mean, I think that speaks to one of the things we're trying to bring out, maybe not literally, but make possible, which is that those things could actually be better aligned in a certain way, right? Like, you know, for example, when there is idle time, right? I mean, there are things that data center operators can do to reduce that, right? You know, you can bring things into lower power states, all the rest of it, right? So, in a way, kind of, but at the same time, the developer can't control it, but if they don't actually know that's going on, and it's just like, well, it's there anyway, there's nothing for me to do, right, that's also a problem, right? So in a way, you've got two different kinds of actors looking at it from very different perspectives. And the clearer we can get about roles and responsibilities, right, you can start to do things like reduce your power when things are idling. Yes, you do have that problem of somebody else is going to jump in. But Charles, I think as your work shows, you know, there's still some idling going on, even though you wouldn't think so; maybe you could talk a little bit about that. Charles Tripp: Yeah, so one really interesting thing that I didn't expect going into doing these measurements and this type of analysis was, well, first, I thought, "oh great, we can just measure the power on each node, run things and compare them." And we ran into problems immediately. 
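The per-node bias adjustment Charles described a moment ago, where one node simply draws a bit more at idle than another, can be sketched very simply: subtract each node's own measured idle draw before comparing runs. The numbers below are purely illustrative, not from BUTTER-E.

```python
# Sketch of the per-node idle-bias correction: before comparing two runs
# on different nodes, remove each node's own idle draw so the comparison
# reflects the job, not the node. All figures are made up for illustration.

def job_energy(total_joules, idle_watts, duration_s):
    """Energy attributable to the job after removing the node's idle draw."""
    return total_joules - idle_watts * duration_s

# Node A idles at 210 W, node B at 200 W; the same 100-second job on each.
node_a = job_energy(total_joules=31_000, idle_watts=210, duration_s=100)
node_b = job_energy(total_joules=30_500, idle_watts=200, duration_s=100)
print(node_a, node_b)  # 10000 10500
```

Note the raw totals (31,000 J vs 30,500 J) would rank node A as worse, while the corrected figures rank it as better, which is exactly the "if I switched them, I'd get the opposite result" trap.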
Like, you couldn't compare the energy consumption from two identically configured systems directly, especially if you're collecting a lot of data, because one is just going to use slightly more than the other, because of the different variables I mentioned. And then when you compare them, you're like, well, that run used way more energy, but it's not because of anything about how the job was configured. It's just that that system used a little bit more. So if I switched them, I'd get the opposite result. So that was one thing. But then, as we got into it, and we were trying to figure out, okay, well, now that we've figured out a way to account for these variations, let's see what the impact is of running different software with different configurations, especially neural networks with different configurations, on energy consumption. And our initial hypothesis was that it was based mainly on the size of the neural network and, you know, how many parameters, basically, how many calculations, these sorts of things. And if you look at the research, a lot of the research out there about making neural networks, and largely algorithms in general, more efficient focuses on how many operations, how many flops does this take, you know? And, look, we reduced it by a huge amount, so that means that we get the same energy consumption reductions. We kind of thought that was probably true for the most part. But as we took measurements, we found that had almost no connection to how much energy was consumed. And the reason was that the amount of energy consumed had way more to do with how much data was moved around on the computer. So how much data was loaded from the network? How much data was loaded from disk? How much data was loaded from disk into memory, into GPU RAM for using the GPU, into the different caching levels, and even the registers?
So if we computed, like, how much data got moved in and out of level two cache on the CPU, we could see that had a huge correlation, like an almost direct correlation, with energy consumption. Not the number of calculations. Now, you could get into a situation where basically no data is leaving cache, and I'm doing a ton of computing on that data. In that case, the number of calculations probably does matter, but in most cases, especially in deep learning, it has almost no connection; it's the amount of data moved. So then we thought, okay, well, it's the amount of data moved. It's the data moving. The data has a certain cost. But then we looked deeper, and we saw that actually the amount of data moved is not really what's causing the energy to be consumed. It's the stalls while the system is waiting to load the data. It's waiting for the data to come from, you know, system memory into level three cache. It needs to do some calculations on that data, so it's sitting there waiting while it pulls it out. It's that idle power draw. It could be for just a millisecond, or even a nanosecond or something, right? But it adds up if you have, you know, billions of accesses. Each of those little stalls is drawing some power, and it adds up to be quite a significant amount of power. So we found that the driver of the energy consumption, the primary driver by far in what we were studying in deep learning, was the idle power draw while waiting for data to move around the system. And this was really surprising, because we started with the number of calculations, which turns out to be almost irrelevant, right? And then we're like, well, is it the amount of data moved around? It's actually not quite the amount of data moved around, though that does cause the stalls whenever I need to access the data. It's really that idle power draw. And I think that's probably true for a lot of software.

Chris Adams: Yes.
I think that does sound about right. I'm just gonna check if I follow that, because I think there were a few quite important ideas there. But if you aren't familiar with how computers are designed, it might not be obvious, so I'll try to paraphrase it. So we've had this idea that the main thing is, like, the number of calculations being done. That's what we thought was the key idea.

Charles Tripp: How much work, you know.

Chris Adams: Yeah, exactly. And what we know is that inside a computer you have multiple layers of, let's call them, say, caches, multiple layers where you might store data so it's easy and fast to access, and that starts quite small and then gets larger and larger, but a little bit slower at each step. So you might have, like you said, L2 cache, for example, and that's going to be much, much faster, but smaller than, say, the RAM on your system. And then if you go a bit further down, you've got, like, a disk, which is going to be way larger, and that's going to be somewhat slower still. So moving data between these stages so that you can process it, that was actually one of the things that you were looking at. And it turned out that, while there is some correlation there, one of the key drivers is actually the chip sitting in a kind of ready state, waiting for that data to come in. It can't really be asleep, because it knows the data is going to come in and it'll have to process it. It has to be almost, like, anticipating at all these levels. And that's one of the big drivers of the resource use and the energy use.

Charles Tripp: I mean, so, like, what we saw was, we actually estimated how much energy it took, like, per byte to move data from system RAM to level three cache, to level two, to level one, to a register, at each level. And in some cases, it was so small, we couldn't even really estimate it.
But in most cases, we were able to get an estimate for that. But a much larger cost was initiating the transfer, and even bigger than that was just the idle power draw during the time that the program executed, and how long it executed for. And by combining those, we were able to estimate that most of that power consumption, like 99 percent in most cases, was from that idle time, even those little micro-stalls waiting for the data to move around. And that's because moving the data, while it does take some energy, doesn't take that much in comparison to the amount of energy of keeping the RAM on while the data is just, like, alive in the RAM, or keeping the CPU active, right? Like, CPUs can go into lower power states, but generally at least part of the system has to shut down, so doing that at a very fine-grained scale is not really feasible. Many systems can change power state at a faster rate than you might imagine, but it's still a lot slower than, you know, a per-instruction, per-byte level of, like, I need to load this data, so, okay, shut down the system and wait, right? Not for a second, just a few nanoseconds, but it's just not practical to do that. So it's keeping everything on during that time that's sucking up most of the power. So one simple strategy, though it's difficult to implement in some cases, is to initiate that load, that transfer, earlier. So if you can prefetch the data into the higher levels of memory before you hit the stall where you're waiting to actually use it, you can probably significantly reduce this power consumption due to that idle wait. But it's difficult to figure out how to properly do that prefetching.

Chris Adams: Ah, I see. Thanks, Charles.
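To make the point Charles describes concrete, here's a back-of-envelope sketch in Python. All the numbers are illustrative assumptions, not measurements from the study; the point is just that once idle draw dominates, shrinking stall time (for example, via prefetching) saves far more energy than shrinking the arithmetic does.

```python
# Toy model: energy is dominated by idle draw during memory stalls,
# not by the arithmetic. All wattages here are assumed, for illustration.

IDLE_POWER_W = 150.0      # assumed node draw while stalled, waiting on data (W)
COMPUTE_POWER_W = 250.0   # assumed extra draw while actually computing (W)

def job_energy_joules(compute_s: float, stall_s: float) -> float:
    """Energy = idle draw over the whole run + active draw while computing."""
    total_s = compute_s + stall_s
    return IDLE_POWER_W * total_s + COMPUTE_POWER_W * compute_s

# Same amount of arithmetic, different stall time (e.g. with prefetching):
slow = job_energy_joules(compute_s=10.0, stall_s=40.0)  # lots of stalls
fast = job_energy_joules(compute_s=10.0, stall_s=5.0)   # data prefetched

print(f"stalled run: {slow:.0f} J, prefetched run: {fast:.0f} J")
```

With these assumed numbers, the stalled run uses roughly twice the energy of the prefetched one, even though both do identical computation.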
So it sounds like we might approach this thinking there are some things which feel kind of intuitive, but it turns out there are quite a few counterintuitive things. And like, Dawn, I can see you nodding away sagely here, and I suspect there's a few things that you might have to add on this. Can I give you a bit of space, Dawn, to talk about some of this too? Because this is something that you've shared with me before: that there are maybe some rules of thumb you might use, but it's never that simple, basically, or you realise actually that there's quite a bit more to it than that, for example.

Dawn Nafus: Exactly. Well, I think what I really learned out of this effort is that measurement can actually recalibrate your rules of thumb, right? So you don't actually have to be measuring all the time for all reasons, but even just the simple, I mean, not so simple, story that Charles told, like, okay. You know, I spent a lot of time talking with developers and trying to understand how they work, and at a developer perception level, right? What do they feel like? What's palpable to them, right? Send the stuff off, go have a cup of coffee, whatever it is, right? So they're not seeing all that, you know. And when I talk to them, most of them aren't thinking about the kinds of things that were just raised, right? Like, how much data are you looking at at a time? You can actually set and tweak that. Folks develop an idea about that, and they don't think too hard about it usually, right? So, with measuring, you can start to actually recalibrate the things you do see, right?
I think this also gets back to, you know, why is it counterintuitive that it's these mechanisms in how you actually are training, as opposed to how many flops you're doing, how many parameters. Why is that counterintuitive? Well, at a certain level, you know, the number of flops does actually matter, right? If we do actually have gigantic, you know, I'm gonna call it foundation-model-type-size stuff, and I'm gonna build out an entire data center for it, it does matter. But as you get, you know, down and down and more specific, it's a different ball game. And there are these tricks of scale that are sort of throughout this stuff, right? Like the fact that, yes, you can make a credible claim that a foundation model will always be more energy intensive than, you know, something so small you can run it on a laptop, right? That's always going to be true, right? No measurement necessary, right? You keep going down and down, and you're like, okay, let's get more specific. You can get to, actually, where our frustration really started: if you try to go to the extreme, right, try to chase every single electron through a data center, you're not going to do it. It feels like physics, it feels objective, it feels true, but at minimum you start to hit the observer effect, right? Which is what happened to us. My colleague Nicole Beckage was trying to measure at an epoch level, right, essentially a mini round of training. And what she found was that, you know, she was trying to sample so often that she was pulling energy out of the processing, and it just messed up the numbers, right? So you can try to get down, you know, into what feels like more accuracy, and then all of a sudden you're in a different ballpark.
So these tricks of, like, aggregation and scale, and what can you say credibly at what level, I think are fascinating. But you kind of got to get a feel for it, in the same way that you can get a feel for, "yep, if I'm sending my job off, I know I have at least, you know, however many hours or however many days," right?

Charles Tripp: There's also so much variation that's out of your control, right? Like, one run to another, one system to another, even different times when you ran on the same system, can cause measurable, and in some cases significant, variations in the energy consumption. So I think it's more about understanding what's causing the energy consumption. I think that's the more valuable thing to do. But it's easy to be like, "I already understand it." And I think there's, like, a historical bias towards number of operations, because in old computers without much caching or anything like this, right? Like, I restore old computers, and on, like, an old 386 or IBM XT, right? When it's running, it has registers in the CPU, and then it has main memory. And almost everything, basically how many operations I'm doing, is going to closely correlate with how fast the thing runs and probably how much energy it uses, because most of the energy consumption on those systems is just basically constant, no matter what I'm doing, right? It doesn't, like, idle down the processor while it's not working, right? So there's a historical bias, built up over time, that was focused on the number of operations. And it's also at the programmer level. Like, I'm thinking about: what is the computer doing?
Chris Adams: What do I have control over?

Charles Tripp: But it's only through actually measuring it that you gain a clearer picture of what is actually using energy. And I think if you get that picture, then you'll gain more of an understanding of how I can make this software, or the data center, or anything in between, like job allocation, more energy efficient. But it's only through actually measuring that we can get that clear picture. Because if we guess, especially using our biases from how we learned to use computers, how we learned about how computers work, we're actually very likely to get an incorrect understanding, an incorrect picture, of what's driving the energy consumption. It's much less intuitive than people think.

Chris Adams: Ah, okay, there's a couple of things I'd like to comment on, and then, Dawn, I might give you a bit of space on this. So we were just talking about flops as a thing that people are used to looking at, and it's literally written into the AI Act: things above a certain number of flops are considered, you know, foundational models, for example. So that's a really good example of what this actually might be. And I guess the other thing that I wanted to touch on is that I work in the kind of web land, and, I mean, the Green Web Foundation is a clue in our organization's name. We've had exactly the same thing, where we've been struggling to understand the impact of, say, moving data around, and how much credence you should give to that versus things happening inside a browser, for example. It looks like you've got some similar kinds of issues and things to be wrestling with here.
But Dawn, I wanted to give you a bit of space, because both of you alluded to this idea of having an understanding of what you can and what you can't control, and how you might have a bias for doing one thing and then miss something much larger elsewhere, for example. Can I maybe give you a bit of space to talk about this idea of, okay, well, which things should you be focusing on, and also understanding what's within your sphere of influence? What can you control? What can't you control, for example?

Dawn Nafus: Exactly. I think, in a sense, you've captured the main point, which is, you know, that measurements are most helpful when they are relevant to the thing you can control, right? So as a very simple example, you know, there are plenty of AI developers who have a choice in what data centers they can use. There are plenty who don't, right? You know, when Charles worked at NREL, right, the supercomputer was there. That was it. You're not moving, right? So, if you can move, you know, that overall data center efficiency number really matters, because you can say, alright, "I'm putting my stuff here and not there." If you can't move, there's no need to mess with it. It is what it is, right? At the same time, and this gets us into an interesting problem, again, a tension between what you might look at from a policy perspective versus what a developer might look at. We had a bit of a, can I say, come-to-Jesus moment, where, is that on a podcast? I think I can. Where there was this question of: are we giving people a bum steer by focusing on, you know, granular, developer-level stuff, right? When so much actually rests on how you run the data center, right? So again, you talk about tricks of scale.
On the one hand, you know, the amount of energy that you might be directly saving by using or not using something, by the time all of those things move through the grid and you're talking about energy coming off of the transmission cables, right, in aggregate might not actually be directly that big. It might be, but it might not be. And then you flip that around and you think about what aggregate demand looks like, and the fact that so much of AI demand is, you know, what's putting pressure on our electricity grid, right? Then that's the most effective thing you could do: actually get these, you know, very specific individual jobs down and down, right? So, again, it's all about what you can control, but whatever perspective you take is just going to flip your, you know, your understanding of the issue around.

Chris Adams: So this was actually one thing I quite appreciated from the paper. There were a few things saying, and it does touch on this idea, that, yeah, you might be focusing on the thing that you feel that you're able to control, but just because you're able to make one part over here very efficient, that doesn't necessarily translate into a saving higher up in the system, simply because if higher up in the system isn't set up to actually take advantage of that, then you might never achieve some of these savings. It's a little bit like when you're working in cloud, for example. People tell you to do all these things to kind of optimize your cloud savings.
But if people are not turning data centers off, at best you might be slowing the growth of infrastructure rollout in future, and these are much, much harder things to claim responsibility for, or to say, "yeah, if it weren't for me doing those things, we wouldn't have had that happen." This is one of the things that I appreciated the paper making some allusions to. I mean, to be honest, when I was reading this, I was like, wow, there was obviously some stuff for beginners, but there's actually quite a lot here which is quite meaty for people who are thinking of it at a much larger, systemic level. So there's definitely things that experts could take away from this as well. So, I just want to check: are there any particular takeaways the two of you would like to draw people's attention to, beyond what we've been discussing so far? Because I quite enjoyed the paper, and there's a few kind of nice ideas in it. Charles, if I just give you a bit of space to kind of come in.

Charles Tripp: Yeah. I've got kind of two topics that I think build on what we talked about before, but could be really useful for people to be aware of. So one is, sort of one of the outcomes of our studying the impact of different architectures, data sets, and hyperparameter settings on deep neural network energy consumption was that the most energy efficient networks, and largely that correlates with the most time efficient as well, but not always, were not the smallest ones, and they were not the biggest ones, right? The biggest ones just required so much data movement. They were slow. The smallest ones took a lot more iterations, right? It took a lot more for them to learn the same thing.
And the most efficient ones were the ones where the working sets, the amount of data that was moved around, matched the different cache sizes. So as you made the network bigger, it got more efficient, because it learned faster. Then, when it got so big that the data between layers, the communication between layers, for example, started to spill out of a cache level, it became much less energy efficient, because of that data movement stall happening. So we found that there is, like, an optimum point there. And for most algorithms this is probably true: if the working set is sized appropriately for the memory hierarchy, you gain the most efficiency, right? Because generally, as I can use more data at a time, I can get my software to work better, right, more efficiently. But there's a point where it falls out of the cache, and that becomes less efficient. Exactly what point is going to depend on the software. But I think focusing on that working set size and how it maps to the hardware is a really key piece for almost anyone looking to optimize software for energy efficiency. Think about: how much data am I moving around, and how does that map to the cache? So that's, like, a practical thing.

Chris Adams: Can I stop you there? Because I find that quite interesting, in that a lot of the time as developers we're kind of taught to abstract away from the underlying hardware, and that seems to be going the other way. That's saying, "no, you do need to be thinking about this. You can't. There's no magic trick."

Charles Tripp: Right. And so, like, for neural networks, that could mean sizing my layers so that those working sets match the cache hierarchy, which is something that no one even considers. It's not even close in most architectures. Like, no one has even thought about this.
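As a rough sketch of the sizing rule Charles describes, you can estimate whether the data passed between two layers fits in a given cache level. The cache size and float width below are illustrative assumptions; you'd check your own CPU's real cache sizes (for example, via lscpu on Linux) before relying on numbers like these.

```python
# Back-of-envelope check: does the working set moved between two neural
# network layers stay inside a cache level? Sizes here are assumptions.

L2_BYTES = 1 * 1024 * 1024   # assumed 1 MiB of L2 cache per core
BYTES_PER_FLOAT = 4          # float32 activations

def working_set_bytes(batch_size: int, layer_width: int) -> int:
    # Activations passed between two adjacent layers for one batch.
    return batch_size * layer_width * BYTES_PER_FLOAT

def fits_in_l2(batch_size: int, layer_width: int) -> bool:
    return working_set_bytes(batch_size, layer_width) <= L2_BYTES

# A 256-wide layer at batch 512 stays inside the assumed L2 (512 KiB);
# a 4096-wide layer at the same batch (8 MiB) spills out to L3/DRAM,
# which is where the data movement stalls Charles describes begin.
print(fits_in_l2(512, 256), fits_in_l2(512, 4096))
```

This ignores weights, gradients, and framework overhead, so it's a first-order estimate only; the point is that layer width and batch size together determine whether you sit inside or outside a cache level.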
The other thing is on your point about data center operations and the different perspectives. One thing that we started to think about as we were doing some of this work was that it might make sense to allocate time, or in the case of a commercial cloud operator, even charge fees, based at least partly on the energy rather than the time, so as to incentivize people to use less energy, right? Like, make things more energy efficient. Those can be correlated, but not always, right? And another piece of that same puzzle that I want to touch on is that, from a lot of data center operators' perspective, they want to show their systems fully utilized, right? Like, there's demand for the system, so we should build an even bigger and better system. When it comes to energy consumption, that's probably not the best way to go, because that means those systems are sitting there, probably doing inefficient things. Maybe even idling a lot of the time, right? Like, a user allocated the node, but it's just sitting there doing nothing, right? It may be more useful, instead of thinking about how much the system is always being utilized, to think about how much computation, or how many jobs, or whatever your utilization metric is, do I get per unit energy, right? Or per unit carbon, right? And you may also think about how much energy savings can I get by doing things like shutting down nodes when they're unlikely to be utilized, and more about having a dynamic capacity, right? Like, at full tilt, I can do however many flops, right? But I can also scale that down to reduce my idle power draw by, you know, 50 percent in low demand conditions. And if you have that dynamic capacity, you may actually be able to get even more throughput, but with less energy, because when there's no demand, I'm scaling down my data center, right?
And then when there's demand, I'm scaling it up. But these are things that require cultural changes in data center operations to happen.

Chris Adams: I'm glad you mentioned this, because, Dawn, I know that you had some notes about this. It sounds like in order to do that, you probably need different metrics exposed, or different kinds of transparency to what we have right now. Probably more, actually. Dawn, can I give you a bit of space to talk about this? Because this is one thing that you told me about before, and it's something that is touched on in the paper quite a few times, actually.

Dawn Nafus: Yeah, I mean, I think we can notice a real gap between the kinds of things that Charles brings his attention to, and the kinds of things that show up in policy environments, in responsible AI circles, right, where I'm a bit closer. We can be a bit vague. And I think we are at the stage, at least my read on the situation is, that regardless of where you sit in the debates, and there are rip-roaring debates about what to do about the AI energy situation, transparency is probably the one thing we can get the most consensus on. But then, just back to that: what the heck does that mean? And I think we need a few more beats than are currently given to what work those measurements are actually doing. You know, some of the feedback we've gotten is, you know, "well, can't you just come up with a standard?" Like, what's the right standard? It's like, well, no, actually. If data centers aren't standard, and there are many different ways to build a model, then, yes, you can have a standard as a way of having a conversation across a number of different parties to do a very specific thing. Like, for example, Charles's example, you know, suggested that if we're charging on a per-energy basis, that changes a whole lot, right?
But what you can't do is say, this is the standard, that is the right way to do it, and then that meets the requirement. Because, you know, what we found is that clearly the world is far more complicated and specific than that. So, you know, I would really encourage the responsible AI community to start to get very specific very quickly, which I don't yet see happening, but I think it's just on the horizon.

Chris Adams: Okay. Well, I'm glad you mentioned maybe taking this a little bit wider, because we've spent a lot of time talking about this paper, but there are other things happening in the world of AI, actually, and I want to give you folks a bit of space to talk about anything that you would like to direct some attention to, or that you've seen and found particularly interesting. Charles, can I give you some space first, and then give Dawn the same, to, like, either shout out or point to some particular things that, if people have found this conversation interesting so far, they might want to be looking at? More data.

Charles Tripp: Yeah. I mean, I think, both in computer science at large and especially in deep learning within machine learning, we've kind of had an attitude of throwing more compute at the problem, right? And more data. The more data that we put through a model, and the bigger, the more complicated the model is, the more capable it can be. But this brute force approach is one of the main things that's driving this increasing computing energy consumption, right? And I think that it is high time that we start taking a look at making the algorithms we use more energy efficient, instead of just throwing more compute at them.
It's easy to throw more compute at it, which is why it's been done. And also because there hasn't been a significant material incremental cost. Like, "oh, you know, now we need twice as many GPUs. No big deal." But now we're starting to hit constraints, because we haven't thought about that incremental energy cost. We haven't had to, as an industry at large, right? But now it's starting to be like, well, we can't build that data center, because we can't get the energy to it that we need to do the things we want to do with it. We haven't taken that incremental cost into account over time, we just kind of ignored it, and now we've hit the barrier, right? And so I think thinking about the energy costs, and probably this means investing in finding more efficient algorithms and more efficient approaches, as well as more efficient ways to run data centers and run jobs, is gonna become increasingly important, even as our compute capacity continues to increase. The energy costs are likely to increase along with that as we use more and more, and we need to create more generation capacity, right? Like, it's expensive. At some point we're really driving that energy production, and that's going to be an increasingly important cost, as well as, like now, starting to be a constraint on what kind of computing we can do. So I think investing in more efficient approaches is going to be really key in the future.
Chris Adams: There's one thing that I think Dawn might come in on, actually. It seems that you're talking about having more of a focus on surfacing resource efficiency as something that we probably need to value, because, as I understand it, it's not particularly visible in benchmarks or anything like that right now. And if you have benchmarks deciding what counts as a good model, then until that's included, you're not going to have anything like this. Is that the kind of stuff you're suggesting we should probably have? Like, some more recognition of the energy efficiency of something, it being the thing that you draw attention to, or that you include in counting something as good or not, essentially?

Dawn Nafus: You know, I have a particular view of efficiency. I suspect many of your listeners might, as well. You know, I think it's notable that at the moment, when the model of the month, apparently, or the set of models, DeepSeek, has come onto the scene, immediately we're starting to see, for the first time, you know, the Jevons paradox showing up in the public discourse. So this is the paradox that when you make things more efficient, you can also end up stimulating so much demand...

Chris Adams: Absolute use grows even though it gets individually more efficient.

Dawn Nafus: Yeah, exactly. Again, this is, like, this topsy-turvy world that we're in.
And so, you know, now the Jevons paradox is front-page news. You know, my view is that, yes, again, we need to be particular about what sorts of efficiencies we are looking for, and where, and not, you know, sort of willy-nilly, which I'm not saying you're doing, Charles, but what we don't want to do is create an environment where, if you can just say it's more efficient, then somehow, you know, we're all good, right? Which is, you know, what some of the social science of Energy Star has actually suggested is going on. With that said, right, I am a big fan of the Hugging Face Energy Star initiative. That looks incredibly promising. And I think one of the things that's really promising about it, so this is, you know, leaderboards: when people put their models up on Hugging Face, there's some energy measurement that happens, some carbon measurement, and then, you know, leaderboards are created and all the rest of it. And I think there are a few things it's really good at. I can imagine issues as well, but you're, A, you know, creating a way to give some people credit for actually looking. B, you're creating a way of distinguishing between two models very clearly, right? So in that context, do you have to be perfect about how many kilowatts or watts or whatever it is? No, actually, right? You know, you're looking at more or less comparable models. But C, it also interjects this kind of path dependence. Like, who is the next person who uses it? Right? That really matters. If you're setting up something early on, yes, they'll do something a little bit different. They might not just run inference on it. But you're changing how models evolve over time, and kind of steering it towards, you know, having an energy presence at all. So that's pretty cool, to my mind. So I'm looking forward to...

Chris Adams: Cool. We'll share a link to the Hugging Face.
I think, do you know what they were called? I think it was initially called the Energy Star Alliance, and then they've been told that they need to change the name to the Energy Score Alliance, because Energy Star turned out to be a trademark. But we can definitely add a link to that in the show notes, because this is actually officially visible now. It's something that people have been working on since late last year, and we'll share a link to the actual GitHub repo, to the code on GitHub to run this, because this works for both closed source models and open source models. So it does give some of that visibility. Also in France, there is the Frugal LLM challenge, which also sounds similar to what you're talking about, this idea of essentially paying a bit more attention to the energy efficiency aspect of this. And I'm glad you mentioned the DeepSeek thing as well, because suddenly everyone in the world is an armchair expert on William Stanley Jevons' paradox. Everybody knows! Yeah. Dawn Nafus: Actually, if I could just add one small thing, since you mentioned the Frugal effort in France, there's a whole computer science community, sort of almost at arm's length from the AI development community, that's really into just saying, "look, what is the purpose of the thing that I'm building, period." And that, you know, frugal computing, computing within limits, all of that world is really about how do we get just something that somebody is going to actually value, as opposed to getting to the next, you know, score on a benchmark leaderboard somewhere.
So I think that's kind of also lurking in the background here. Chris Adams: I'm glad you mentioned this. We'll add links to both of those, and you immediately make me think of, so we're technologists mostly, the three of us, we're talking about this, and I work in a civil society organization. Just this week, there was a big announcement, a kind of set of demands from civil society about AI, that's being shared at the AI Action Summit, this big summit where all the great and good are meeting in Paris, as you alluded to, next week to talk about what should we do about this. It's literally called Within Bounds, and we'll share a link to that. And it does talk about this, like, well, you know, if we're going to be using things like AI, we need to have a discussion about what they're for. And that's the first thing I've seen which actually has discussions about saying, well, we should be having some concrete limits on the amount of energy for this, because we've seen that if this is a constraint, it doesn't stop engineers. It doesn't stop innovation. People are able to build new things. What we should also do is share a link to, I believe, Vlad Coroamă. I did an interview with him all about Jevons paradox, I think late last year, and that's a really nice deep dive for people who want to sound knowledgeable in these conversations on LinkedIn or social media right now. It's a really useful one there as well. Okay, so we spoke a little bit about these ones here. Charles, are there any particular projects you'd like to name-check before we start to wrap up? Because I think we're coming up to the hour now, actually. Charles Tripp: I don't know, not particular, but I did mention earlier, you know, we published this BUTTER-E data set and a paper along with it, as well as a larger one without energy measurements called BUTTER. Those are available online.
You can just search for it and you'll find it right away. I think, if that's of interest to anyone hearing this, you know, there's a lot of measurements and analysis in there, including all the details of the analysis that I mentioned, where we had this journey from number of compute cycles to, like, amount of stall, in terms of what drives energy consumption. Chris Adams: Ah, it's visible so people can see it. Oh, that's really cool. I didn't realize about that. Also, while you're still here, Charles, while I have access to you: before we did this interview, you mentioned there's a whole discussion about wind turbines killing birds, and you were telling me this awesome story about how you were able to model the path of golden eagles to essentially avoid this kind of bird strike stuff happening. Is that in the public domain? Is it something we can link to? That sounded super cool. Charles Tripp: There's several papers. I'll have to dig up the links, but there's several papers we published, and some software also, to create these models. But yeah, I worked on a project where we took eagle biologists and computational fluid dynamics experts and machine learning experts. And we got together and we created some models based off of real data, real telemetry tracking golden eagle flight paths in many locations, including at wind sites, and matched that up with the atmospheric conditions, the flow field, like orographic updrafts, which is where the wind hits, you know, like a mountain or a hill, and some of it blows up, right? And golden eagles take advantage of this, as well as thermal updrafts caused by heating at the ground causing the air to rise, to fly. Golden eagles don't really like flapping. They like gliding.
And because of that, golden eagles and other soaring birds have flight paths that are fairly easy to predict, right? Like, you may not know, oh, are they going to take a left turn here or a right turn there, but generally they're going to fly in the places where there's strong updrafts. And using actual data and knowledge from the eagle biologists and simulations of the flow patterns, we were able to create a model that allows wind turbines to be sited and also operated, right? Like, under what conditions, what wind conditions in particular and what time of year, which also affects the eagles' behavior, should I perhaps reduce my usage of certain turbines to reduce bird strikes? And in fact, we showed that it could be done without significantly, or even at all, impacting the energy production of a wind site. You could significantly reduce the chances of colliding with a bird. Chris Adams: And it's probably good for the birds too, as well, isn't it? Yeah. Alright, we definitely need to find some links for that. That's going to be absolute catnip for the nerdy listeners who are into this. Dawn, can I just give you the last word? I mean, actually, I should ask, we'll add links to you and Charles online, but if there's anything that you would draw people's attention to before we wrap up, what would you plug here? Dawn Nafus: I actually did want to just give a shout out to National Renewable Energy Lab, period. One of the things that is amazing about them, speaking of eagles, a different eagle, is they have a supercomputer called Eagle. I believe they've got another one now. It is lovingly instrumented with all sorts of energy measurements, basically anything you can think to measure. I think you can do it in there. There's another data set from another one of our co-authors, Hilary Egan, that has some sort of jobs data.
You can dig in and explore what a real-world data center job situation looks like. So I just want to give all the credit in the world to National Renewable Energy Lab and the stuff they do on the computing side. It's just phenomenal. Chris Adams: Yes, I would echo that very much. I'm a big fan of NREL and their output. It's really like a national treasure. Folks, thank you so much for taking me through all of this work and diving in as deeply as we did, and referring to things that soar as well, actually, Charles. I hope we can do this again sometime soon, but otherwise, have a lovely day, and thank you once again for joining us. Lovely seeing you two again. Charles Tripp: Good seeing you. Chris Adams: Okay, ciao! Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
Feb 27, 2025 • 53min

The Week in Green Software: Transparency in Emissions Reporting

Dive into the latest buzz on emissions reporting and the innovative AI Energy Score project by Hugging Face. The discussion tackles the complexities of measuring AI's environmental impact and the importance of collaboration in establishing benchmarks. Key policy shifts, including an executive order on clean energy for data centers, spark debates about ethical considerations and local community impacts. Plus, explore a beginner's guide to energy measurement for computing and upcoming events focused on Green AI initiatives.
Feb 20, 2025 • 1h 1min

How to Tell When Energy is Green with Killian Daly

In this episode, host Chris Adams is joined by Killian Daly, Executive Director of EnergyTag, to explore the complexities of green energy tracking and carbon accounting. They discuss the challenges of accurately measuring and claiming green energy use, including the flaws in current carbon accounting methods and how EnergyTag is working to improve transparency through time-based and location-based energy tracking. Killian shares insights from his experience managing large-scale energy procurement and highlights the growing adoption of 24/7 clean energy practices by major tech companies and policymakers. They also discuss the impact of green energy policies on industries like hydrogen production and data centers, emphasizing the need for accurate, accountable energy sourcing and we find out just how tubular Ireland can actually be!Learn more about our people:Chris Adams: LinkedIn | GitHub | WebsiteKillian Daly: LinkedIn | WebsiteFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterResources:GHG Protocol [09:15]Environment Variables Podcast | Ep 82 Electricity Maps w/ Oliver Corradi [32:22]Masdar Sustainable City [58:28]If you enjoyed this episode then please either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github and LinkedIn!TRANSCRIPT BELOW:Killian Daly: We need to think about this kind of properly and do the accounting correctly.And unfortunately, we don't do the accounting very well today. Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.I'm your host, Chris Adams. 
Hello, and welcome to another edition of Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. When we write software, there are some things we can control directly. For example, we might be able to code in a tight loop ourselves, or design a system that scales to zero when it's not in use. And if we're buying from a cloud vendor, like many of us do now, we're often buying digital resources, like gigabytes of RAM and disk, or maybe virtual CPUs, rather than physical servers. It's a little bit less direct, but we still have a lot of scope to control the impact of our decisions, and what kind of environmental consequences come about from them. However, if we look one level further down the stack, like how the energy powering our kit is sourced, our control is even more indirect. We rarely, if ever, directly choose the kind of generation that powers the data centers our code runs in. But we know it still has an impact. So if we want to source energy responsibly, how do we do it? If you want to know this, it's a really good idea to talk to someone whose literal job for years has been buying lots and lots of clean energy, who is intimately familiar with the standards involved in doing so, and who has spent a lot of time thinking about how to make sure you can tell when the energy you're buying really is green. Fortunately, today I'm joined by just that person, Killian Daly, the Executive Director of the standards organization EnergyTag. Killian, it's really nice to have you on the pod. Thanks for coming on. Killian Daly: Yeah, thanks. Thanks very much for having me, Chris. Great to be on the pod, and an avid listener also. So it's always nice to contribute. Chris Adams: Thank you very much.
Killian, I'm going to give you a bit of space to introduce yourself. I've just mentioned that you're involved in EnergyTag, and we'll talk a little bit about what EnergyTag does. Because I know you, and because I met you maybe three years ago, I figured it might be worth just talking a little bit about our lives outside of green software and sustainability. So, we were in this accelerator with the Green Web Foundation talking about a fossil free internet, and you were talking about EnergyTag and why it's important to track the provenance of energy. We were asked about our passions, and you told me about surfing, and I never ever thought about Ireland as a place where you would surf, because I didn't think it was all that warm. So can you maybe enlighten me here? Because it's not the first country I think of when I think of surfing, and when you said that I was like, "he's having a joke, right?" Killian Daly: Yeah. Well, I do like to joke, but this is not actually one of the jokes. Well, it doesn't need to be warm to surf. You just need to have waves, I suppose. So, yeah, it's something since I was really very young. I've always gone to the west coast of Ireland. Beautiful County Clare, near the Cliffs of Moher. Maybe people know of them. And so we go every year. And my cousins, since a very young age, started surfing. We just, you know, saw these big waves and there's other people out there, surfing, bodyboarding, and we're like, "Hey, let's try that out. That looks really cool." So, yeah, since, I don't know, 6 or 7 years old, I've been going there every year, in summer, also in winter. Me and my cousins also go, yeah, we go at New Year's and get into the frigid cold Atlantic. And, yeah, it's magic, really.
If you have the right wetsuit, you can get through anything. Chris Adams: So there's no such thing as bad weather, just bad clothing, and that also applies to wetsuits. Killian Daly: Yeah. Yeah. It couldn't apply more. And obviously, in winter, you get the biggest swells, right? So actually, people probably don't know it, but Ireland has some of the biggest waves in the world. On the west coast of Ireland, you have really massive 50, 60 foot waves. Yeah, you can get some sort of all-time surf there. So, yeah, it's one of our better kept secrets. Chris Adams: I was not expecting to learn how to go totally tubular on this podcast. Killian Daly: Yeah, Chris Adams: Wow, that's, yeah, that's... Killian Daly: It's not for the faint of heart, but yeah, I would definitely recommend it. Chris Adams: Actually, now that you mention that, going back to the world of energy, now that people talk about Ireland as the Saudi Arabia of wind, and it being windy AF, I can kind of see where you're coming from with it. It makes a bit more sense. So yeah, thank you for that little segue, Killian. Okay, so we've started to talk a little bit about energy. And I know that the organization you work for right now is called EnergyTag. But previously, as I understood it, you worked in other organizations before, and you've been working as a kind of buyer of energy, so you know a fair amount about actually sourcing electricity and how to do that in a responsible way. And when we spoke about this before, you mentioned that, "yeah, I'm used to buying significant amounts of power" in your previous life.
Could you provide a bit of background there, so we can talk a little bit about context and size? Because that might be helpful for us talking about the relative size that tech giants might buy, and so on, and how much of that is applicable. Killian Daly: Yeah, sure. So, I've been thinking about energy for a long time, even before my professional career. I studied energy and electrical engineering from when I was 18 years old and did a master's in that also. And then obviously in my working life as well, I've been basically always in the energy sector. So before EnergyTag, I was basically overseeing the global electricity portfolio, and the procurement of electricity, for a company called Air Liquide, which is basically a large French multinational that produces liquid air. So, oxygen, nitrogen, all the different parts of air, which are essential feedstocks into various industries, and they consume a lot of electricity. So, the portfolio my team oversaw was about 35 to 40 terawatt hours of electricity consumption. Chris Adams: Okay. Killian Daly: Yeah, it's a lot. It's more than my home country, Ireland. It's about the same as Google and Microsoft Chris Adams: put together, yeah. Okay, so, wow. And Killian Daly: So, it's pretty big stuff. And obviously, when you're working on something like that globally, looking at various electricity markets, operating in 80 countries in these huge volumes, I suppose you kind of learn a lot about what it means to buy power.
I mean, tracking the carbon emissions from various things like this, I mean, called like the GHG protocol, which is a kind of like the kind of gold standard for talking about some of that stuff.And this is something that I think you have some exposure to and I remember when you spoke to me, I remember us sitting down one time and you were telling me about There's a thing called scope 1 and there's a thing called scope 2, and that scope 2 was actually a kind of relatively new Idea where this came into this. Can you maybe tell me a little bit like maybe you could explain to someone who is Who's heard of, carbon footprinting, and they know there's a thing called scopes.Why would anyone care about scope 2 in the first place? And how does it come about in the first place? Because it seems like it's not intuitive for most people when they first, when they start thinking about carbon footprints and stuff like that. Killian Daly: Yeah. I think the obvious, first thing you need to take into account when you think of like a company's emissions is, well, what are they burning themselves on site? do they have gas boilers burning gas? Are they burning coal to produce electricity? So that's, I think, very intuitive and obvious. But actually that is not the end of the story. And there's actually like a, a very funny anecdote. I put a true anecdote from the legendary Laurent Segalen, who does the Redefining Energy podcast and general energy guru. And he was actually involved in the kind of creation of a lot of the carbon accounting standards that are used today, this Greenhouse Gas Protocol standard, which is basically used by over 90 percent of companies now to report their carbon emissions.It is the Bible of how carbon accounting works, right? and so 20 years back, he basically was, down in Australia and visiting an aluminum smelter. On site, they were explaining, "this is very low carbon product. we hardly burn any fossil fuels on site. This is incredibly, clean production." 
Chris Adams: The aluminium here, right? Big chunks of aluminium. Okay, right. Killian Daly: Aluminium smelting. So like one of the biggest metallic commodities that we have, very energy intensive. And so, he was there on site and just saw these big overhead wires coming in from yonder, from somewhere, right? And he said, hang on, what are those big cables above? And they were like, "oh, yeah, that's the electricity," obviously driving the smelter, because aluminium, it's all about electricity. That's what powers an aluminium production facility. And so he said, well, hang on, where is that coming from? They're like, "oh, no, don't worry about that. That's not our responsibility." Well, it absolutely is, right? So you need to think about where is that electricity coming from? How is that being produced? And in that case, it was coming from a very large multi-gigawatt coal power plant right next door. Chris Adams: Okay. All right. So I thought you were going to say, oh, it's maybe something clean, like a hydro power station, but no, just a big, fat, dirty, great coal fired power station was the thing generating all the power for it. And that's where Killian Daly: Absolutely. So, that's just a bit of an anecdote, but that's why it's so important to think about what we call scope 2 emissions, the emissions of the electricity that I'm consuming. Because especially as we electrify the economy, right, more and more emissions are going to become scope 2 emissions. They're going to be related to someone else either burning fossil fuels to produce electricity and give it to a consumer, or ideally, using clean energy sources to generate that electricity without carbon emissions. We need to think about this kind of properly and do the accounting correctly. And unfortunately, we don't do the accounting very well today. Chris Adams: Alright, so previously, before we even had that, there wasn't even this notion of scope 2 in the protocol
, you might have just had direct, and then maybe this kind of bucket of indirect stuff, which is really hard to measure, so you're not going to really try to measure it. And okay, so, I remember actually reading about some of this myself, and I always wondered, where does even the notion of a protocol like this come from? And one of the things I realized, particularly with the GHG one, was that, when I listened to Laurent Segalen speaking about some of this, he was basically saying, yeah, this was essentially like Shell, the oil company, who basically said, "we have a way of tracking our own emissions." And, why not use that as a starting point for talking about how we do carbon accounting? And then, scope 2 was a new concept. That was one of the things that they were kind of pushing for. But I suppose this kind of speaks to the idea of who's in those rooms, in those working groups, because that is going to totally change the framing of how we talk about some of this. And I guess that's probably why you started talking and getting involved with things like EnergyTag, so you could take part in those discussions? Because it feels like, if this is what we're going to use to define how we do this or how we do that, just like you have people talking about how BP had an impact on changing how we think about carbon footprints from an individual point of view, you do need people involved in that conversation to say, "actually, no, that's possibly not the best way to think about this, and there are other ways to take this into account." I mean, is this why you got involved in the EnergyTag stuff? Killian Daly: Yeah, it's one of the main reasons, because I used to work for one of the world's largest electricity consumers, and so I was responsible for calculating all of the electricity emissions for that company, right? Like doing the scope 2.
And so I read the Greenhouse Gas Protocol back to front. That was how all the calculations were done. That's what qualified as clean and not clean, right? And I remember thinking, "this is an insanely influential document," right? It's kind of in the weeds. It's kind of staid, maybe, to some people, but I was... Chris Adams: There's a kind of tedium around it, here. Killian Daly: Yeah. But the more I've gotten involved in things like regulation and conversations like that, it's in the annexes, it's in the details, that the big decisions are often made. So I remember thinking back then, this is insanely influential, and some of the ways that we're allowed to claim to consume clean energy are, frankly, disconnected from reality in a way that is just not okay, right? As in, this is far too weak. And definitely, I thought, someday I'd love an opportunity to be able to say, "hang on, can we fix this, please? Can we do this differently? Can we start to respect some sort of basic realities here?" So, yeah, it was definitely one of the drivers of why I joined EnergyTag, which is obviously a nonprofit that has as its mission to clean up accounting, right? To clean up the way we think about electricity accounting. So, yeah, obviously it's a great honor, I suppose, to be part of those ongoing discussions in the Greenhouse Gas Protocol update process. Chris Adams: So, we spoke before about how there was even no scope 2 before, right? The bar was on the floor. And then we introduced the idea that, oh, maybe we should think about the emissions from the electricity.
So that was kind of a leap forward by one person pushing for this, that otherwise wouldn't have been in the standard at all, right? And I just realized, now that you mentioned that, we spoke about oil firms being very involved in this and being very organized in this, and I remember people talking about Shell, and I'm just realising, oh Christ, Shell's in the Green Software Foundation as well. That's something I didn't really think so much about, but they're also there too. So they are organized. Wow. So let's move on. So maybe we could talk a little bit about scope 2 here. The thing I want to get my head around is: can you maybe talk me through some examples of where this falls down a little bit, where it might be stretching the physical reality you spoke about? Where does it need a bit of work, or some improvement, that you're looking to address in EnergyTag, for example? Killian Daly: Yeah, so basically, one way of doing scope 2 accounting is basically looking at the energy contracts, or the electricity supply contracts, that companies have and saying, well, where are you buying your energy from? How are you contracting for your power? Right? And there's a number of fundamental issues. One of them is around the temporal correlation between when you're consuming electricity and when the electricity you're claiming to consume is being produced. And today, right, we actually allow an annual matching window between production and consumption. And put in simple terms, what that means is that you can be basically solar powered all night long, right? You can take solar energy attributes from the daytime and use them at nighttime, or you could take them from the daytime in March and use them at nighttime in November, or at any other time of year.
And this just does not make sense, right? Chris Adams: Not physically how the science works, for a start. Maybe I can just dive into that in a bit more detail, because you've mentioned this idea of certificates, for example, or claiming like that. As I understand it, if I am running a solar farm, I'm generating two separate things. I'm generating power, but I'm also generating the kind of greenness, so these are two independently sellable things, which will sometimes be bundled together. That's how I might buy green energy. But under certain rules, they can be separated. So it's like the greenness that I'm moving, or I'm buying and kind of slapping onto something else to make it green. Is that right? And if it's at the same time, it's kind of okay. If it's from totally separate times of day, you do what you mentioned, where you're saying this thing running at night is running on the greenness from a solar farm, which is stretching our imagination, I suppose, and our credulity. Okay, so that's one example of something that you wanted to get fixed. Are there any other ones, or things that you'd point people to? Killian Daly: The other aspect that I think is pretty problematic in today's standards is, so we've talked about time, and the other big one is space, right? Today we allow consumers to claim to use green energy or clean energy over vast geographical boundaries that really don't respect the physical limits of the grid. So, for example, the whole U.S. is considered to be one region, right? So you can buy green energy attributes produced in Texas and say that you're using them in New York. So you could be 100 percent powered by Texas solar in New York. Or if you're in Europe, Europe is considered one region.
So you have really absurd cases where you can be powered by Icelandic hydro in Germany, and Iceland has never exported any electricity to anyone. There's no cables leaving Iceland. So, that just doesn't make sense. And this has real consequences, because what we're trying to do is obviously drive consumers to buy green energy. If they're doing it in this way, then they're kind of, in some cases, pretending to buy green energy rather than actually going and buying green energy and incentivizing more production of green energy, and the clean flexibility that's needed to integrate that solar and wind at every hour of the day. So, that time and space paradigm is maybe a good way of thinking about some of the fundamental issues here. There are other ones. I don't know how far we want to go down the rabbit hole, but those are two very high level, and hopefully very understandable, examples of the problems we have with today's carbon accounting. Chris Adams: Yeah, I think I understand why that would be something we would address, and so presumably this is the thing that EnergyTag's looking to do now. You're basically saying, well, the current system is asking you to make quite spectacular leaps of faith. And there are certain places where you do want to do leaps of faith and be super creative, but accounting might not be where you want to be super creative or super jumpy. That's not always where you want to have your innovation. So you're saying, well, let's make this more reflective of what's really happening in the world, so that we've got some kind of solid foundation to be working on. Killian Daly: Exactly.
Killian Daly: And just maybe on that point, what we advocate for is not anything radically new, to be honest, because the way electricity markets work today, the way electricity utilities deliver power to customers, just, you know, let's say pure gray electricity on electricity markets, it is based on fundamental concepts of time matching. Power markets work on a 60, 30 or 15 minute balancing period. In Australia, it's 5 minutes. In Europe, there's things called bidding zones. So that's the area over which you can buy and sell electricity. And all of this is to capture these fundamental physical limits of the power system. You have to balance it in real time, and there's only a certain amount of grid capacity. And so you need to define areas over which it's reasonable to trade power or not. So all we're saying is, make the green energy market much more like the real power market. So we're actually, if anything, trying to make it a bit more common sense, whereas today, we're quite detached from some of those basic limits. Chris Adams: Ah, I see. Okay. So in fact, in some ways, there are some comparisons you could plausibly make, where there's a push right now for people to talk about treating environmental data with some of the same seriousness as financial data and apply some of the same constraints. It sounds like something a little bit like that. So if people are going to have to take into account the physical constraints when they're purchasing the actual power part, they should think about applying the same ideas when they're thinking about the greenness of it as well. You can't kind of cheat, even if it makes it a bit easier, for example. Killian Daly: Yeah, well, exactly. And, ultimately, what are we trying to do here? Is the purpose so that certain consumers can say that they have no emissions, or is the purpose to set up an incentive system so that when those consumers actually
do say they have no emissions, they've gone through all of the challenges of grid decarbonization? So they've bought renewables. So they've invested in storage. So, fine, you can consume solar power at nighttime if you put it in a battery during the daytime. They're thinking about demand flexibility. Are they consuming a bit less when there's less wind and sun? These are hard challenges, right? We need to do a lot more of those types of things, and a proper accounting framework will make sure that in getting to zero you have to think about and tick all of those boxes. Whereas today, you can just say you're 100 percent solar powered, and obviously that's just not going to lead to the grid decarbonization in the real world that we want to see. Chris Adams: Maybe if you're in space it might work, but mostly no. Okay. Killian Daly: Mostly, no. Yeah. Chris Adams: Okay, so we spoke a little bit about why there are some problems with the existing process, and we've hinted at some ways you could plausibly fix this. So could you talk me through some of the key things that EnergyTag is pushing for? Because it doesn't sound like you're trying to do something totally wacky. It's not like you're asking for a significant change, like banning the splitting of greenness from power or anything like that. It sounds like you're still working inside the current ways that people are used to buying power at the moment, right? Maybe you could tell me how it's supposed to work under the newer schemes that you're working with. Killian Daly: Yeah. So basically what we're advocating for is that, if you're gonna claim to use green energy based on how you contract for power, then, well, you have to temporally match, right? So you can only claim to use green energy produced in the same hour as your consumption.
Not in the same year. Okay, that's number 1. Number 2 is we need to think about the deliverability constraints, right, and this geographical matching issue. And what we're saying is that, for example, in Europe, Europe is not a perfectly interconnected grid. And so you shouldn't be able to claim you're consuming green energy from anywhere else in Europe. You should be doing it in the same bidding zone or, at least at a Chris Adams: There needs to be some physical, deliverable connection to make it possible. Okay. Killian Daly: Or fine, you can go across border, but you have to show that the power actually did come across the border, and that you're not violating fundamental limits. You're not importing 10 times more certificates than you are real power between 2 countries, right? So we need to have those limits put in place. And another thing that we think is important is that there need to be some sort of controls on individual consumers just buying a load of certificates, for example, from very old assets, and totally relying on those to be 100 percent green. For example, if I'm in Germany, right, and I just sign a deal with a hydro power plant that has existed for 100 years, and I'm time matched, and I'm also within Germany, spatially matched, and I'm claiming to be 100 percent renewable Chris Adams: It's not speeding the transition if it's a hundred years old. That feels like stretching the definition of being an agent of it. Okay. Killian Daly: So that's the third thing in this 3 pillar framework, as we sometimes call it, and that is very important. I think for an existing consumer, it is legitimate to claim a certain amount of that existing power, but that must have a limit, right? You can't just be resource shuffling, saying "well I'm the one who's taking all the green energy," while everyone else is left with the fossil power. That needs to be controlled also. Chris Adams: All right. I think I follow that.
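As a rough illustration, the three pillars just described could be checked like this in code. This is only a sketch of the idea, not EnergyTag's actual standard: the field names, the same-zone rule, and the cut-off year for what counts as a new asset are all made-up simplifications (the real rules are more nuanced, e.g. cross-border claims can be allowed when backed by demonstrated interconnector flows).

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Certificate:
    """One MWh of green generation (illustrative fields only)."""
    production_hour: datetime  # when the MWh was generated
    bidding_zone: str          # where it was generated
    commissioned_year: int     # when the generating asset came online

def can_back_green_claim(cert: Certificate,
                         consumption_hour: datetime,
                         consumer_zone: str,
                         new_asset_after: int = 2020) -> bool:
    time_matched = cert.production_hour == consumption_hour  # pillar 1: same hour, not same year
    deliverable = cert.bidding_zone == consumer_zone         # pillar 2: physically deliverable
    additional = cert.commissioned_year >= new_asset_after   # pillar 3: new supply, not a century-old dam
    return time_matched and deliverable and additional

# A certificate from an old Norwegian hydro plant can't back a claim in Germany:
cert = Certificate(datetime(2025, 1, 1, 18), "NO2", 1925)
print(can_back_green_claim(cert, datetime(2025, 1, 1, 18), "DE-LU"))  # False
```

Under this framing, a claim only stands when all three checks pass at once, which is exactly the "BYOB" point made next.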
So basically: timely, it has to be more or less the same time, right? Deliverable, you need to be able to demonstrate that the power could actually be delivered to that place. And this other one is additional: we need to transition, so you can't look at something which is 100 years old or 50 years old and say "I'm using that, I'm fine." There is this notion of bringing new supply on stream to presumably displace, or move us away from, our current fossil based default, which is not great from a climate point of view, right? Killian Daly: Exactly. And a really good friend of mine who's in the Rocky Mountain Institute, Nathan Iyer, smart guy, we've worked a lot on US federal policy topics, he actually has a really good analogy about this stuff: BYOB, right? So, yeah, for these 3 pillars. Like, when you're going to a party, you need to bring your beer to the party on time. You can't bring it yesterday, you need to bring it when the party is happening. You need to bring it to the party, not to another party. And it needs to also be your own beer. You can't just be taking someone else's. It's a bit simplified, but it's a good analogy, I think, for what we're trying to get at here. If we get everyone to start thinking that way and acting on those fundamental principles, obviously we're going to end up being much more effective in deeply decarbonizing our power systems. Chris Adams: So, decarbonization of the grid communicated through the power of carbonated beverages, basically. Wow! Killian Daly: What could be better? Chris Adams: Well, it's topical at least, it's still talking about CO2, just on slightly different scales. I quite like that, actually. I might borrow that one myself. Okay. So, there's one thing that you mentioned then.
We spoke a little bit before about this idea that greenness can be split from the power, and you're still keeping that. You're not saying there's a ban on selling power that is unbundled from its greenness; that's still a key source of flexibility. But someone who isn't familiar with it might say, "why do we even have this idea of being able to separate these in the first place? Doesn't this make things much more complicated?" I might be going down into the weeds, but is there a reason for that? Is it just that it would be such a big change, that it's really hard to get people to shift to a new way of doing things? What's the thinking around that part? Killian Daly: Well, basically, anytime you want to claim or have a contract, whether that be an unbundled or a bundled PPA contract, Chris Adams: Power Purchase Agreement, right? Killian Daly: Yeah, a long term power purchase agreement, for example, right? So anytime you have a contract for a specific type of electricity, you need an accounting mechanism or a tracking mechanism that sits on top of the grid and allocates generation to consumption, because obviously, the way that the grid actually works is that electrons are just oscillating around the place. There's not really a methodology to physically trace that this individual electron started here and went there, right?
And so, much like power markets do, and they have mechanisms for contractually allocating power between different buyers and sellers, as long as it's matched in time and space, which is a fundamental premise of how our power markets work, we're basically borrowing that concept, but attaching the greenness attribute, Chris Adams: Ah, Killian Daly: and saying, "provided that this system of detaching greenness from the power sufficiently respects temporal and geographical matching requirements, deliverability requirements, then that should be the basis of legitimate green claims," and that essentially creates a market mechanism for financing renewables. If you don't do that, then you cannot have a green power market, basically. You don't have a way of differentiating buyers who have contracted for green power from those who are not doing anything. So, yeah, for example, a few years ago in Air Liquide, we didn't look at what contracts we were sourcing. We just did this location based accounting, where you take an average of all the generation in the grid. Which is another way of looking at electricity emissions, and a very valid way of doing it. But one disadvantage it has is that it basically leaves all consumers passive. They have no incentive to do anything in terms of driving electricity decarbonization. So that's why we need these mechanisms of essentially having tracking systems. Chris Adams: Oh, okay, I see. So if there's no recognition, if I'm working at a large company, why would I choose to buy something green if I can't be recognized for doing that green step?
And so the downside of the location based approach is that yes, it gives you one single answer, but it takes away this idea that organizations which have, honestly, massive amounts of resources can influence or speed up a transition. It seems to be trying to respect that reality, or at least acknowledge that this is what we expect of organizations if they're that powerful. Killian Daly: And one person, I know you've had Olivier Corradi from Electricity Maps on before. They've done some very good blog series on this topic. They obviously have insanely deep knowledge of grid emissions; there's really no one better that I've come across. And they did a very simplified explanation of this stuff. You have the location based method, which is maximizing physical accuracy, and then you have the market based method, which is trying to maximize incentives and financing. And this 24/7 accounting framework that we're advocating is basically trying to make those things meet in the middle, right? Today we have a market based system that is too focused on, I would say, flexibility, making it easy for people to say they're green, and so it has led to very valid criticism. And what we're trying to do now is bring that market based mechanism back closer to the physical realities of the grid, Chris Adams: Oh, I see. Killian Daly: but keeping the incentive system, because if you don't have that, then I don't really see the point in even doing the exercise. Chris Adams: Okay. So there are two things there that I wanted to see if I could dive into a little bit.
So it sounds like this whole notion of not having these things tied to each other is to reflect the fact that people have all these complicated ways to purchase power in the first place. In my world, as someone working as a cloud engineer, I might buy computing by the hour, but I might also buy it in advance for three years, for example, for a lower price, and that provides a bit of stability for whoever's running my server. That's an example of me having multiple different ways of being able to buy something. Essentially, some of that unbundling is actually trying to capture the fact that there are all these complicated ways to arrange to pay for something, and this is one way we can use to value some of the flexibility and stuff you said before. So for example, you spoke about how you can't run something on solar power at night, right? But if you had a battery, you could capture that power and then use the battery a bit like a time machine to run at night. But that's more expensive than just making some claims. So you need to have some way to recognize the fact that it takes a battery and a bunch of extra smarts to run something at night like that. That's what you're trying to go for with that, right? Killian Daly: Yeah, exactly. And again, it's basing things on how power markets work: they already have contractual ways of allocating power between generators and consumers. With the biggest issue with unbundling, so, selling the energy attributes and the power to different people, actually, I think the fundamental problem is the lack of time matching and deliverability requirements. That's where unbundling has gone wrong. Because it said, "we're going to take the green attribute from this energy in Norway, and we're going to allow it to be used at any time of year, anywhere in Europe." That's insane.
That's where it starts to get completely insane. I don't have any particular problem with you producing it in one hydro plant and selling the power into a power pool, and then that being consumed in Norway in the same hour. That's literally how power markets work on a short term power market. Everyone bids into a common pool. And why not just put the attributes into the same pool? They all have the same properties anyway, so it doesn't make a difference. It's the only way you're ever going to have liquidity, right? So I don't see any fundamental issue with that. The fundamental issue is with the annual matching and the Chris Adams: Stretching the physics beyond breaking point, essentially. Killian Daly: And I think that's why unbundling has got such a bad name, right? And I think that's actually been fair, but I do think that it's not bundling or unbundling that's necessarily the issue, it's kind of the Chris Adams: Like those three pillars you mentioned. Okay, gotcha. Thank you for indulging me as I went down that, because I didn't know the answer and I've always been wondering. Okay, so, we spoke about this thing called EnergyTag. We've spoken a little bit about how it's supposed to work and how it's basically an improvement on some of the approaches before. Maybe we could talk a little bit about who's using it. Is anyone adopting it? Because this sounds like a cool idea, but there are many cool ideas that no one is paying attention to, and I suspect that would be quite a demoralizing conversation if that was the case. So, who's using this, and are there any big name adopters you might point people to? Killian Daly: Yeah, so two of the leading ones that come to mind immediately, especially for software folks like yourselves, are Google and Microsoft. They have 24/7 clean energy targets by 2030.
Basically, they're committing to buying clean power for every hour their data centers are consuming electricity, everywhere in which they're operating. So they're two of the most advanced, ambitious corporate climate commitments, in terms of scope 2 electricity procurement at least. And they're obviously two major buyers. And they've been signing some really interesting deals as well. So there are gigawatts already of these 24/7 or close to 24/7 PPAs signed, 80, 90 percent firmed portfolios of renewables, and that's game changing, right? That's something we've seen emerge in the last few years, where traditionally the way of buying renewables has been "I'm going to buy a solar contract, and I'm going to blend that into whatever I'm buying elsewhere." And that's fine, right? But it's only giving you maybe 20 percent of your electricity on an annual basis. Now we're seeing new contract structures that are blending together solar, wind, batteries, and getting maybe 80, 90 percent of a flattened, Chris Adams: So that's what you mean by firmed, then. Firmed is this idea that, if it's not firmed, it's like I'm gonna buy the same amount in total without thinking about when it's matched, but if it's firmed then I'm taking the steps necessary so that I can make a much more credible claim that the power I'm using is coming from generation or from stored amounts of power or something like that. Ah, Killian Daly: And as I said, there are gigawatts of deals done already to date. Are there people doing this hourly matching stuff? Yes, absolutely. Check out our website. There are 30 projects there, with millions of megawatt hours of hourly matching being done. This is now 40 organizations or something, doing it on 5 continents. So this is not rocket science, right? This is literally taking meter data, which is very common, hourly production and generation data.
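That hour-by-hour matching exercise is simple enough to sketch in a few lines of code. The numbers below are invented purely for illustration: a flat 1 MWh-per-hour load against a solar-shaped generation profile that happens to total exactly the same 24 MWh over the day, so annual-style matching reports 100 percent while hourly matching shows how much is really covered:

```python
def hourly_matched_share(consumption, generation):
    """Hour-by-hour matching: surplus in one hour cannot offset a deficit in another."""
    matched = sum(min(c, g) for c, g in zip(consumption, generation))
    return matched / sum(consumption)

def annual_matched_share(consumption, generation):
    """Conventional annual accounting: compare totals only, capped at 100%."""
    return min(sum(generation) / sum(consumption), 1.0)

demand = [1.0] * 24                                   # flat 1 MWh every hour
solar = [0]*7 + [1, 2, 3, 4, 4, 4, 3, 2, 1] + [0]*8   # daytime-only profile, totals 24 MWh

print(annual_matched_share(demand, solar))  # 1.0   -> "100% renewable" on paper
print(hourly_matched_share(demand, solar))  # 0.375 -> only the 9 daytime hours are covered
```

Shifting some daytime surplus into the evening with a battery would raise the hourly score without changing the annual one, which is exactly the behavior the hourly framework is designed to reward.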
You could do it in an Excel file with three columns if you wanted, matching those things together and seeing where you're at. So it's absolutely demonstrated, and leaders are doing it. Is everyone doing this? Is this now the status quo way of doing it? No, absolutely not. And that's what we work every day to try to change, right? So we're still, I would say, relatively in the early days of this transition, but as far as I'm concerned, it's kind of inevitable, for credibility reasons, transparency reasons, and also for pretty fundamental economic reasons. Companies going out there and committing to buy loads of energy that is unmatched to their consumption profile are leaving themselves open to a lot of risks. So, what if you say, okay, I'm just going to buy a load of solar that has no connection to how I actually consume electricity? You're leaving yourself open to a lot of the volatility that we're seeing in electricity markets today. A lot of super high prices in the evening, for example, when your solar contract is not delivering you anything. Then what do you do? Right? You have all this gas volatility and exposure. So it's not just about decarbonization. It's also about things like electricity price hedging. So there are various fundamentals that mean we are going to move in this direction. Chris Adams: Okay, so if I understand that final point you've made: if I want to be buying some power, one of the advantages of doing a longer term deal is that there's a degree of stability. So let's say, I don't know, one country decides to invade another country and totally makes gas prices go through the roof. I'm somewhat insulated from all that stuff, so it's not gonna make it impossible for me to pay my own bills, for example.
And we've seen examples of that over the last few years, for example. So there's a bit of insulation from that kind of stuff. Yeah. Killian Daly: Exactly. So now we do get into contracting mechanisms here. Basically, if you sign one of these PPAs and you commit, let's say, to a 10 year fixed price for power, and if you're committing to a firmed profile, let's say 90 percent matched, that has a very significant hedging value. It means that you've basically fixed a lot of your power price. So no matter what happens, if there's a massive spike in gas prices and power prices go through the roof, you're protected against that. We actually worked on a really interesting study on this 18 months or a couple of years back with Pexapark, who are PPA analysts, and they basically showed that a 10 megawatt consumer in Germany could save over 10 million euro in the best of cases, and at least millions of euro in a given year, by signing these 24/7, or close to 24/7, power purchase agreements with clean electricity assets. Because one thing that clean energy has as an advantage in an ever more uncertain world is that the costs are basically known up front. You know how much money you need to build a wind turbine or a battery up front. It's all capex heavy. And that means that renewables can basically give you a fixed price up front where, honestly, gas cannot, because most of their costs are operational. It's about buying the gas when you need it. Chris Adams: And there's a constant flow. Okay, I guess with the sun, it's not like there's a Mr Burns style blackout of the sun kind of thing, right?
If you're relying on something where no one has control over it, no one can blockade the wind or blockade the sun, that's where some of the stability is coming from, right? Killian Daly: Yeah, exactly. So you have those things, and you know that those fuel sources basically don't cost anything, right? So all your costs are in construction, materials, all things you basically know largely upfront, and that does enable you to provide long term contracts, typically way beyond the terms that fossil fuel generators can offer. And so for the consumers willing to take that long term price risk, it can really offer significant hedging benefits over the alternatives. Chris Adams: Like buying on the spot market, as it were, or buying on the regular market. Okay. All right. So, you mentioned a few large companies doing that stuff, and outside of technology, I think it's the US federal government. It sounds like you said one or two things which are quite interesting. There is this idea that 100 percent is obviously really good, right? And that's what you want to head towards. But there are some places where they're not shooting for 100 percent straight away; for example, they might be going for 50 percent or 60 percent or something like that, and this is something that's kind of okay to start at. Cause I think I heard the US government had a plan for something about this by 2030 or something. Killian Daly: Yeah. So basically, we started the conversation talking about accounting. So I think the first thing you need to do is get the accounting right, so that when you say 50, it means 50, and when you say 100, it means 100. Because if you're just saying 100 and it means 50, then, well, you're screwed, right? You have a bad system.
So I think actually being at 70 percent renewable, but saying that out loud Chris Adams: 70%. Yeah. Killian Daly: and addressing the basic fact that you're only there, that's much better than saying I'm 100 percent renewable on some annualized basis and kind of misleading people about where you're at with decarbonization. Chris Adams: So it's better to be a real 70 than a fake 100, basically, yeah? Killian Daly: Yeah. And so you have electricity suppliers, for example; there's Good Energy in the UK, Octopus Energy in the UK. Most of the electricity suppliers now in the UK, in fact, are offering these hourly tariffs, and before, I don't think any of Chris Adams: It was only one or two that did that. Whoa. Killian Daly: Now, I think this year it'll become more of a norm, where they will offer this alongside their hundred percent renewable tariff. And none of those hourly tariffs are going to start off being a hundred percent renewable, but it's bringing that extra bit of transparency, which I think is great. And the likes of Good Energy, they're already offering this to thousands of customers, right? This is not just the Googles and the Microsofts with their long term targets on this. This is already being offered to thousands of customers around the world, because electricity suppliers are basically doing all the work. They're just giving the consumer the number on some dashboard saying, this is how much matching you have. If you look at the Octopus Energy example, it's quite interesting. They have a tariff called Electric Match for some of their B2B customers, and they're basically reducing your price of power when you're more matched, so that's quite cool. Yeah, they're charging you less the more your demand is matched to their generation, right? And I think that's quite a cool gamification of this.
They're saying: basically, try to consume when there's more wind and sun in the UK, you'll be more matched, and we'll cut your rates, because obviously it costs them less to deliver that in the first place. So that's the type of cool mechanism. Chris Adams: So, I swear, every single time I speak to energy people, they say, "oh yeah, the price is totally changing." Then I think one level up, when we're paying for cloud, it's the same price all the time. Someone's making a bunch of money off us doing all the carbon aware computing stuff, because if the price is going low, I would expect to see those numbers go low. This feels like something we might want to have a conversation about inside the tech industry, then, because if there are savings being made here, it would be nice if those were passed on, I suppose. So, all right, go on, Killian Daly: I think, just very importantly, there's one fundamental truth that we're going to see. It's already the case in some parts of the world, but this is going to be an essential truth of the transition: the more renewables you have, the more volatility you're going to have in power prices, and the more flexible you can be in your consumption, the more rewarding it's going to be economically. If you can consume at the times of day when there's loads of wind and sun, power prices are going to be very low and you're going to get rewarded for that. If you can't, if you can only be baseload, then that is going to cost you. Chris Adams: Ah, okay. That's a useful thing to take into account. And so, we spoke before about scope 2 and stuff like that, and you spoke about this idea that you're defining this standard.
Now, EnergyTag is a standard in its own right, but as I understand it, it's not like you're stepping outside of this. You are still engaging with the protocols and all the stuff like that right now, yeah? Killian Daly: Basically, so yeah, EnergyTag is a nonprofit. We do a couple of different things. We're obviously focused on this area of electricity accounting, electricity markets, and better green energy claims and all that. And so one of the things that we do is we have a voluntary standard for hourly energy tracking, because one of the blocking points we have today is that the way we do this tracking with these energy certificates, it tends to be on a monthly or even an annual basis globally. And sometimes we don't have the information on the certificates to do this hourly matching. So we're trying to debottleneck that particular technical issue, and think about how we track through storage, doing some novel things there. So we have a standard for that, but that's only one of the building blocks, I would say, of this much larger question of how companies do electricity accounting, or how they do carbon accounting more generally. Our standard is there to work on that specific topic, but actually a lot, if not most, of what we do today is working on policy advocacy around the world, working on global standards, and basically advocating for those to change, because ultimately it's the meta-levers: regulations, standards. Once they change, then we're just there to help technically put that all together with some voluntary standards, as long as they're needed. But it's not our aim to be the world's next Greenhouse Gas Protocol. That's really not in our wheelhouse. What we want to do is make sure that global standards and regulations are as good as possible. Chris Adams: Oh, I see. Okay, so let's go for a concrete example of this.
So, in Europe, if you want to do a hydrogen project, which is in some ways a bit like an AI project, in that it's a building that uses loads and loads of power in one place, right? Really dense. If you're going to make green hydrogen, for example, you're taking water and adding loads of electricity to split it, and that's incredibly energy intensive. So if you want the green hydrogen to be green, it should probably only use green energy. And one of the things you told me about before was: yes, we won that fight, so that if people want to get any of the subsidies from the government for this green energy thing, they need to follow this three pillars style approach, right? That's an example of your strategy, yeah? Killian Daly: Yeah, so this is actually what really brought me into EnergyTag. It was a Greenhouse Gas Protocol thing, but basically I was at one of the world's largest hydrogen producers, right? And so I got put onto this topic a few years ago, which I found incredibly important and fascinating, and maybe not well enough understood. When we're going to produce hydrogen using electricity, we need to really make sure that the electricity is squeaky clean, because of the efficiency issues and losses that you just inherently have with electrolysis. And so, just to give a quick example: Jesse Jenkins' lab at Princeton University, a guy called Wilson Ricks, who is a rock star of power system modeling, they modeled this, right? And they show that in the US, if you basically use today's carbon accounting rules, this annual matching stuff, and you built out a hydrogen sector based on those rules, you would have hydrogen that is twice, maybe even three times, as bad as today's fossil fuel hydrogen production. And you'd be calling it clean and subsidizing that production. Totally insane, just literally wasting money. And so it's actually really important.
Billions of dollars of subsidy are going to go into hydrogen in Europe and in the United States. And so we worked a lot with NGOs, advanced companies, and other partners to advocate for these strong requirements on green electricity sourcing for hydrogen, both in the US and also in Europe, and we won on both fronts. Chris Adams: Oh, the US one as well! Killian Daly: Yeah, yeah. So both of those are legislation in Chris Adams: place. They're in! Yay science! Killian Daly: Yeah, that's the legal way now to qualify for the tax credit in the US. In Europe, there's a phase in period on the hourly part to 2030. So, in 5 years or whatever. But anyway, projects built now have to be designed to comply with that. And so, Chris Adams: If you know it's going to be in the law in five years, you're just going to make sure you Killian Daly: You're going to start doing it now, right? More or less, yeah. So, obviously, this is hundreds of millions of tons of CO2 per year on the line between good and bad rules, and that's a concrete example of why these things matter, right? Accounting sounds boring sometimes. I definitely thought it was boring before I realized, "Oh my God, I'm working for a huge power consumer and this is changing everything." So yeah, it's definitely super important that we get this stuff right. Chris Adams: Okay. So it sounds like you've done the work with Air Liquide and essentially laid the groundwork to move from fossil based hydrogen to, hopefully, a greener way of making hydrogen, which ends up being used in all these places. And you said Google and Microsoft had the same power usage as Air Liquide in a single year. Maybe that's changed, but back then. So it looks like we're seeing some promising signs for that over here.
So if we want to see that, what do we need to see at a policy level? Do you need governments saying, "if you want to have green energy for data centers, you need to be at least as good as the hydrogen industry"? Is it something like that? Because what you've described for the hydrogen thing sounds awesome, but I'm not aware of that in the IT sector yet. That's something that I haven't seen people doing. Killian Daly: That is also coming, right? So hydrogen has just been the first battleground, or the first place, I think. Interestingly, on the 14th of January, just before the inauguration of Donald Trump as US president, the Biden administration issued an executive order, which hasn't yet been rescinded, basically on data centers on federal lands. And in that, they do require these 3 pillars. So they do have a 3 pillar requirement on electricity sourcing, which is very interesting, right? I think that's quite a good template. And I think we definitely need to think about, okay, if you're going to start building loads of data centers in Ireland, for example. In Ireland, 20 to 25 percent of electricity consumption is from data centers. That's way more than anywhere else in the world in relative terms. So there's a big conversation at the moment in Ireland about, "okay, well, how do we make sure this is clean?" How do we think about procurement requirements for building a new data center? That's a piece of legislation that's being written at the moment. And how do we also require these data centers to report their emissions once they're operational? So the Irish government is also putting together a reporting framework for data centers. And the energy agency, the Sustainable Energy Authority of Ireland, SEAI, published a report a couple of weeks ago saying, you know what, they need to do this hourly reporting based on contracts bought in Ireland.
So I think we're already seeing promising signs of legislation coming down the road in sectors outside of hydrogen, and data centers are probably an obvious one.

Chris Adams: So people are starting to win. Wow, I didn't realize that. I knew there was an executive order that there was a bit of buzz about, but I didn't realize it set the precedent: we should do what that massive industry over there is doing, because that's now the new baseline, that's where the bar should be.

Killian Daly: Exactly, because what those hydrogen rules actually settled was the whole debate about what clean electricity procurement is. What does it mean to use clean electricity? That has now been defined in the hydrogen rules, and it can be copied and pasted to any large new load. If you want it to be clean, we already know the answer. It's in legislation.

Chris Adams: It's how to tell when energy is green.

Killian Daly: MIT, the IEA, the who's who of energy experts have all modeled this, and they've all found that this is the way to do it. So there's a template there, right? And if you're going to go against it, well, then you're obviously sacrificing the integrity of your accounting schemes.

Chris Adams: Wow! We spoke about how to tell when energy is green, and we seem to be ending on a high. I didn't realise we'd actually got to that. That's really awesome. You've really made my day, Killian. Thank you so much for coming on and diving into the minutiae of carbon accounting for electricity, but also ending with a slightly less depressing piece of news, which I'll take in this current political climate.

Killian Daly: Just to interject before I say goodbye, it's good to end on a positive note, I suppose, in this mad world we live in.
There was a project announced recently that I think people should go check out, in the Middle East, in the UAE, where basically for the first time they're going to deliver around-the-clock solar power: one gigawatt of solar, all night long, because they're building a massive battery alongside a huge solar farm. All year round it's going to deliver green electricity at under 70 US dollars per megawatt hour, which is extremely competitive. So I think what solar and storage are going to do together is going to change the world, and I really think it's going to happen faster than people expect. They're going to start to kill gas. So, despite what politicians want to do with their culture wars, I think green energy economics will, at the end of the day, hopefully answer some of the questions we're trying to solve here. Thanks so much for having me on. It's been a real pleasure.

Chris Adams: Brilliant, thank you so much for that, mate, and may the fossil age end. That's so, so cool to see. I totally forgot about the Masdar thing. We'll share a link to that so people can read about it, because if you care about continued existence on this planet, it's probably a good one to read about. Killian, this has been loads of fun. Thanks a lot, mate, and next time I'm in Brussels I'll let you know, and maybe we can catch up, have a shoof or something like that. Take care.

Killian Daly: Yeah. A hundred percent. Thanks. Bye.

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing.
It helps other people discover the show, and of course, we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
Feb 13, 2025 • 28min

Backstage: Impact Framework

This episode of Backstage focuses on the Impact Framework (IF), a pioneering tool designed to Model, Measure, siMulate, and Monitor the environmental impacts of software. By simplifying the process of calculating and sharing the carbon footprint of software, IF empowers developers to integrate sustainability into their workflows effortlessly. Recently achieving Graduated Project status within the Green Software Foundation, this framework has set a benchmark for sustainable practices in tech. Today, we’re joined by Navveen Balani, Srinivasan Rakhunathan, the project leads and Joseph Cook, the Head of R&D at GSF and Product Owner for Impact Framework, to discuss the journey of the project, its innovative features, and how it’s enabling developers and organizations to make meaningful contributions toward a greener future.Learn more about our people:Navveen Balani: LinkedInSrini Rakhunathan: LinkedInJoseph Cook: LinkedInFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterResources:Impact Framework | Green Software Foundation [00:00]The SCI Open Ontology | Green Software Foundation [04:27]SCI for AI - Addressing the challenges of measuring Artificial intelligence carbon emissions | Green Software Foundation [06:57]SCI Guidance [12:07]CarbonHack [13:03]Impact Framework Github Page [17:58]IF Explorer [20:18]IF Community Google Group [23:42]Events:Kickstarting 2025: A Community-Driven Sustainable Year (February 13 at 5:00 pm CET · Utrecht): [24:21] Advocating for Digital Sustainability (February 19 at 6:00 PM GMT · Hybrid · Brighton): [25:10]Day 0: MeetUp Community GSF Spain (February 20 at 6:00 PM CET · Online): [25:33]Digging Deeper into Digital Sustainability (February 20 at 6:00 pm AEDT· Melbourne): [25:59]Practical Advice for Responsible AI (February 27 at 6:00 pm GMT · London): [26:27]GSF Oslo - February Meetup (February 27 at 5:00 pm CET · Oslo): [26:46]If you enjoyed this episode then please 
either: Follow, rate, and review on Apple Podcasts. Follow and rate on Spotify. Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Chris Skipper: Hello, and welcome to Environment Variables, where we bring you the latest news from the world of sustainable software development. I'm the producer of this podcast, Chris Skipper, and today we're excited to bring you another episode of Backstage, where we peel back the curtain at the GSF and explore the stories, challenges and triumphs of the people shaping the future of green software. We're no longer gatekeeping what it takes to set new standards and norms for sustainability in tech.

This episode focuses on the Impact Framework, also known as IF, a pioneering tool designed to model, measure, simulate, and monitor the environmental impacts of software. By simplifying the process of calculating and sharing the carbon footprint of software, IF empowers developers to integrate sustainability into their workflows effortlessly. Recently achieving graduated project status within the Green Software Foundation, this framework has set a benchmark for sustainable practices in tech.

Today, we have audio snippets from Navveen Balani and Srinivasan Rakhunathan, the project leads, and Joseph Cook, the head of R&D at GSF and product owner for Impact Framework, to discuss the journey of the project, its innovative features, and how it's enabling developers and organizations to make meaningful contributions toward a greener future. And before we dive in, here's a reminder that everything we talk about will be linked in the show notes below this episode.
So without further ado, let's dive into the first question about the Impact Framework for Navveen Balani.

Navveen, the Impact Framework has been described as a tool to model, measure, simulate and monitor the environmental impacts of software. Could you provide a brief overview of how this works and the inspiration behind creating such a framework?

Navveen Balani: Thank you, Chris. And thanks to all the listeners for tuning in. Let's first understand the problem we're solving with the Impact Framework. Software runs the world, but its environmental impact is often invisible. Every CPU cycle, every page load, every API call contributes to energy consumption, carbon emissions, and water usage. Yet, without the right tools, measuring and managing this impact remains a challenge.

This is where the Impact Framework comes in. It's an open source tool designed to transform raw system metrics like CPU usage or page views into tangible environmental insights, helping organizations take action. Built on a plugin-based architecture, it allows users to integrate, customize, and extend measurement capabilities, ensuring scalability and adaptability. More importantly, the Impact Framework helps realize the Software Carbon Intensity specification, making sustainability reporting transparent, auditable, and verifiable. Every calculation, assumption, and methodology is documented in a manifest file, ensuring that impact assessments are replicable and open for collaboration.

At its core, the Impact Framework is built on a simple yet powerful idea: if we can observe it, we can measure its impact.
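The Software Carbon Intensity specification mentioned here reduces to a compact formula: SCI = (E × I + M) per R, where E is energy consumed, I is the carbon intensity of that energy, M is embodied hardware emissions, and R is a functional unit such as an API call. A minimal sketch with purely illustrative numbers:

```python
# Software Carbon Intensity, per the GSF SCI specification:
#   SCI = (E * I + M) per R
# E: energy consumed (kWh)
# I: carbon intensity of that energy (gCO2e/kWh)
# M: embodied (hardware manufacturing) emissions share (gCO2e)
# R: functional unit, e.g. number of API calls served
# All numbers below are illustrative, not real measurements.

def sci(energy_kwh, intensity_g_per_kwh, embodied_g, functional_units):
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# 2 kWh at 400 gCO2e/kWh, plus 100 g embodied, over 10,000 API calls:
print(sci(2.0, 400.0, 100.0, 10_000))  # 0.09 gCO2e per call
```

The division by a functional unit is what makes SCI a rate rather than a total, so a service can grow its traffic while still driving the score down.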
And once we can measure it, we can drive real change: reducing emissions, optimizing resource use and building truly sustainable software.

Chris Skipper: What were some of the most significant technical or organizational challenges you faced during the development of the Impact Framework, and how did you and the team overcome them?

Navveen Balani: The Impact Framework wasn't just built, it evolved. It was shaped by real-world challenges, lessons learned, and the need for a scalable, transparent way to measure software's environmental footprint. The foundation of the Impact Framework was laid through previous projects and ideas, starting with SCI Open Data, which tackled the lack of reliable emissions data, and SCI Guide, which helped organizations navigate different datasets and methodologies. Another critical component was the SCI Open Ontology, which defines relationships between architecture components, establishing clear boundaries for calculating measurements.

Alongside these foundational efforts, real-world use cases from member organizations applying software carbon intensity measurement played a crucial role. These practical implementations tested SCI in diverse environments, refining methodologies and ensuring that SCI calculations were not just theoretical, but applicable and scalable across industries. But data alone wasn't enough. We needed to scale measurement across thousands of observations. Sustainability assessments had to be continuous, automated, and seamlessly integrated into software development. This led to key innovations like aggregation, which enables organizations to condense vast amounts of data into meaningful, structured insights, rolling up emissions data across software components to provide a holistic, system-wide view.

Technology, however, was just one piece of the puzzle. Adoption was equally critical. To accelerate real-world impact, we opened up the Impact Framework to our annual Carbon Hackathon event.
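The "rolling up" of emissions data across components that Navveen describes is essentially a tree aggregation. A minimal sketch (component names and figures are hypothetical; the real Impact Framework aggregates over the component tree declared in a manifest file):

```python
# Toy roll-up of per-component carbon figures into a system-wide total.
# Names and numbers are hypothetical illustrations only.

def roll_up(node):
    """Sum a node's own carbon with the roll-up of all its children."""
    own = node.get("carbon_g", 0.0)
    return own + sum(roll_up(child) for child in node.get("children", []))

system = {
    "name": "web-app",
    "children": [
        {"name": "frontend", "carbon_g": 120.0},
        {"name": "backend", "children": [
            {"name": "api", "carbon_g": 300.0},
            {"name": "database", "carbon_g": 80.0},
        ]},
    ],
}

print(roll_up(system))  # 500.0
```

Because the aggregation is just a pure function over the declared tree, anyone re-running it over the same observations gets the same system-wide figure, which is what makes the results reproducible.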
There, teams worldwide built projects that pushed its capabilities. This was a turning point, validating its flexibility and refining it through community-driven development. At its core, the Impact Framework is built on transparency. Unlike black-box solutions, every input, assumption, and calculation is fully recorded in a manifest file, making assessments auditable and verifiable. This commitment to openness has been crucial in building trust and driving adoption.

Chris Skipper: Looking ahead, what are the next steps for the Impact Framework? Are there specific new features or partnerships on the roadmap that you're particularly excited about?

Navveen Balani: That's a great question, Chris. Looking ahead, the Impact Framework is entering an exciting new phase with a major focus on expanding measurement capabilities for AI. Right now, we're working on the SCI for AI specification, which extends software carbon intensity to both classical AI and generative AI workloads. Measuring AI's environmental impact comes with a new level of complexity. AI isn't just another software workload: the environmental footprint varies significantly depending on whether you're training a model from scratch, fine-tuning a large language model, or simply using an AI API like ChatGPT or Gemini. Each scenario has different compute demands, memory requirements and energy consumption patterns, making standardized measurement both challenging and essential.

Through the Impact Framework, we aim to tackle this by developing new plugins and contributions that enable precise measurement of AI-related energy use, hardware efficiency, and emissions across training, fine-tuning, and inference workloads. These capabilities will collectively evolve through community participation, with researchers, developers, and organizations contributing to refining methodologies, expanding datasets, and ensuring that AI measurement remains transparent, auditable, and standardized.
This collaborative approach will allow organizations to quantify, compare, and optimize their AI workloads, making sustainability a key consideration in AI deployment. Beyond AI, we are also exploring new partnerships to further enhance the Impact Framework's adaptability. Collaboration with cloud providers, software vendors, and sustainability researchers will be crucial in ensuring that the framework evolves alongside industry needs. Our goal is to make environmental impact measurement not just an option, but a fundamental part of software and AI development at scale.

Chris Skipper: Moving on, we have some questions for Srini. Srini, IF emphasizes composability and the ability to create and use plugins. Could you explain how this innovative approach has enabled more accurate and flexible environmental impact calculations for different types of software environments?

Srini Rakhunathan: Absolutely. The Impact Framework's emphasis on composability and the use of plugins is actually a game changer for environmental impact calculations. The framework is highly modular, allowing users to create and integrate various plugins. What it means is you can tailor the framework to fit the specific needs of your software, and it doesn't matter what type of software you have, whether it's cloud based, on-prem or hybrid. What is also advantageous is that the plugin ecosystem covers a wide range of tasks: data collection, impact calculation, reporting, and also very specific tasks like math functions and aggregation functions. This means you can mix and match plugins to create a mashed-up pipeline that reflects your environment, whether you are running your software on web, cloud, or mobile. As long as you know what your software boundaries are, you will be able to combine these plugins and create your own pipeline, if you will.
And that pipeline becomes your calculation pipeline, which can run one time, run as a batch, or run based on certain triggers. There are also manifest files, which we will talk more about later in this conversation: the manifest files ensure that you have a repeatable way of calculating. You mash up these different plugins, create a pipeline, embed it in a manifest file, and it's repeatable. So I think this framework's composability and plugin capability can help you make very, very accurate impact calculations.

Chris Skipper: How have collaborations with organizations like Accenture and Microsoft, as well as the open source community, contributed to the success of the Impact Framework? Are there any standout moments or partnerships you'd like to highlight?

Srini Rakhunathan: Thanks, Chris. That's a great question. The cornerstone of the success of the Impact Framework has been collaboration, and that has been ongoing from the time the project was conceptualized. Bear in mind that when Navveen, who's also with us, and I, along with Joseph and Asim, started thinking about the project, the initial vision was very different. We started off with something called SCI Guide, where we wanted to collate datasets across the open source community to help calculate emissions from software.
We built the SCI Guide, and that transitioned into something called CarbonQL, a primitive version of what we see today in the Impact Framework: a way to make it easier for users and developers to calculate emissions from software. The learnings that Navveen, Joseph, Asim and I went through to come up with the initial version of the Impact Framework, and the amount of work the team has put in to get it to graduation state, is amazing, and it speaks volumes about the collaboration that has gone into building the tool.

One particular highlight I want to call out: every year, GSF organizes what is called the CarbonHack. In 2024, the CarbonHack focused on getting the open source community to come and build on top of the Impact Framework, whether extensions of the tool, content, or newer areas where the Impact Framework can be used. You would be amazed at the amount of contributions that came in, and newer use cases were identified that looked at calculating impacts not just for carbon, but for water and other resources. That, I believe, was a standout moment for the tool.

Chris Skipper: The IF documentation highlights the use of a manifest file and a CLI tool to calculate environmental impacts. Could you walk us through how these tools work and how they lower the barriers for developers to adopt sustainable practices?

Srini Rakhunathan: Definitely, we can talk about both the CLI tool and the manifest file. These are cornerstone capabilities built into the Impact Framework, and they help us calculate environmental impacts.
The manifest file contains a description of the software's infrastructure boundary, encoded as YAML. It's in the standard YAML format, and it covers every component that is part of the software, whether it's front end, middle tier, back end, database, or API: what hardware is used, what the utilization is, what telemetry is involved. It serves as the input to the Impact Framework CLI tool that calculates emissions. The use of the file enables transparency and re-runnability: anyone can re-execute the manifest file, and everyone will come up with the same calculations.

The second piece is the CLI tool. It's a command-line tool, which means it can run in any environment. It processes the manifest file and computes the environmental impacts. Developers pass the path to the manifest file to the CLI tool, and it takes care of the calculations. The tool also supports phased execution, which allows efficient and flexible use of the framework.

Chris Skipper: And finally, what lessons have you learned from working on this project that might benefit other teams looking to build tools or frameworks for sustainability in tech?

Srini Rakhunathan: Thanks for asking this question. At an overall level, I would like to respond by focusing on lessons learned from two aspects: the first is the execution model, and the second is the technical design. On the execution model, this project is a good example of how open source collaboration works. The team used GitHub extensively, most of the meetings were asynchronous, and the engineers, product managers and everyone else who worked on the project collaborated extensively through the open source tools available, which is a great model for scale.
The second success story on the execution model is how the team used customer feedback to make the product better. There were constant sessions with customers, working with them to understand the requirements for a tool that could help them calculate emissions, and that feedback went into the backlog to improve the tool.

The second aspect of lessons learned is the technical design. Here I would call out the whole concept of building a plugin ecosystem and making it composable: you deliver a set of plugins to the community as a base framework, and then you allow extensibility. That's a great model for tools that use sustainability as a calculation engine. Equally important: as you do this, make sure you have extensive, good documentation that helps anyone coming on board understand the framework and start building a new plugin as soon as possible. If you go to the IF GitHub site, you will find a link to the docs page, and the docs are very self-explanatory; they will allow anyone interested in building a plugin to do so in the fastest possible time. So those are, in my mind, the lessons learned, from both the execution model and the technical design aspect.

Chris Skipper: Moving on, we now have some questions for Joseph. Joseph, the Impact Framework recently achieved the status of a graduated project under the GSF.
What does this milestone mean for the project, and what were some of the key factors that led to its graduation?

Joseph Cook: The Impact Framework graduation was a huge milestone because it represents the moment when the project is considered sufficiently mature that it no longer needs to be incubated and instead can largely be handed over to the community. We consider the software to be feature-rich and stable enough that people can integrate it into their systems. In order to graduate, the project had to meet a quite stringent set of requirements, including demonstrating that Impact Framework had real-world users, that we had addressed community requests and bug reports, that we had suitably comprehensive test coverage, and that the documentation and onboarding materials were all fit for purpose. Now that milestone has passed, development activity is going to be much more ad hoc and driven by the community, rather than following a development roadmap defined by the Green Software Foundation. Our efforts at the GSF will now go into driving adoption instead.

Chris Skipper: How does the Impact Framework engage with the broader tech community to encourage adoption? Can you tell us what steps the GSF is taking to include the community as part of the IF development?

Joseph Cook: Impact Framework is used by all kinds of organizations, but it also has a thriving open source community. Most of the discussion with the community happens on GitHub, either through issues or on the discussion board, but we also have a Google group where we share updates and collect feedback. Open source development on Impact Framework is really fundamental. It's baked into the very core of the project.
Instead of trying to ship Impact Framework with built-in features to connect to the thousands of different services and systems that people want to measure, we focused on making it really easy to build plugins, and then encouraged an open source community to develop, where people create their own plugins for the features they care about and share them with each other on our Explorer website, which is like a free marketplace for Impact Framework plugins. This model actually makes the Impact Framework much more robust and stable, because we have a much greater diversity of voices influencing what Impact Framework can do and what it can connect to. It decentralizes the development of the project without compromising the core software, and it also means that our small development team doesn't shoulder the burden of maintaining a huge code base with lots of brittle connectors to third-party APIs and services. Going forward, we want to keep this community thriving and see thousands more Impact Framework plugins listed on the Explorer.

Chris Skipper: How do you see the Impact Framework setting new benchmarks for environmental responsibility in tech? Are there specific metrics or practices that you believe will influence industry standards?

Joseph Cook: Impact Framework is a lightweight piece of software for processing what we call manifest files. These are YAML files that follow a simple format that captures the architecture of the system you're studying, all the observations you've made about that system, and all of the operations that are applied to your data. I like to refer to these files as executable audits, because they mean you don't just report emissions numbers anymore, you actually show your working too. That enables the community to fork and modify your manifests and challenge you, and through iteration you can come to a crowdsourced consensus over your environmental reports.
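The "executable audit" idea is easiest to see in a manifest itself. A simplified, illustrative sketch (field and plugin names are hypothetical and abbreviated; the Impact Framework documentation defines the exact schema):

```yaml
# Illustrative manifest sketch -- simplified, not the exact IF schema.
# The point: the system tree, the raw observations, and the pipeline of
# plugins applied to them are all recorded in one auditable file.
name: web-app-demo
initialize:
  plugins:
    cpu-to-energy:            # hypothetical plugin name
      description: converts CPU utilization into an energy estimate
tree:
  children:
    api-server:
      pipeline:
        - cpu-to-energy       # observations -> energy -> downstream impacts
      inputs:
        - timestamp: 2025-02-13T10:00:00Z
          duration: 3600      # one hour of observation, in seconds
          cpu/utilization: 45 # raw metric recorded for this window
```

Because the observations and the pipeline travel together, anyone who forks the file can re-run it, swap an assumption, and compare results, which is exactly the challenge-and-iterate workflow Joseph describes.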
We would love to see this radical transparency become the gold standard for environmental impact reporting for software. Not only that, but manifests can be the basis for experimentation or forecasting, and help decision makers assess the environmental benefits of implementing some change. Imagine you're challenged about why you chose some specific action: your manifests are your evidence. We think this combination of transparency, reproducibility, composability, and openness is a unique selling point for Impact Framework, and it could transform the way projects and organizations report their emissions and introspect their own operations.

Chris Skipper: For listeners who are interested in getting involved with the Impact Framework, what are the ways they can contribute or support the project? Are there specific skills or areas where the community can make the most impact?

Joseph Cook: If you would like to get involved in Impact Framework, there are many ways to do so. If you're a developer, you can head to the GitHub, where we have plenty of open issues, including some specific good first issues to help people get started. If you want to build plugins, you can download our template and use that to bootstrap your way in, and then submit your plugin to the Explorer using a simple typeform on our website. We always appreciate updates to the documentation too, and if you're interested in integrating Impact Framework into your systems, you can always reach out to research@greensoftware.foundation to discuss it with us directly. We're always happy to help. If you just want to test the water or you have general questions about Impact Framework, you can start discussions on our GitHub discussion board or communicate via our Google group, IF-community@greensoftware.foundation.

Chris Skipper: Awesome. So I'd like to thank Navveen, Srini, and Joseph for their contributions to this episode.
Before we finish off this episode, I have a few events that need announcing.

Starting us off, we have an event happening today, the date of publication of this episode, February the 13th, 2025, at 5pm CET in Utrecht, Netherlands. Any Netherlands-based listeners, you're invited to a Green Software Community Meetup today from 5pm until 8pm at Werkspoorkathedraal. Join us for a free in-person event to kickstart a more sustainable year in tech. You'll hear insightful talks about reducing your software's energy footprint, scaling down for greener computing, and building a grassroots digital sustainability movement. This is a great opportunity to connect with like-minded professionals, share ideas, and be part of a growing Dutch community that's dedicated to building a greener tech future. Food and drinks are provided free of charge.

Next up is an event in Brighton in the UK, happening on February the 19th from 6pm to 8pm at Runway East, which features Senior Digital and Sustainability Manager for OVO, Mark Buss, speaking about the challenges of advocating for digital sustainability within his company. The talk will also be live streamed, so we will have a link in the show notes below for that.

Next up, for any Spanish listeners, we have the first ever meetup of the Green Software Community in Spain, happening online at 6pm on February the 20th. Dia Zero, Comunidad Meetup Green Software Foundation España will be a chance for you to discuss how to collaborate with other people passionate about climate change and green software. We'll have a link to that in the show notes below too.

Next up, down under in Australia, on February the 20th at 6pm AEDT in Melbourne, we have Digging Deeper into Digital Sustainability: how to design and build tech solutions. This will be happening at ChargeFox.
Katherine Buzza will be talking about the impact that software is having on the world's carbon emissions, and how to align your career in tech with the decarbonized future we can all play a role in creating.

Next up, another UK event, on February the 27th at 6pm GMT in London: Practical Advice for Responsible AI will be held in person at the Adaptivist offices, with talks about green AI from Charles Humble and AI governance from Jovita Tam. Click the link below to find out more.

And finally on our events list, GSF Oslo will have its February meetup on the 27th of February, in person at the Accenture offices from 5pm until 8pm. Come along to find out how leveraging data and technology can drive sustainability initiatives and enhance security measures, and dive into green AI, with talks from Abhishek Dewangan and Johnny Mauland. Details in the podcast notes below.

So that's the end of this episode about the Impact Framework project at the GSF. I hope you enjoyed the podcast. To listen to more podcasts about the Green Software Foundation, please visit podcast.greensoftware.foundation, and we'll see you on the next episode. Bye for now!
