The POWER Podcast

POWER
Feb 10, 2025 • 41min

181. A New Paradigm for Power Grid Operation

Power grids operate like an intricate ballet of energy generation and consumption that must remain perfectly balanced at all times. The grid maintains a steady frequency (60 Hz in North America and 50 Hz in many other regions) by matching power generation to demand in real time. Traditional power plants with large rotating turbines and generators play a crucial role in this balance through their mechanical inertia—the natural tendency of these massive spinning machines to resist changes in their rotational speed. This inertia acts as a natural stabilizer for the grid. When there’s a sudden change in power demand or generation, such as a large factory turning on or a generator failing, the rotational energy stored in these spinning masses automatically helps cushion the impact. The machines momentarily speed up or slow down slightly, giving grid operators precious seconds to respond and adjust other power sources.

However, as we transition to renewable energy sources like solar and wind that don’t have this natural mechanical inertia, maintaining grid stability becomes more challenging. This is why grid operators are increasingly focusing on technologies like synthetic inertia from wind turbines, battery storage systems, and advanced control systems to replicate the stabilizing effects traditionally provided by conventional power plants.

Alex Boyd, CEO of PSC, a global specialist consulting firm working in the areas of power systems and control systems engineering, believes the importance of inertia will lessen, and probably sooner than most people think. In fact, he suggested stability based on physical inertia will soon be the least-preferred approach. Boyd recognizes that his view, which was expressed while he was a guest on The POWER Podcast, is potentially controversial, but there is a sound basis behind his prediction.

Power electronics-based systems utilize inverter-based resources, such as wind, solar, and batteries. These systems can detect and respond to frequency deviations almost instantaneously using fast frequency response mechanisms, which allows much faster stabilization than mechanical inertia provides. Power electronics reduce the need for traditional inertia by enabling precise control of grid parameters like frequency and voltage: while they decrease the available physical inertia, they also decrease the amount of inertia required for stability through advanced control strategies. Virtual synchronous generators and advanced inverters can emulate inertia dynamically, offering tunable responses that adapt to grid conditions. For example, adaptive inertia schemes provide high initial inertia to absorb faults but reduce it over time to prevent oscillations.

Power electronic systems also address stability issues across a wide range of frequencies and timescales, including harmonic stability and voltage regulation. This is achieved through multi-timescale modeling and control techniques that are not possible with purely mechanical systems. In addition, inverter-based resources allow for distributed coordination of grid services, such as frequency regulation and voltage support, enabling more decentralized grid operation compared to centralized inertia-centric systems. Power electronic systems are essential for grids with a high penetration of renewable energy sources, which lack inherent mechanical inertia. These systems ensure stability while facilitating the transition to low-carbon energy by emulating or replacing traditional generator functions.
“I do foresee a time in the not-too-distant future where we’ll be thinking about how do we actually design a system so that we don’t need to be impacted so much by the physical inertia, because it’s preventing us from doing what we want to do,” said Boyd. “I think that time is coming. There will be a lot of challenges to overcome, and there’ll be a lot of learning that needs to be done, but I do think the time is coming.”
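The trade-off Boyd describes can be made concrete with the simplified swing equation, which ties a power imbalance to the rate of change of grid frequency. The short Python sketch below is only illustrative (it is not from the episode, and every parameter value is an assumed round number): it compares the frequency nadir after a sudden loss of generation when the system relies on mechanical inertia alone versus when inverter-based resources add a fast frequency response after a short detection delay.

```python
# Illustrative sketch: grid frequency after a sudden generation loss, comparing
# mechanical inertia alone with inertia plus a fast frequency response (FFR)
# from inverter-based resources. All parameter values are assumptions.
# Governor (primary frequency control) action is omitted for simplicity, so the
# inertia-only case keeps declining; real systems arrest it within seconds.

F_NOM = 60.0      # nominal frequency, Hz (North America)
H = 4.0           # system inertia constant, seconds (assumed)
DELTA_P = -0.05   # sudden imbalance, per unit (5% of system load lost)
FFR_GAIN = 0.8    # per-unit FFR power injected per Hz of deviation (assumed)
FFR_DELAY = 0.5   # seconds before inverters respond (assumed)
DT = 0.01         # integration step, seconds

def frequency_nadir(ffr_enabled, t_end=10.0):
    """Integrate the simplified swing equation: d(df)/dt = F_NOM / (2H) * p."""
    freq_dev, t, nadir = 0.0, 0.0, F_NOM
    while t < t_end:
        p = DELTA_P
        if ffr_enabled and t >= FFR_DELAY:
            p -= FFR_GAIN * freq_dev          # inverters push back against the deviation
        freq_dev += (F_NOM / (2.0 * H)) * p * DT
        t += DT
        nadir = min(nadir, F_NOM + freq_dev)
    return nadir

for label, ffr in [("inertia only", False), ("inertia + FFR", True)]:
    print(f"{label:15s} frequency nadir: {frequency_nadir(ffr):.2f} Hz")
```

On these made-up numbers, the fast-responding inverters arrest the frequency decline within a fraction of a hertz, while the inertia-only case keeps falling until slower controls (not modeled here) intervene.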
Jan 31, 2025 • 31min

180. Data Centers Consume 3% of Energy in Europe: Understand Geographic Hotspots and How AI Is Reshaping Demand

The rapid rise of data centers has put many power industry demand forecasters on edge. Some predict the power-hungry nature of the facilities will quickly create problems for utilities and the grid. ICIS, a data analytics provider, calculates that in 2024, demand from data centers in Europe accounted for 96 TWh, or 3.1% of total power demand.

“Now, you could say it’s not a lot—3%—it’s just a marginal size, but I’m going to spice it up a bit with two additional layers,” Matteo Mazzoni, director of Energy Analytics at ICIS, said as a guest on The POWER Podcast. “One is: that power demand is very consolidated in just a small subset of countries. So, five countries account for over 60% of that European power demand. And within those five countries, which are the usual suspects in terms of Germany, France, the UK, Ireland, and the Netherlands, half of that consumption is located in the FLAP-D market, which sounds like a fancy new coffee, but in reality is just five big cities: Frankfurt, London, Amsterdam, Paris, and Dublin.”

Predicting where and how data center demand will grow in the future is challenging, however, especially when looking out more than a few years. “What we’ve tried to do with our research is to divide it into two main time frames,” Mazzoni explained. “The next three to five years, where we see our forecast being relatively accurate because we looked at the development of new data centers, where they are being built, and all the information that is currently available. And, then, what might happen past 2030, which is a little bit more uncertain given how fast technology is developing and all that is happening on the AI [artificial intelligence] front.”

Based on its research, ICIS expects European data center power demand to grow 75% by 2030, to 168 TWh. “It’s going to be a lot of the same,” Mazzoni predicted. “So, those big centers—those big cities—are still set to attract most of the additional data center consumption, but we see the emergence of new interesting markets, like the Nordics and, to a certain extent, southern Europe, with Iberia [especially Spain] being an interesting market.”

Yet, there is still a fair amount of uncertainty around demand projections. Advances in liquid cooling methods will likely reduce data center power usage, because liquid cooling offers more efficient heat dissipation, which translates directly into lower electricity consumption. Additionally, there are opportunities for further improvement in power usage effectiveness (PUE), which is a widely used data center energy efficiency metric. At the global level, the average PUE has decreased from 2.5 in 2007 to a current average of 1.56, according to the ICIS report. However, new facilities consistently achieve a PUE of 1.3 and sometimes much better. Google, which has many state-of-the-art and highly efficient data centers, reported a global average PUE of 1.09 for its facilities over the last year.

Said Mazzoni, “An expert in the field told us when we were doing our research, when tech moves out of the equation and you have energy engineers stepping in, you start to see that a lot of efficiency improvements will come, and demand will inevitably fall.” Thus, data center load growth projections should be taken with a grain of salt. “The forecast that we have beyond 2030 will need to be revised,” Mazzoni predicted. “If we look at the history of the past 20 years—all analysts and all forecasts around load growth—they all overshoot what eventually happened. The first time it happened was when the internet arrived—there were obviously great expectations—and then EVs, electric vehicles, and then heat pumps. But if we look at, for example, last year—2024—European power demand was up by 1.3%, U.S. power demand was up by 1.8%, and probably weather was the main driver behind that growth.”
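For readers unfamiliar with the metric, PUE is total facility energy divided by the energy delivered to the IT equipment, so a lower number means less overhead for cooling, power conversion, and lighting. The quick sketch below shows how the PUE figures cited in the episode translate into overhead power; the 10-MW IT load is a hypothetical example, not a figure from the ICIS report.

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# The IT load below is an assumed example; the PUE values are those cited above.

IT_LOAD_MW = 10.0  # hypothetical data center IT load

pue_values = {
    "2007 global average": 2.50,
    "current global average": 1.56,
    "typical new facility": 1.30,
    "Google fleet average": 1.09,
}

for label, pue in pue_values.items():
    total_mw = IT_LOAD_MW * pue           # total facility draw at this PUE
    overhead_mw = total_mw - IT_LOAD_MW   # cooling, power conversion, lighting, etc.
    print(f"{label:22s} PUE {pue:.2f} -> total {total_mw:5.1f} MW, overhead {overhead_mw:5.1f} MW")
```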
Jan 22, 2025 • 51min

179. District Energy Systems: The Invisible Giant of Urban Efficiency

District energy systems employ a centralized facility to supply heating, cooling, and sometimes electricity for multiple buildings in an area through a largely underground, mostly unseen network of pipes. When district energy systems are utilized, individual buildings do not need their own boilers, chillers, and cooling towers. This offers a number of benefits to building owners and tenants. Among them are:

• Energy Efficiency. Centralized heating/cooling is more efficient than individual building systems, reducing energy use by 30% to 50% in some cases.
• Cost Savings. Lower operations and maintenance costs through economies of scale and reduced equipment needs per building.
• Reduced Environmental Impacts. Emissions are lessened and renewable energy resources can often be more easily integrated.
• Reliability. A more resilient energy supply is often provided, with redundant systems and professional operation.
• Space Optimization. Buildings need less mechanical equipment, freeing up valuable space.

The concept is far from new. In fact, Birdsill Holly is credited with deploying the U.S.’s first district energy system in Lockport, New York, in 1877, and many other cities incorporated district systems into their infrastructure soon thereafter. While district energy systems are particularly effective in dense urban areas, they’re also widely used at hospitals and at other large campuses around the world.

“There’s over 600 operating district energy systems in the U.S., and that’s in cities, also on college and university campuses, healthcare, military bases, airports, pharma, even our sort of newer industries like Meta, Apple, Google, their campuses are utilizing district energy, because, frankly, there’s economies of scale,” Rob Thornton, president and CEO of the International District Energy Association (IDEA), said as a guest on The POWER Podcast.

“District energy is actually quite ubiquitous,” said Thornton, noting that systems are common in Canada, throughout Europe, in the Middle East, and many other parts of the world. “But, you know, not that well-known. We’re not visible. Basically, the assets are largely underground, and so we don’t necessarily have the visibility opportunity of like wind turbines or solar panels,” he said. “So, we quietly do our work. But, I would guess that for the listeners of this podcast, if they went to a college or university in North America, I bet, eight out of 10 lived in a dorm that was supplied by a district heating system. So, it’s really a lot more common than people realize,” said Thornton.
Dec 19, 2024 • 34min

178. Why LVOE May Be a Better Decision-Making Tool Than LCOE for Power Companies

Most POWER readers are probably familiar with levelized cost of energy (LCOE) and levelized value of energy (LVOE) as metrics used to help evaluate potential power plant investment options. LCOE measures the average net present cost of electricity generation over a facility’s lifetime. It includes capital costs, fuel costs, operation and maintenance (O&M) costs, financing costs, expected capacity factor, and project lifetime. Meanwhile, LVOE goes beyond LCOE by considering the actual value the power provides to the grid, including time of generation (peak vs. off-peak), location value, grid integration costs and benefits, contributions to system reliability, environmental attributes, and capacity value.

Some of the key differences stem from the perspective and market context each provides. LCOE, for example, focuses on pure cost comparison between technologies, while LVOE evaluates actual worth to the power system. Notably, LCOE ignores when and where power is generated, whereas LVOE accounts for temporal and locational value variations. Concerning system integration, LCOE treats all generation as equally valuable, while LVOE considers grid integration costs and system needs.

“Things like levelized cost of energy or capacity factors are probably the wrong measure to use in many of these markets,” Karl Meeusen, director of Markets, Legislative, and Regulatory Policy with Wärtsilä North America, said as a guest on The POWER Podcast. “Instead, I think one of the better metrics to start looking at and using more deeply is what we call the levelized value of energy, and that’s really looking at what the prices at the location where you’re trying to build that resource are going to be.”

Wärtsilä is a company headquartered in Finland that provides innovative technologies and lifecycle solutions for the marine and energy markets. Among its main offerings are reciprocating engines that can operate on a variety of fuels for use in electric power generating plants. Wärtsilä has modeled different power systems in almost 200 markets around the world. It says the data consistently shows that a small number of grid-balancing gas engines in a system can provide the balancing and flexibility to enable renewables to flourish—all while maintaining reliable, resilient, and affordable electricity.

Meeusen noted that a lot of the models find engines offer greater value than other technologies on the system because of their flexibility, even though they may operate at lower capacity factors. Having the ability to turn on and off allows owners to capture high price intervals, where prices spike because of scarcity or ramp shortages, while avoiding negative prices by turning off as prices start to dip and drop lower. “That levelized value is one of the things that we think is really important going forward,” he said.

“I think what a lot of models and planning scenarios miss when they look at something like LCOE—and they’re looking at a single resource added into the system—is how it fits within the system, and what does it do to the value of the rest of their portfolio?” Meeusen explained. “I call this: thinking about the cannibalistic costs. If I look at an LCOE with a capacity factor for a combined cycle resource, and don’t consider how that might impact or increase the curtailment of renewable energy—no-cost renewable energy—I don’t really necessarily see the true cost of some of those larger, inflexible generators on the system. And, so, when we think about that, we really want to make sure that what we’re covering and capturing is the true value that a generator has in a portfolio, not just as a standalone resource.”
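A toy calculation helps illustrate the distinction Meeusen draws. In the sketch below (all costs, prices, and output profiles are invented for illustration and are not Wärtsilä figures), LCOE is simply annualized cost divided by annual energy, while LVOE weights each megawatt-hour by the price at the hour it is delivered.

```python
# Toy comparison of LCOE and LVOE for two hypothetical resources.
# LCOE = annualized cost / annual energy (ignores when the power is produced).
# LVOE = price-weighted value of the energy actually delivered / annual energy.
# Every number is an illustrative assumption, not Wärtsilä or market data.

HOURS = 8760

def lcoe(annualized_cost_usd, annual_mwh):
    return annualized_cost_usd / annual_mwh

def lvoe(hourly_output_mw, hourly_price_usd):
    energy = sum(hourly_output_mw)
    value = sum(p * q for p, q in zip(hourly_price_usd, hourly_output_mw))
    return value / energy

# Crude price shape: 1,000 scarcity hours at $200/MWh, the rest at $20/MWh.
prices = [200.0] * 1000 + [20.0] * (HOURS - 1000)

# Resource A: inflexible unit running a constant 90 MW in every hour of the year.
baseload = [90.0] * HOURS
# Resource B: flexible engine plant (100 MW) that runs only during scarcity hours.
peaker = [100.0] * 1000 + [0.0] * (HOURS - 1000)

print(f"Baseload  LCOE ${lcoe(35_000_000, sum(baseload)):6.2f}/MWh   LVOE ${lvoe(baseload, prices):6.2f}/MWh")
print(f"Peaker    LCOE ${lcoe(9_000_000, sum(peaker)):6.2f}/MWh   LVOE ${lvoe(peaker, prices):6.2f}/MWh")
```

On these invented numbers the baseload unit wins on LCOE, but its value per megawatt-hour falls below its cost because most of its output lands in low-priced hours, while the flexible peaker captures roughly $200/MWh of value against a $90/MWh cost, illustrating why ranking resources by LCOE alone can mislead.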
Dec 12, 2024 • 32min

177. How Nuclear Power Could Help Decarbonize Industrial Steam Needs

Clay Sell, CEO of X-energy, discusses the transformative potential of high-temperature gas-cooled nuclear reactors in decarbonizing industrial processes. He highlights that about 20% of global carbon emissions stem from industrial heat, primarily from burning hydrocarbons. Sell emphasizes the need to replace hydrocarbons with nuclear-generated, carbon-free steam. The conversation also covers advancements in reactor safety, cost-effective designs, and the importance of partnerships to facilitate nuclear energy's resurgence, aiming for a cleaner, sustainable future.
Dec 4, 2024 • 31min

176. Hydrogen Use Cases for the Power Industry

Hydrogen is becoming increasingly important to the electric power generation industry for several reasons. One is that hydrogen offers a promising pathway to decarbonize the power sector. When used in fuel cells or burned for electricity generation, hydrogen produces only water vapor as a byproduct, making it a zero-emission energy source. This is crucial for meeting global climate change mitigation goals and reducing greenhouse gas emissions from power generation.

Hydrogen also provides a potential energy storage solution, which is critical for integrating solar and wind energy into the power grid. These renewable resources are intermittent—sometimes they produce more energy than the grid needs, while at other times their output may fall to almost nothing. Hydrogen can be produced through electrolysis during periods of excess renewable energy production, then stored and used to generate electricity when needed. This helps address the challenge of matching energy supply with demand.

Hydrogen is a flexible and versatile fuel that can be used in fuel cells, gas turbines, or internal combustion engines. It can also be blended with natural gas to accommodate existing equipment limitations. The wide range of options makes hydrogen a great backup fuel for microgrids and other systems that require excellent reliability.

“We’ve actually seen quite a bit of interest in that,” Tim Lebrecht, industry manager for Energy Transition and the Chemicals Process Industries with Air Products, said as a guest on The POWER Podcast. Lebrecht noted that hydrogen can serve as a primary fuel in microgrids, or as a backup or supplemental source. “Think of a peaking unit that as temperature goes up during the day, your pricing for power could also be going up,” Lebrecht explained. “At a point, hydrogen may be a peak shave–type situation, where you then maximize the power from the grid, but then you’re using hydrogen as a supplement during that time period.”

Another hydrogen use case revolves around data centers. “Data centers, specifically, have been really interested in: ‘How do we use hydrogen as a backup type material?’ ” Lebrecht said.

Air Products is the world’s leading supplier of hydrogen with more than 65 years of experience in hydrogen production, storage, distribution, and dispensing. Lebrecht noted that his team regularly works with original equipment manufacturers (OEMs); engineering, procurement, and construction (EPC) companies; and other firms to collaborate on solutions involving hydrogen. “We’ve got a great history,” he said. “My team has a great amount of experience.”
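The peak-shave arrangement Lebrecht outlines can be sketched as a simple dispatch rule: buy from the grid in normal hours, and during price spikes cap the grid draw and cover the remaining load with hydrogen. The Python below is only an illustration of that logic; the load, price threshold, grid cap, and price profile are all assumptions, not Air Products figures.

```python
# Illustrative sketch of a hydrogen peak-shave arrangement: the site buys grid
# power in normal hours, and during price spikes it caps its grid draw and
# covers the remaining load with hydrogen-fueled generation.
# The load, prices, threshold, and cap are all assumptions.

SITE_LOAD_MW = 50.0         # assumed constant facility load
PEAK_PRICE_USD_MWH = 150.0  # price above which the site starts peak shaving (assumed)
PEAK_GRID_CAP_MW = 30.0     # grid draw the site limits itself to during peaks (assumed)

def dispatch(hourly_prices):
    """Split each hour's load between grid supply and a hydrogen supplement."""
    grid_mwh = hydrogen_mwh = 0.0
    for price in hourly_prices:
        if price > PEAK_PRICE_USD_MWH:
            grid = PEAK_GRID_CAP_MW          # take as much grid power as the site will pay for
            hydrogen = SITE_LOAD_MW - grid   # hydrogen supplies the rest of the peak
        else:
            grid, hydrogen = SITE_LOAD_MW, 0.0
        grid_mwh += grid
        hydrogen_mwh += hydrogen
    return grid_mwh, hydrogen_mwh

# Hypothetical day: $45/MWh most hours, $260/MWh during a five-hour afternoon peak.
day_prices = [45.0] * 14 + [260.0] * 5 + [45.0] * 5
grid, h2 = dispatch(day_prices)
print(f"Grid: {grid:.0f} MWh   Hydrogen supplement: {h2:.0f} MWh")
```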
Nov 21, 2024 • 20min

175. Communication Is Key to Successful Power Projects

Power plant construction and retrofit projects come in all shapes and sizes, but they all generally have at least one thing in common: complexity. There are usually a lot of moving pieces that must be managed. This can include sourcing the right materials and components, getting equipment delivered to the site at the right time, finding qualified contractors, and overseeing handoffs between working groups. Getting a job done on time and on budget is not as easy as some people might think.

“It absolutely can be difficult and a lot of things to consider,” Kevin Slepicka, vice president of Sales for Heat Recovery Boilers at Rentech Boiler Systems, said as a guest on The POWER Podcast. “You’ve got to make sure that communication is ongoing between your suppliers and the end user.”

Rentech is a leading manufacturer of boiler systems including package boilers, waste heat boilers, and heat recovery steam generators (HRSGs). Rentech’s fabrication facilities are in Abilene, Texas. “We have three shops,” Slepicka explained. “There’s 197,000 square feet of manufacturing space under roof. We’ve got over 100 tons of lift capability with cranes, and we can bring in other cranes for our heavier lifts. Our properties are located on 72 acres, so we have a lot of room for staging equipment, storing equipment, if customers aren’t ready to take delivery at the time the units are done.”

Moving large boilers from Texas to sites around the country and other parts of the world can be difficult, which is another reason why good communication is imperative. “Shipping is a major consideration on how the unit is constructed, how much is going to be built in the facility, and how large we can ship. So, it really goes hand in hand with the design of the boiler,” Slepicka said. “It really is important that we work with our logistics people and work with our partner companies that do our transportation for us.”

Communication with customers on potential future needs is also important. Slepicka said knowing that a retrofit may be required down the road to account for a new environmental regulation, for example, could allow a boiler system to be designed with space to accommodate changes. This could save a lot of money and headaches in the long run. “That’s where you’ve got to be able to work with the customer—make sure you understand the space available and make sure that the unit’s going to work properly,” he said.

Slepicka said Rentech had a customer recently that faced new formaldehyde restrictions and needed its HRSG system modified. “Luckily, we had the space in the unit where that catalyst could be installed in the right location to address the concern they had, so it was a relatively easy retrofit for them to make.” If the prospect had not been considered up front, the cost and complexity could have been much greater.
Nov 5, 2024 • 34min

174. Kingston Coal Ash Spill: Cleanup Workers Were the Unfortunate Losers

On Dec. 22, 2008, a major dike failure occurred on the north slopes of the ash pond at the Tennessee Valley Authority’s (TVA’s) Kingston Fossil Plant. The failure released approximately 5.4 million cubic yards of coal ash onto adjacent land and into the Emory River. The Kingston spill is considered one of the most significant and costly events in TVA history. According to a project completion fact sheet issued jointly by the U.S. Environmental Protection Agency (EPA) and TVA in December 2014, the cleanup took about six years, required a total of 6.7 million man-hours, and cost $1.178 billion.

TVA hired various contractors to perform the post-spill cleanup, removal, and recovery of fly ash at the Kingston site. Perhaps most notable among them was Jacobs Engineering. TVA hired Jacobs in 2009 specifically to provide program management services to assist with the cleanup. Jacobs claims to have “a strong track record of safely managing some of the world’s most complex engineering and environmental challenges.” It has noted that TVA and the EPA’s on-scene coordinator oversaw the worker safety programs for the Kingston cleanup, approving all actions in consultation with the Tennessee Department of Environment and Conservation. Jacobs said TVA maintained rigorous safety standards throughout the cleanup, and that it worked closely with TVA in following and supporting those standards.

Jared Sullivan, author of Valley So Low: One Lawyer’s Fight for Justice in the Wake of America’s Great Coal Catastrophe, studied the Kingston cleanup and followed some of the plaintiffs for more than five years while writing his book. As a guest on The POWER Podcast, Sullivan suggested many of the workers felt fortunate to be employed on the Kingston cleanup. The U.S. economy was not thriving at the time; housing and stock markets were in a funk, and unemployment was relatively high.

“These workers—these 900 men and women—this disaster is kind of a godsend for them as far as their employment goes, you know. A lot of them needed work. Many of them were very, very pleased to get this call,” Sullivan explained. “The trouble is that after a year or so of working on this job site—of scooping up and hauling off this coal ash muck from the landscape, also from the river—they start feeling really, really terribly,” he said. “At first they kind of write off their symptoms as overworking themselves. In many cases, these workers were working 14-hour shifts and just pushing themselves really, really hard because there’s a lot of overtime opportunities. So, that was good for them—that they could work so much, that this mess was so big,” Sullivan continued. After a while, though, some workers began blacking out in their cars, having nosebleeds, and coughing up black mucus, and it became clear to them that the coal ash was the cause.

Jacobs reports that several contractors’ workers at the Kingston site filed workers’ compensation claims against their employers in 2013. These workers alleged that conditions at the site caused them to experience various health issues as a result of excessive exposure to coal ash. Jacobs said many of these claims were found to be unsubstantiated and were rejected. Then, many of the same workers filed lawsuits against Jacobs, even though they may not have been Jacobs employees. Jacobs says it stands by its safety record, and that it did not cause any injuries to the workers.

“The case resolved early last year, after almost 10 years of litigation,” Sullivan said. “Jacobs Engineering and the plaintiffs—230 of them—finally settled the case. $77.5 million for 230 plaintiffs. So, it works out to a couple hundred thousand dollars each for the plaintiffs after the lawyers take their fees—so, not tons of money.” In a statement, Jacobs said, “To avoid further litigation, the parties chose to enter into an agreement to resolve the cases.”
Oct 30, 2024 • 42min

173. Why Data Center Developers Should Think ‘Power First’

You don’t need me to tell you how artificial intelligence (AI) is impacting the power grid; you can just ask AI. Claude, an AI assistant created by Anthropic, told POWER, “AI training and inference are driving unprecedented demand for data center capacity, particularly due to large language models and other compute-intensive AI workloads.” It also said, “AI servers, especially those with multiple GPUs [graphics processing units], require significantly more power per rack than traditional servers—often 2–4x higher power density.”

So, what does that mean for power grid operators and electricity suppliers? Claude said there could be several effects, including local grid strain in AI hub regions, the need for upgraded transmission infrastructure, higher baseline power consumption, and potential grid stability issues in peak usage periods. Notably, it said AI data centers tend to cluster in specific regions with favorable power costs and regulations, creating “hotspots” of extreme power demand.

Sheldon Kimber, founder and CEO of Intersect Power, a clean energy company that develops, owns, and operates a portfolio of 2.2 GW of solar PV and 2.4 GWh of storage in operation or under construction, understands the challenges data centers present for the grid. As a guest on The POWER Podcast, Kimber suggested the only way to meet the massive increase in power demand coming from data centers is with scalable behind-the-meter solutions. “These assets may still touch the grid—they may still have some reliance on the grid—but they’re going to have to bring with them an enormous amount of behind-the-meter generation and storage and other things to make sure that they are flexible enough that the grid can integrate them without creating such a strain on the grid, on rate payers, and on the utilities that service them,” Kimber said.

Yet, data center developers have not traditionally kept power top-of-mind. “The data center market to date has been more of a real estate development game,” Kimber explained. “How close to a labor pool are you? What does it look like on the fiber side? What does the land look like?” He said electric power service was certainly part of the equation, but it was more like part of a “balanced breakfast of real estate criteria,” rather than a top priority for siting a data center. In today’s environment, that needs to change.

Kimber said Intersect Power has been talking to data center companies for at least three years, pitching them on the idea of siting data centers behind the meter at some of his projects. The response has been lukewarm at best. Most of the companies want to keep their data centers in already well-established hubs, such as northern Virginia; Santa Clara, California; or the Columbia River Gorge region in Oregon. Kimber’s comeback has been, “Tell us when you’re ready to site for ‘Power First.’ ”

What “Power First” means is simple: start with power, and the availability of power, as the first criterion, and screen out all the sites that don’t have it. “To date, data center development that was not ‘Power First’ has really been focused on: ‘What does the plug look like?’ ” Kimber said. In other words: How is the developer connecting the data center to the power grid—or plugging in? The developers basically assumed that if they could get connected to the grid, the local utility would find a way to supply the electricity needed. However, it’s getting harder and harder for utilities to provide what developers are asking for.
“The realization that the grid just isn’t going to be able to provide power in most of the places that people want it is now causing a lot of data center customers to re-evaluate the need to move from where they are. And when they’re making those moves, obviously, the first thing that’s coming to mind is: ‘Well, if I’m going to have to move anyway, I might as well move to where the binding constraint, which is power, is no longer a constraint,’ ” he said.
Oct 22, 2024 • 34min

172. What Are Microreactors and How Soon Could We See One in Operation

Microreactors are a class of very small modular reactors targeted for non-conventional nuclear markets. The U.S. Department of Energy (DOE) supports a variety of advanced reactor designs, including gas, liquid-metal, molten-salt, and heat-pipe-cooled concepts. In the U.S., microreactor developers are currently focused on designs that could be deployed as early as the mid-2020s.

The key features of microreactors that distinguish them from other reactor types mainly revolve around their size. Microreactors typically produce less than 20 MW of thermal output. The size obviously allows a much smaller footprint than traditional nuclear power reactors. It also allows for factory fabrication and easier transportability. Among other unique aspects are their self-regulating capability, which could enable remote and semi-autonomous microreactor operation. Their rapid deployability (weeks or months rather than many years) is a huge benefit, too, allowing units to be used in emergency response and other time-sensitive situations. Furthermore, some designs are expected to operate for up to 10 years or more without refueling or significant maintenance, which could be a big benefit in remote locations.

A lot of microreactor development work is being done at the Idaho National Laboratory (INL). John H. Jackson, National Technical Director for the DOE’s Office of Nuclear Energy Microreactor program at INL, was a recent guest on The POWER Podcast. On the show, he noted some of the programs and facilities INL has available to assist in proving microreactor concepts. “I like to say it starts with my program, because I’m overtly focused on enabling and accelerating commercial development and deployment of microreactor technology,” Jackson said. “But there are certainly the entities like the National Reactor Innovation Center, or NRIC, which is heavily focused on deployment and enabling deployment of microreactor technology, as well as small modular reactor technology.”

POWER has reported extensively on the Pele and MARVEL microreactor projects. Project Pele is a Department of Defense (DOD) project that recently broke ground at INL. Meanwhile, MARVEL, which stands for Microreactor Applications Research Validation and EvaLuation, is funded through the DOE by the Office of Nuclear Energy’s Microreactor program.

Project Pele aims to build and demonstrate a high-temperature gas-cooled mobile microreactor manufactured by Lynchburg, Virginia–headquartered BWXT Advanced Technologies. Fueled with TRI-structural ISOtropic (TRISO) particle fuel, Project Pele will produce 1 MWe to 5 MWe for INL’s Critical Infrastructure Test Range Complex (CITRC) electrical test grid. The DOD noted last month that assembly of the final Pele reactor is scheduled to begin in February 2025, and the current plan is to transport the fully assembled reactor to INL in 2026.

The MARVEL design is a sodium-potassium-cooled microreactor that will be built inside the Transient Reactor Test (TREAT) facility at INL. It will generate 85 kW of thermal energy and about 20 kW of electrical output. It is not intended to be a commercial design, but the experience of constructing and operating the unit could be crucial for future microreactor developers and microgrid designers, as future plans are to connect it to a microgrid. “The MARVEL reactor is one of the top priorities, if not the top priority, at the Idaho National Laboratory, along with Project Pele,” Jackson said.
“One or the other—Pele or MARVEL—will be the first reactor built at Idaho National Laboratory in over 50 years.” Still, Jackson was cautious when it came to predicting when the first microreactor might begin operation. “I cringe sometimes when people get a little ahead of themselves and start making bold declarations, like, ‘We’re going to have a microreactor next year,’ for instance. I think it’s important to be excited, but it’s also important to stay realistic with respect to timeframes for deployment,” he said.

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app