

The Data Center Frontier Show
Endeavor Business Media
Data Center Frontier’s editors are your guide to how next-generation technologies are changing our world, and the critical role the data center industry plays in creating our extraordinary future.
Episodes

Sep 16, 2025 • 24min
Generac Steps Into Data Center Backup Power
As artificial intelligence (AI) reshapes the data center landscape, power resiliency is being tested like never before. With enormous new facilities coming online and operators exploring alternatives to diesel, the backup power market is at an inflection point.
In this episode of the Data Center Frontier Show, we sit down with Ricardo Navarro, Vice President of Global Solutions at Generac Power Systems, to discuss how the company is positioning itself as a major player in the data center ecosystem.
Diesel Still Reigns — For Now
Navarro begins by addressing the foundational question: why diesel remains the primary backup power choice for hyperscale and AI workloads.
The answer, he explains, comes down to density, responsiveness, and reliability. Diesel engines respond instantly to the fluctuating loads that are common in AI training clusters, and fuel can be stored directly on-site. While natural gas is gaining traction as a bridging and utility-support solution, true redundancy requires dual pipelines — a level of infrastructure not yet common in data center deployments.
That said, Navarro is clear that the story doesn’t end with diesel. He sees a future where natural gas, paired with batteries, becomes a cost-effective and environmentally attractive option. Hybrid systems, combined with demand response and grid participation programs, could give operators new tools for balancing reliability and sustainability.
“Natural gas might not be the right solution right now, but definitely it will be in the future,” Navarro notes.
Scaling Fast to Meet Hyperscaler Demands
The conversation also explores how hyperscalers are shaping requirements. With campuses needing hundreds of generators, customers are asking not just about product performance, but about scale, lead times, and support.
Generac is addressing that demand by delivering open generator sets in as little as 30 to 35 weeks — about a third of the typical lead time from traditional OEMs. That speed-to-deployment advantage has driven significant new interest in Generac across the hyperscale sector.
From Generators to Energy Technology
Equally important is Generac’s shift toward digital tools and predictive services. Over the past decade, the company has invested in acquisitions such as Deep Sea Electronics, Blue Pillar, and Off Grid Energy, expanding its expertise in controls, telemetry, and microgrid integration.
Today, Generac is layering advanced sensors, machine learning, and AI-driven analytics onto its equipment fleet, enabling predictive failure detection, condition-based maintenance, and smarter load orchestration. This evolution, Navarro explains, represents Generac’s transformation “from being just a generator manufacturer to being an energy technology company.”
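For readers curious what condition-based monitoring looks like in practice, here is a minimal sketch — purely illustrative, not Generac's actual system — of flagging anomalous generator telemetry against a rolling baseline:

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=20, z_threshold=3.0):
    """Flag telemetry readings that deviate sharply from the recent baseline."""
    history = deque(maxlen=window)

    def check(reading):
        # Need a full window of readings before we can establish a baseline.
        if len(history) < window:
            history.append(reading)
            return False
        mu, sigma = mean(history), stdev(history)
        anomalous = sigma > 0 and abs(reading - mu) / sigma > z_threshold
        history.append(reading)
        return anomalous

    return check

# Example: a coolant-temperature stream from a generator sensor (fabricated data).
check = make_anomaly_detector()
readings = [70.0 + 0.1 * (i % 5) for i in range(30)] + [95.0]  # sudden spike at the end
flags = [check(r) for r in readings]  # only the spike is flagged
```

Real fleet analytics would combine many sensors and learned models, but the principle — compare live telemetry to a statistical baseline and act before failure — is the same.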
What’s Next for Generac
Looking ahead, the company is putting real capital behind its ambitions. Generac recently completed a $130 million facility in Beaver Dam, Wisconsin, designed to expand production capacity and meet surging demand from data center customers. With firm domestic and international orders already in place, Navarro says the company is determined “to be in the driver’s seat” as AI accelerates the need for scalable, resilient, and flexible backup power.
For data center leaders, this episode provides a clear look into how backup power strategies are evolving — and how one of the industry’s largest players is preparing for the next wave of energy and infrastructure challenges.

Sep 9, 2025 • 34min
Cologix and Lambda Debut NVIDIA HGX B200 AI Clusters in Columbus, Ohio
Columbus Hosts First NVIDIA HGX B200 AI Cluster, Scaling AI at the Aggregated Edge
In this episode of the Data Center Frontier Show, Matt Vincent sits down with Bill Bentley (Cologix) and Ken Patchett (Lambda) to discuss Columbus, Ohio’s first NVIDIA HGX B200 AI cluster deployment.
The conversation dives into:
Why Columbus is emerging as a strategic hub for AI workloads in the Midwest.
How Lambda’s one-click clusters and Cologix’s interconnection-rich campus enable rapid provisioning, low-latency inference, and scalable enterprise AI.
Flexible GPU consumption models that lower entry barriers for startups and allow enterprises to scale efficiently.
Innovations in energy efficiency, cooling, and sustainability as data centers evolve to meet the demands of modern AI.
The impact on regional industries like healthcare, manufacturing, and logistics—and why this deployment is a repeatable playbook for future AI clusters.
Join us to hear how AI is being brought closer to the point of need, transforming the Midwest into a next-generation AI infrastructure hub.

Sep 4, 2025 • 27min
Schneider Electric's Steven Carlini on AI Workloads and the Future of Data Centers
Artificial intelligence is changing the data center industry faster than anyone anticipated. Every new wave of AI hardware pushes power, density, and cooling requirements to levels once thought impossible — and operators are scrambling to keep pace. In this episode of the Data Center Frontier Show, Schneider Electric’s Steven Carlini joins us to unpack what it really means to build infrastructure for the AI era.
Carlini explains how the conversation around density has shifted in just a year: “Last year, everyone was talking about the one-megawatt rack. Now densities are approaching 1.5 megawatts. It’s moving that fast, and the infrastructure has to keep up.” These rapid leaps in scale aren’t just about racks and GPUs. They represent a fundamental change in how data centers are designed, cooled, and powered.
The discussion dives into the new imperatives for AI-ready facilities:
Power planning that anticipates explosive growth in compute demand.
Liquid and hybrid cooling systems capable of handling extreme densities.
Modularity and prefabrication to shorten build times and adapt to shifting hardware generations.
Sustainability and responsible design that balance innovation with environmental impact.
Carlini emphasizes that operators can’t treat these as optional upgrades. Flexibility, efficiency, and sustainability are now prerequisites for competitiveness in the AI era.
Looking beyond hardware, Carlini highlights the diversity of AI workloads — from generative models to autonomous agents — that will drive future requirements. Each class of workload comes with different power and latency demands, and data center operators will need to build adaptable platforms to accommodate them.
At the Data Center Frontier Trends Summit last week, Carlini expanded further on these themes, offering insights into how the industry can harness AI “for good” — designing infrastructure that supports innovation while aligning with global sustainability goals. His message was clear: the choices operators make now will shape not just business outcomes, but the broader environmental and social impact of the AI revolution.
This episode offers listeners a rare inside look at the technical, operational, and strategic forces shaping tomorrow’s data centers. Whether it’s retrofitting legacy facilities, deploying modular edge sites, or planning new greenfield campuses, the challenge is the same: prepare for a future where compute density and power requirements continue to skyrocket.
If you want to understand how the world’s digital infrastructure is evolving to meet the demands of AI, this conversation with Steven Carlini is essential listening.

Sep 2, 2025 • 19min
Virtual Machines and Containers - Better Together
Are you facing challenges with Edge Computing in your organization? Join us as we explore how Penguin Solutions’ Stratus ztC Edge platform combined with Kubernetes management creates a powerful, low-maintenance Edge Computing solution.
Learn how to:
Leverage Kubernetes for scalable, resilient Edge Computing
Simplify edge management with automated tools
Implement robust security strategies
Integrate Kubernetes with legacy operations
Don't miss this opportunity to optimize your Edge Computing infrastructure with cutting-edge tools and practices.
This episode is ideal for IT leaders and engineers responsible for deploying and maintaining edge infrastructure.

Aug 21, 2025 • 27min
Johnson Controls Brings Cooling-as-a-Service to the Data Center
In this episode of the Data Center Frontier Show podcast, we sit down with Martin Renkis, Executive Director of Global Alliances for Sustainable Infrastructure at Johnson Controls, to explore how Data Center Cooling as a Service (DCCaaS) is changing the way operators think about risk, capital, and sustainability.
Johnson Controls has delivered guaranteed infrastructure services for over 40 years, shifting cooling from a CAPEX burden to an OPEX model. The company designs, builds, operates, and maintains systems under long-term agreements that transfer performance risk away from the operator.
Key to the model is AI-driven optimization through platforms like OpenBlue, paired with financial guarantees tied directly to customer-defined KPIs. A joint venture with Apollo Group (Ionic Blue) also provides flexible financing, freeing up capital for land or expansion.
With rising rack densities and unpredictable AI factory demands, Renkis says cooling-as-a-service offers “a financially guaranteed safety net” that adapts to change while advancing sustainability goals.
Listen now to learn how Johnson Controls is redefining cooling for the AI era.

Aug 19, 2025 • 31min
Rehlko CEO Brian Melka on Powering the AI Data Center Era
As AI workloads reshape the data center landscape, speed to power has overtaken sustainability as the top customer demand. On this episode of the Data Center Frontier Show, Editor-in-Chief Matt Vincent talks with Brian Melka, CEO of Rehlko (formerly Kohler Energy), about how the century-old power company is helping operators scale fast, stay reliable, and meet evolving energy challenges.
Melka shares how Rehlko is quadrupling production, expanding its in-house EPC capabilities, and rolling out modular power blocks through its Wilmott/Wiltech acquisition to accelerate deployments and system integration. The discussion also covers the balance between diesel reliability and greener alternatives like HVO fuel, hybrid power systems that combine batteries and engines, and strategies for managing noise, emissions, and footprint in urban sites.
From rooftop generator farms in Paris to 100MW hyperscale builds, Rehlko positions itself as a technology-agnostic partner for the AI era. Listen now to learn how the company is helping the data center industry move faster, smarter, and more sustainably.

Aug 12, 2025 • 20min
Podcast: Traka VP Craig Newell Discusses the Critical Role of Key and Asset Management in Data Center Operations
Smarter Security Starts with Key & Equipment Management
In data centers, physical access control is just as critical as cybersecurity. Intelligent key and equipment management solutions help safeguard infrastructure, reduce risk, and improve efficiency — all while supporting compliance.
Key Benefits:
Enhanced Security – Restrict access to authorized personnel only
Audit Trails – Track every access event for full accountability
Operational Efficiency – Eliminate manual tracking and delays
Risk Reduction – Prevent loss, misuse, or unauthorized access
System Integration – Connect with access, video, and visitor tools
Regulatory Support – Comply with ISO 27001, SOC 2, HIPAA & more
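As a toy sketch of the audit-trail concept above — not Traka's actual product or API — an append-only log of key-cabinet access events might look like this:

```python
from datetime import datetime, timezone

class KeyAuditLog:
    """Append-only record of key-cabinet access events (illustrative only)."""

    def __init__(self):
        self._events = []

    def record(self, user, key_id, action):
        # Every event is timestamped and kept forever: full accountability.
        self._events.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "key_id": key_id,
            "action": action,  # e.g. "checkout" or "return"
        })

    def outstanding(self):
        """Keys currently checked out but not yet returned."""
        held = {}
        for e in self._events:
            if e["action"] == "checkout":
                held[e["key_id"]] = e["user"]
            elif e["action"] == "return":
                held.pop(e["key_id"], None)
        return held

log = KeyAuditLog()
log.record("alice", "rack-42", "checkout")
log.record("alice", "rack-42", "return")
log.record("bob", "rack-07", "checkout")
# log.outstanding() now shows only bob's unreturned key
```

Production systems add tamper-evidence, integration with access-control hardware, and compliance reporting on top of this basic event model.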
A smart solution for a high-stakes environment — because in the data center world, every detail matters.

Aug 7, 2025 • 36min
Uptime Institute’s Jay Dietrich on Why Net Zero Isn’t Enough for Sustainable Data Centers
New DCF Podcast Episode Breaks Down the Real Work Behind Energy and Emissions Metrics
In the latest episode of the Data Center Frontier Podcast, Editor-in-Chief Matt Vincent sits down with Jay Dietrich, Research Director of Sustainability at Uptime Institute, to examine what real sustainability looks like inside the data center — and why popular narratives around net zero, offsets, and carbon neutrality often obscure more than they reveal.
Over the course of a 36-minute conversation, Dietrich walks listeners through Uptime’s expanding role in guiding data center operators toward measurable sustainability outcomes — not just certifications, but operational performance improvements at the facility level.

Jul 29, 2025 • 29min
LiquidStack CEO Joe Capes on GigaModular, Direct-to-Chip Cooling, and AI’s Thermal Future
In this episode of the Data Center Frontier Show, Editor-in-Chief Matt Vincent speaks with LiquidStack CEO Joe Capes about the company’s breakthrough GigaModular platform — the industry’s first scalable, modular Coolant Distribution Unit (CDU) purpose-built for direct-to-chip liquid cooling.
With rack densities accelerating beyond 120 kW and headed toward 600 kW, LiquidStack is targeting the real-world requirements of AI data centers while streamlining complexity and future-proofing thermal design.
“AI will keep pushing thermal output to new extremes,” Capes tells DCF. “Data centers need cooling systems that can be easily deployed, managed, and scaled to match heat rejection demands as they rise.”
LiquidStack's new GigaModular CDU, unveiled at the 2025 Datacloud Global Congress in Cannes, delivers up to 10 MW of scalable cooling capacity. It's designed to support single-phase direct-to-chip liquid cooling — a shift from the company’s earlier two-phase immersion roots — via a skidded modular design with a pay-as-you-grow approach. The platform’s flexibility enables deployments at N, N+1, or N+2 resiliency.
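To make the resiliency math concrete, here is a back-of-envelope sketch — with a hypothetical module capacity, not LiquidStack's published specs — of N+1 module counts and the coolant flow implied by a given rack heat load:

```python
import math

def cdu_modules_needed(load_kw, module_kw, redundancy=1):
    """Modules to carry the load, plus `redundancy` spares (N+1, N+2, ...)."""
    return math.ceil(load_kw / module_kw) + redundancy

def coolant_flow_lpm(heat_kw, delta_t_c=10.0):
    """Water flow (liters/min) needed to absorb `heat_kw` at a given
    temperature rise: Q = m_dot * c_p * dT, with c_p ~= 4.186 kJ/(kg*K)
    and roughly 1 kg per liter for water."""
    kg_per_s = heat_kw / (4.186 * delta_t_c)
    return kg_per_s * 60.0

# A 4 MW hall served by hypothetical 1 MW CDU modules at N+1 resiliency:
modules = cdu_modules_needed(4000, 1000, redundancy=1)  # 5 modules

# One 600 kW rack with a 10 C coolant temperature rise:
flow = coolant_flow_lpm(600)  # roughly 860 L/min
```

The pay-as-you-grow appeal falls out of the same arithmetic: as load grows, operators add modules rather than overbuilding capacity on day one.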
“We designed it to be the only CDU our customers will ever need,” Capes says.
Tune in to listen to the whole discussion, which goes on to explore why edge infrastructure and EV adoption will drive the next wave of sector innovation.

Jul 24, 2025 • 24min
Leveraging Heat as an Asset in Data Center Operations
Every second an AI-enabled data center operates, it produces massive amounts of heat.
Cooling and heat are often thought of separately, and for years that is how systems were built. In most facilities, waste heat is managed, properly expelled, and then forgotten. The data center may not need the heat, but the question arises: where else could this energy be put to use?
What if data centers, and the systems and institutions around them, viewed energy use differently? Rather than focusing on a data center's enormous power demands, we can recognize that data centers are part of a larger energy network, capable of giving back through the recovery and redistribution of thermal waste.
The pursuit of heat reuse drives technological advancement in data center cooling and energy management systems. Recovering waste heat isn't just a matter of technology and hardware; systems still need to run smoothly, and uptime remains critical. Meeting both demands can spur the development of more efficient and sustainable technologies that benefit not only data centers but the communities they operate within, creating a symbiotic relationship.
Join Trane® expert Esti Tierney as she explores critical considerations for enabling heat reuse as part of the circular economy. Esti will discuss high computing’s growing impact on heat production, the importance of a holistic view of thermal management, and why the need to collaborate and plan a heat redistribution strategy early with community stakeholders matters.
Heat reuse in data centers is a crucial aspect of modern energy management and sustainability practices, offering benefits that extend beyond the immediate operational efficiencies.
Designing for optimized energy efficiency and recovering waste heat isn't just about saving money. Reducing energy demand on the grid will be critical for everyone, today and into the future. As server densities increase and next-generation chips push power demands ever higher, waste heat is no longer a byproduct to manage; it's power waiting to be harnessed.
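To put rough numbers on the idea, here is an illustrative calculation — the capture fraction and heat-pump COP are assumptions, not figures from the episode — of how much useful heat a facility might redistribute:

```python
def recoverable_heat_mw(it_load_mw, capture_fraction=0.8):
    """Nearly all IT power ends up as heat; only the captured share is reusable."""
    return it_load_mw * capture_fraction

def heat_pump_output_mw(recovered_mw, cop=3.0):
    """A heat pump lifting low-grade waste heat to district-heating temperature
    delivers the recovered heat plus its own electrical input:
    Q_out = Q_in * COP / (COP - 1), since COP = Q_out / W_electrical."""
    return recovered_mw * cop / (cop - 1.0)

# A 10 MW IT load with 80% heat capture feeding a COP-3 heat pump:
recovered = recoverable_heat_mw(10.0)        # 8.0 MW of low-grade heat
delivered = heat_pump_output_mw(recovered)   # 12.0 MW at a useful temperature
```

Even with conservative assumptions, the recoverable energy is on the scale of a district-heating plant, which is why early coordination with community stakeholders matters.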


