Data Center Intelligence: Mukul Girotra, SVP and GM, Ecolab Global High-Tech Division
The Data Center Frontier Executive Roundtable features insights from executives with extensive experience in the data center industry.
Here’s a look at the Q3 2025 insights from Mukul Girotra, Senior Vice President and General Manager, Global High-Tech Division, Ecolab.
Mukul Girotra is Senior Vice President and General Manager of Ecolab's Global High-Tech Division. He focuses on sustainable, high-performance solutions that drive innovation in data centers, semiconductor manufacturing and industrial cooling. A purpose-driven P&L leader with GE and Cummins experience, he champions operational excellence and growth through the hard work of his award-winning global teams.
Data Center Frontier: AI workloads are now dominating new data center builds. What are the most critical thermal or water-related risks operators must solve at scale, and how are your solutions evolving to meet that challenge?
Mukul Girotra, Ecolab: The shift to AI workloads represents a fundamental inflection point for data center thermal management, creating challenges that traditional air cooling simply cannot address at scale.
The reality is that AI chips are generating heat densities of 40-100 kW per rack, often 3-5x higher than traditional enterprise workloads. This creates localized hot spots that overwhelm conventional facility cooling systems and can lead to catastrophic thermal failures if not properly managed.
Here’s how we’re addressing that cooling challenge at Ecolab.
We’ve developed site-to-chip cooling management solutions that include programs for cooling water, adiabatic and direct-to-chip systems, with coolant health insights delivered via digital control platforms like Ecolab® Water Quality IQ™.
Specifically, our 3D TRASAR™ Technology for Direct-to-Chip Liquid Cooling provides real-time monitoring of system parameters including temperature, pH, flow rates, and glycol concentration, which act as leading indicators of coolant health. These insights help operators anticipate and address degradation risk before it impacts performance and uptime.
This kind of visibility is key to managing AI workloads at scale.
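The leading-indicator approach described above can be sketched as simple band checks on coolant telemetry. This is a minimal illustration, not Ecolab's 3D TRASAR implementation; the parameter names and operating bands below are invented for the example.

```python
# Hypothetical healthy operating bands for a direct-to-chip coolant loop.
# Real limits depend on the fluid chemistry and the system's design.
LIMITS = {
    "temp_c": (20.0, 45.0),        # supply temperature, deg C
    "ph": (7.5, 9.5),              # corrosion-inhibitor window
    "flow_lpm": (8.0, 12.0),       # loop flow rate, L/min
    "glycol_pct": (20.0, 30.0),    # glycol concentration, %
}

def coolant_alerts(sample: dict) -> list[str]:
    """Return the parameters drifting outside their healthy band."""
    alerts = []
    for name, (lo, hi) in LIMITS.items():
        value = sample[name]
        if not lo <= value <= hi:
            alerts.append(f"{name}={value} outside [{lo}, {hi}]")
    return alerts

# A reading that is drifting: too hot, and glycol is depleting.
reading = {"temp_c": 47.2, "ph": 8.1, "flow_lpm": 9.5, "glycol_pct": 18.0}
for alert in coolant_alerts(reading):
    print("LEADING INDICATOR:", alert)
```

The point of the sketch is that out-of-band drift is flagged while the loop is still operating, before degradation becomes downtime.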
Ultimately, our integrated cooling programs are designed to optimize PUE (Power Usage Effectiveness) and WUE (Water Usage Effectiveness), helping data centers run cooler, cleaner and more sustainably. Energy and water efficiency remain two of the industry's biggest challenges.
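For readers unfamiliar with the two metrics: PUE is total facility energy divided by IT equipment energy (an ideal facility scores 1.0), and WUE is site water consumption in liters divided by IT energy in kWh. The figures below are illustrative, not from any Ecolab deployment.

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (ideal = 1.0)."""
    return total_facility_kwh / it_kwh

def wue(site_water_liters: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: site water use (L) / IT energy (kWh)."""
    return site_water_liters / it_kwh

# Illustrative year: 12 GWh of IT load inside 15.6 GWh of total facility
# energy, with 22 million liters of water consumed over the same period.
print(pue(15_600_000, 12_000_000))   # 1.3
print(wue(22_000_000, 12_000_000))   # ~1.83 L/kWh
```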
Data Center Frontier: How are you helping data center operators strike a balance between capital expenditure and long-term operational efficiency in thermal and water systems, especially amid AI build urgency?
Mukul Girotra, Ecolab: First, we collaborate very early in the site selection process to help our customers evaluate which cooling topology is truly most sustainable for a given region, with the least holistic impact on watersheds and power grids. This includes evaluating climate, geography, the source of power generation, the available water sources and their quality, and local water stress.
Taking this approach, we can help evaluate which cooling topologies strike the best balance between WUE and PUE, which in turn allows as much power as possible to be allocated to compute (increasing the amount of compute per grid interconnection). In aggregate, this can have considerable positive impacts on power grids and watersheds.
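The WUE/PUE balance described here can be illustrated with a toy scoring model. Evaporative cooling typically lowers PUE but raises WUE, while dry coolers do the reverse; every number and weight below is invented for the example and is not an Ecolab methodology.

```python
# Hypothetical candidate topologies with illustrative metrics.
CANDIDATES = {
    "evaporative": {"pue": 1.15, "wue": 1.8},   # WUE in L/kWh
    "dry_cooler":  {"pue": 1.35, "wue": 0.1},
    "hybrid":      {"pue": 1.22, "wue": 0.9},
}

def topology_score(metrics: dict, water_stress: float) -> float:
    """Lower is better; water_stress in [0, 1] scales the WUE penalty."""
    return metrics["pue"] + water_stress * metrics["wue"]

def best_topology(water_stress: float) -> str:
    """Pick the lowest-scoring topology for a site's water stress level."""
    return min(CANDIDATES, key=lambda k: topology_score(CANDIDATES[k], water_stress))

print(best_topology(0.05))  # water-abundant site
print(best_topology(0.9))   # water-stressed site
```

Even this toy model shows why the answer is site-specific: the same topology that wins in a water-abundant region loses in a water-stressed one.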
When a decision is made to use water as part of a data center's cooling topology, our design consultation can also identify continuous improvement projects, within the customer's ROI/IRR targets, that reduce total cost of operation.
Once a data center is up and running, we deploy our technology and digital innovation to help our strategic partners keep operations aligned with design efficiencies and capture further efficiency opportunities.
Data Center Frontier: How are your customers rethinking the integration of thermal, water, and power systems as AI infrastructure scales, and what role is your company playing in breaking down legacy silos between them?
Mukul Girotra, Ecolab: The AI infrastructure revolution is forcing a complete rethinking of how thermal, water, and power systems interact. It’s breaking down decades of siloed engineering approaches that are now proving inadequate given the increased rack demands.
Traditionally, data centers were designed with separate teams managing power, cooling, and IT equipment. AI scale requires these systems to operate holistically, with real-time coordination between power management, thermal control, and workload orchestration.
Here’s how Ecolab is addressing integration:
We extend our digitally enabled approach from site to chip, spanning cooling water, direct-to-chip systems, and adiabatic units, driving cleanliness, performance, and optimized water and energy use across all layers of cooling infrastructure.
Through collaborations like the one with Digital Realty, our AI-driven water conservation solution is expected to drive up to 15% water savings, significantly reducing demand on local water systems.
Leveraging the ECOLAB3D™ platform, we provide proactive analytics and real-time data to optimize water and power use at the asset, site and enterprise levels, creating real operational efficiency and turning cooling management into a strategic advantage.
We provide thermal, hydro and chemistry expertise that considers power constraints, IT equipment requirements, and day-to-day facility operational realities. This approach prevents the sub-optimization that can occur when these systems are designed in isolation.
Crucially, we view cooling through the lens of the water-energy nexus: choices at the rack or chiller level affect both a data center's PUE and its WUE, so our recommendations balance energy, water and lifecycle considerations to deliver reliable performance and operational efficiency.
The companies that will succeed in AI infrastructure deployment are those that abandon legacy siloed approaches and embrace integrated thermal management as a core competitive capability.
About the Author
Matt Vincent
A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.