Engineering a Cool Revolution: Shumate’s Hybrid-Dry Adiabatic Design Tackles AI-Era Density
As artificial intelligence surges across the digital infrastructure landscape, its impacts are increasingly physical. Higher densities, hotter chips, and exponentially rising energy demands are pressuring data center operators to rethink the fundamentals, and especially cooling.
That’s where Shumate Engineering steps in, with a patent-pending system called Hybrid Dry Adiabatic Cooling (HDAC) that reimagines how chilled water loops are deployed in high-density environments.
In this episode of The Data Center Frontier Show, Shumate founder Daren Shumate and Director of Mission Critical Services Steven Spinazzola detailed the journey behind HDAC, from conceptual spark to real-world validation, and laid out why this system could become a cornerstone for sustainable AI infrastructure.
“Shumate Engineering is really my project to design the kind of firm I always wanted to work for: where engineers take responsibility early and are empowered to innovate,” said Shumate. “HDAC was born from that mindset.”
Two Temperatures, One Loop: Rethinking the Cooling Stack
The challenge HDAC aims to solve is simple to state but hard to execute: how do you cool legacy air-cooled equipment and next-gen liquid-cooled racks simultaneously and efficiently?
Shumate’s answer is a closed-loop system with two distinct temperature taps:
- 68°F water for traditional air-cooled systems.
- 90°F water for direct-to-chip liquid cooling.
Both flows draw from a single loop fed by a hybrid adiabatic cooler, a dry cooler with “trim” evaporative functionality when conditions demand it. During cooler months or off-peak hours, the system economizes fully; during warmer conditions, it modulates to maintain optimal output.
“This isn’t magic; it’s just applying known products in a smarter sequence,” said Spinazzola. “One loop, two outputs, no waste.”
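For readers who think in code, the sequencing Spinazzola describes can be pictured as a simple mode selector: dry operation when ambient conditions allow, evaporative "trim" when they don't, and mechanical assist only at the extremes. The sketch below is our illustration of that concept, not Shumate's patent-pending control logic; the thresholds and function name are assumptions drawn from figures quoted in this article.

```python
# Illustrative mode selector for a hybrid dry/adiabatic cooler feeding one
# loop with two taps (68°F for air-cooled gear, 90°F for direct-to-chip).
# A sketch of the concept only -- thresholds are assumptions taken from
# figures cited in this article, not Shumate's actual controls.

def select_cooling_mode(dry_bulb_f: float, wet_bulb_f: float) -> str:
    if dry_bulb_f <= 60.0:
        # Cool ambient: dry coils alone reject the heat -- full economizer.
        return "dry economizer (no water, no compression)"
    if wet_bulb_f <= 83.0:
        # Warmer ambient: evaporative 'trim' keeps the 90°F tap on target
        # while still avoiding mechanical compression.
        return "adiabatic trim (modest water use, no compression)"
    # Extreme conditions: conventional chillers back up the 68°F tap.
    return "mechanical trim (chiller assist)"

for db, wb in [(55, 50), (82, 75), (95, 85)]:
    print(f"{db}°F dry bulb / {wb}°F wet bulb -> {select_cooling_mode(db, wb)}")
```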
The system is fully modular, relies on conventional chillers and pumps, and is compatible with heat exchangers for immersion or CDU-style deployment. And according to Spinazzola, “we can make 90°F water just about anywhere” as long as the local wet bulb temperature stays below 83°F, a threshold met in most of North America.
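That 83°F figure implies an evaporative "approach" of roughly 7°F, meaning the cooler can pull loop water to within about seven degrees of ambient wet bulb. A quick sanity check, with the approach value as our assumption rather than a published spec:

```python
# Hypothetical approach-temperature check behind the 90°F claim.
APPROACH_F = 7.0         # assumed wet-bulb approach; not a published spec
WET_BULB_LIMIT_F = 83.0  # threshold cited by Spinazzola

print(WET_BULB_LIMIT_F + APPROACH_F)  # 90.0 -> the supply target is reachable
```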
Tested, Proven, and Ready for Scale
At Baltimore Aircoil Company’s test facility, HDAC proved it could hold:
- 68°F supply temp at 60°F ambient dry bulb.
- 90°F supply temp at 82°F ambient dry bulb and 81°F wet bulb.
These results validate HDAC’s ability to support real-time adaptive cooling at scale and underscore its dramatic resource reductions (a worked example follows the list):
- ~50% less power than traditional air-cooled chillers.
- 90–92% less water than conventional evaporative systems.
- Water Usage Effectiveness (WUE) of 0.07 liters per kilowatt-hour, versus ~1.6 for cooling towers.
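To make the WUE gap concrete, here is a rough annual water tally for a hypothetical 24 MW IT load running year-round; the load and utilization figures are our assumptions, and WUE is liters of water per kilowatt-hour of IT energy.

```python
# Rough annual water use implied by the WUE figures above (illustrative).
IT_LOAD_KW = 24_000                 # hypothetical IT load
IT_ENERGY_KWH = IT_LOAD_KW * 8_760  # kWh over a full year of operation

for label, wue_l_per_kwh in [("cooling towers", 1.6), ("HDAC", 0.07)]:
    liters = wue_l_per_kwh * IT_ENERGY_KWH
    print(f"{label}: ~{liters / 1e6:,.0f} million liters per year")
# cooling towers: ~336 million liters; HDAC: ~15 million liters
```

On those numbers the savings work out to roughly 95 percent; the 90–92 percent headline range presumably reflects different baseline assumptions.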
“At the end of the day, there’s only two ways to dissipate energy from a data center: you either have to blow air over a coil or evaporate water. That’s it: there is no door number three in data center cooling,” said Spinazzola. “With our system, you’re in economizer mode for much of the year, cooling the data center without mechanical compression.”
While emerging technologies like full immersion cooling offer new ways to absorb and move heat at the server level, they don’t fundamentally change the physics of how that heat is ultimately rejected from the building. Whether using a dielectric fluid, cold plates, or traditional air handlers, all cooling strategies eventually rely on either blowing air over coils or evaporating water to release heat into the environment. That’s the premise behind HDAC’s closed-loop design: leverage both mechanisms in a controlled, efficient balance, without relying on compression year-round.
The Infrastructure Multiplier: PUE and Collateral Benefit
Beyond thermodynamics, HDAC unlocks new economics for data center development. With an annualized PUE as low as 1.05, depending on workload mix, HDAC reduces connected power needs significantly. That translates to more IT revenue from a fixed utility allocation.
“We call this the collateral benefit,” Shumate explained. “If your site has 36 megawatts of utility capacity, a traditional system might give you 24 megawatts of IT load. With HDAC, you can push that to 28—without major additional investment.”
That’s a difference of 3.5 to 4 megawatts of monetizable load, a meaningful margin in today’s highly competitive environment.
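The arithmetic is simple: sellable IT load is roughly the utility feed divided by the PUE the plant must be sized for. The PUE values below are back-calculated from the 24 MW and 28 MW figures Shumate quotes (design-day sizing rather than the 1.05 annualized number), so treat them as illustrative.

```python
# Collateral-benefit math: IT capacity ~= utility capacity / design PUE.
# PUE values back-calculated from the quoted 24 MW and 28 MW figures.
UTILITY_MW = 36.0

for label, design_pue in [("traditional cooling", 1.50), ("HDAC", 1.286)]:
    it_mw = UTILITY_MW / design_pue
    print(f"{label} (design PUE {design_pue}): ~{it_mw:.0f} MW of IT load")
# traditional cooling: ~24 MW; HDAC: ~28 MW -> ~4 MW of added capacity
```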
Shumate also pushed back against industry claims of “waterless” cooling from air-cooled chiller systems.
“If you’re not using water onsite, your utility is,” he said. “Whether it’s nuclear, coal, or gas, that electricity required water to produce. We’re honest about that, and HDAC still comes out ahead.”
Ready for Greenfield and Retrofit
HDAC’s design is flexible enough to support new builds or retrofits, with greatest efficiency realized when chilled water infrastructure is already in place. For greenfield sites, it offers a chance to sidestep conventional chiller-cooling tower stacks entirely.
“This system is ideal for any environment where you’re looking to scale AI and avoid buying 300 extra megawatts of power,” said Shumate.
With patent approval pending and multiple large-scale designs already underway, Shumate Engineering is optimistic that the tipping point for adoption may not be far off.
“The data center industry loves innovation...as long as someone else does it first,” Spinazzola joked. “We’ve done the testing. Now we’re finding the first movers.”
Recent DCF Show Podcast Episodes
- Flexential CEO Chris Downie on the Data Center Industry's AI, Cloud Paradigm Shifts
- ark data centers CEO Brett Lindsey Talks Colocation Rebranding for Edge, AI Initiatives
- CyrusOne CEO Eric Schwartz Talks AI Data Center Financing, Sustainability
- Prometheus Hyperscale Pushes Data Center Horizons to 1 GW
- Quantum Corridor CEO Tom Dakich On U.S. Midwest Data Center Horizons
About the Author
Matt Vincent
A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.