How Accelerating AI Demand has Fundamentally Shifted Data Center Infrastructure

Mistakes in cooling and power designs can cost data centers millions. Shareef Alshinnawi, Global Key Accounts Director at nVent, explains how reference architectures can help data center designers stay ahead of constantly evolving IT.
Oct. 17, 2025

AI demand is accelerating, challenging data center operators to keep pace with growing capacity requirements. Meeting that demand is putting pressure on data center infrastructure providers to design and deliver solutions that can be installed quickly and scaled for future needs.

This challenge is complex. Not only is data demand constantly increasing, but the power and cooling needs of next-generation IT are shifting while the buildout is happening. Balancing those shifting IT needs against long-cycle data center projects is difficult: it can take years to build a data center, so infrastructure providers must plan for future IT while that IT is still being developed.

Modularity is Critical

Modular data center equipment allows operators to scale as demand increases. Data centers built only to today's requirements will quickly find themselves reworking and retrofitting as technology changes. Data center managers need to preserve the ability to add racks or equipment within existing building infrastructure, so they can absorb rapidly increasing demand that larger expansion projects cannot meet in time.

Cooling technology has the potential to be a major driver of modularity. Cooling systems for different kinds of equipment (for instance, high-density liquid cooling for high-performance chips and hybrid solutions for more standard IT) must be able to flex in and out to fit specific deployments. As cooling technology improves, data centers can also increase IT density, which means cabling and power distribution must be designed with a scalable architecture in mind.

Liquid Cooling is Front and Center

Liquid cooling is everywhere today. The technology has actually been around since the 1960s, but it was used more for energy optimization and reducing total cost of ownership than for its raw cooling power. Now, however, the power trajectories of current chips present challenges that only liquid cooling can solve. Liquid cooling is no longer optional. We see that in recent industry developments like Google’s Project Deschutes and the Open Compute Project’s open letter calling for a flexible, collaborative framework to develop AI infrastructure standards. Industry leaders see the need for more liquid cooling, and they are pushing the industry in that direction.

Liquid offers far greater heat transfer capacity than air, roughly 3,500 times higher, because its volumetric heat capacity (density times specific heat) is much greater. Air also cannot be delivered in a concentrated stream at the temperature and density needed to cool advanced chips. Additionally, liquid cooling can help data center operators reduce environmental impact compared to air cooling, improving power usage effectiveness while reducing the need for water-intensive evaporative chillers in many applications. Air cooling can still serve non-AI IT systems, including cloud computing and tertiary activities in a data center that require cooling, but overall liquid cooling will continue to grow across the industry.
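The 3,500x figure can be sanity-checked from the volumetric heat capacities of water and air. Below is a minimal sketch using textbook property values at roughly room temperature; the exact conditions behind the published figure are not stated in the article, so the property values here are illustrative assumptions, not nVent data.

```python
# Rough check of the ~3,500x claim: volumetric heat capacity
# (density x specific heat) of water vs. air at ~25 C.
# Property values are textbook approximations.

WATER_DENSITY = 997.0         # kg/m^3
WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)

AIR_DENSITY = 1.184           # kg/m^3 at sea level
AIR_SPECIFIC_HEAT = 1005.0    # J/(kg*K) at constant pressure

# Heat absorbed per cubic meter of coolant per degree of temperature rise
water_vol_heat = WATER_DENSITY * WATER_SPECIFIC_HEAT  # ~4.17e6 J/(m^3*K)
air_vol_heat = AIR_DENSITY * AIR_SPECIFIC_HEAT        # ~1.19e3 J/(m^3*K)

ratio = water_vol_heat / air_vol_heat
print(f"Water carries ~{ratio:,.0f}x more heat per unit volume per degree")
```

With these values the ratio lands near 3,500, which is why a modest flow of water through a cold plate can remove heat that would require an enormous volume of air.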

Building Reference Architectures

Reference architectures can help data center managers efficiently plan, design and commission data centers that meet the needs of today’s and tomorrow’s IT. Reference architectures bring data center designers tested, integrated templates for configuring systems. Designers can then customize these frameworks to meet their specific needs instead of starting from scratch. This provides flexibility for data center builders while offering a foundation to work from that takes into account all the considerations mentioned above.

Data center designers need to move fast, and power and cooling infrastructure providers have spent years working together to build these kinds of reference architectures. Mistakes in cooling and power designs can cost data centers millions, but they don't have to. Using reference architectures and working with technology innovators who have the expertise to avoid common pitfalls can help data center designers stay ahead of constantly evolving IT.

About the Author

Shareef Alshinnawi

Shareef Alshinnawi, Global Key Accounts Director at nVent, has more than two decades of experience in the data center industry working with large data center customers to deliver innovative, sustainable, scalable infrastructure solutions. Before joining nVent, he held a variety of roles at IBM, Lenovo and Iceotope, where he was responsible for thermal design and innovation, business strategy and emerging business partnerships. 
