Modernizing Legacy Data Centers for the AI Revolution with Schneider Electric's Steven Carlini
Key Highlights
- AI workloads are driving unprecedented increases in rack density, requiring significant upgrades to power and cooling systems in existing data centers.
- Liquid cooling is becoming the industry standard for managing high-density AI servers, though it involves complex architecture and infrastructure changes.
- Retrofitting legacy facilities involves transforming layouts, often moving equipment outside and creating open spaces to accommodate new high-density racks.
- Regional differences influence AI data center strategies, with the U.S., Europe, Middle East, and Asia investing heavily in large-scale, gigawatt-level projects.
- Future-proofing involves planning for even higher densities, flexible power sources, and modular designs to adapt to rapid hardware innovations and increasing AI demands.
As artificial intelligence workloads drive unprecedented compute density, the U.S. data center industry faces a formidable challenge: modernizing aging facilities that were never designed to support today’s high-density AI servers. In a recent Data Center Frontier podcast, Steven Carlini, Vice President of Innovation and Data Centers at Schneider Electric, shared his insights on how operators are confronting these transformative pressures.
“Many of these data centers were built with the expectation they would go through three, four, five IT refresh cycles,” Carlini explains. “Back then, growth in rack density was moderate. Facilities were designed for 10, 12 kilowatts per rack. Now with systems like Nvidia’s Blackwell, we’re seeing 132 kilowatts per rack, and each rack can weigh 5,000 pounds.”
The implications are seismic. Legacy racks, floor layouts, power distribution systems, and cooling infrastructure were simply not engineered for such extreme densities. “With densification, a lot of the power distribution, cooling systems, even the rack systems — the new servers don’t fit in those racks. You need more room behind the racks for power and cooling. Almost everything needs to be changed,” Carlini notes.
For operators, the first questions are inevitably about power availability. At 132 kilowatts per rack, even a single cluster can challenge the limits of older infrastructure. Many facilities are conducting rigorous evaluations to decide whether retrofitting is feasible or whether building new sites is the more practical solution. Carlini adds, “You may have transformers spaced every hundred yards, twenty of them. Now, one larger transformer can replace that footprint, and power distribution units feed busways that supply each accelerated compute rack. The scale and complexity are unlike anything we’ve seen before.”
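To put those figures in perspective, here is a minimal back-of-envelope sketch comparing the legacy design point Carlini cites (10 to 12 kilowatts per rack) with a 132-kilowatt accelerated compute rack. The cluster size and facility overhead factor are illustrative assumptions, not numbers from the interview.

```python
# Back-of-envelope rack power comparison, a rough sketch based on the figures
# quoted above (10-12 kW legacy racks vs. ~132 kW accelerated compute racks).
# Cluster size and overhead factor are illustrative assumptions only.

LEGACY_RACK_KW = 12        # upper end of the legacy design point quoted above
AI_RACK_KW = 132           # per-rack density cited for Blackwell-class systems
CLUSTER_RACKS = 16         # hypothetical cluster size for illustration
COOLING_OVERHEAD = 1.25    # assumed facility overhead (PUE-like factor)

it_load_kw = AI_RACK_KW * CLUSTER_RACKS
facility_load_kw = it_load_kw * COOLING_OVERHEAD

print(f"One AI rack draws about {AI_RACK_KW / LEGACY_RACK_KW:.0f}x a legacy rack")
print(f"{CLUSTER_RACKS}-rack cluster IT load: {it_load_kw / 1000:.2f} MW")
print(f"Estimated facility load incl. cooling: {facility_load_kw / 1000:.2f} MW")
```

Even a modest 16-rack cluster under these assumptions lands in the multi-megawatt range, which is why power availability is the first question operators ask.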
Safety considerations also intensify as densities climb. "At 132 kilowatts, maintenance is still feasible," Carlini says, "but as voltages rise, data centers are moving toward environments where human presence may be limited. You may have to power down equipment to work on it safely."
Schneider Electric has long championed modularity and prefabrication as strategies to accelerate modernization. "People think of modularity as shipping containers, but that’s no longer the default," Carlini explains. "We’re prefabricating IT rooms in our facilities, with racks, cooling, and power interconnects ready to deploy. Components are built in factories and assembled on-site with minimal effort. This system-level approach replaces the old ‘bid spec’ method of assembling individual components on-site, which is increasingly challenging at high densities."
Liquid Cooling Becoming the New Default
As AI workloads drive extreme rack densities, liquid cooling is emerging as the new standard for modern data centers, but it is far from a simple plug-and-play solution. “Liquid cooling is an architecture, not really a solution,” explains Carlini. “It has the heat rejection, which a lot of times is chillers, it has cooling distribution units, and lots of piping for the different loops. It’s not something you can buy off the shelf and deploy.”
The push toward liquid cooling is largely driven by accelerated AI servers, many of which ship with preconfigured input/output piping. For these machines, liquid cooling isn’t optional — it’s the only practical way to manage heat. “You’re forced to deal with it if you want to deploy the latest AI servers,” Carlini notes.
Yet liquid cooling does not entirely replace air. Even with direct-to-chip or immersion designs, approximately 20–30% of a data center’s load — including networking equipment and certain power supplies — still requires air cooling. Traditional air-cooled chillers, typically optimized for lower temperatures, may be incompatible with liquid systems, so higher-temperature chillers are often necessary to handle the heat efficiently.
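The split Carlini describes can be sketched as a simple allocation: if roughly 20 to 30% of the IT load still rejects heat to air, the remainder lands on the liquid loops. The total load and exact air share below are illustrative assumptions, not measured figures.

```python
# Rough sketch of how a facility's heat load might split between liquid and
# air cooling, using the 20-30% air-cooled share mentioned above. The total
# IT load and the exact air share are illustrative assumptions.

def cooling_split(it_load_kw: float, air_share: float = 0.25) -> dict:
    """Return estimated liquid- vs. air-cooled heat load in kW."""
    air_kw = it_load_kw * air_share      # networking gear, some power supplies
    liquid_kw = it_load_kw - air_kw      # direct-to-chip / immersion loops
    return {"liquid_kw": liquid_kw, "air_kw": air_kw}

# Example: a 2 MW IT load with 25% of the heat still rejected to air
print(cooling_split(2000, air_share=0.25))  # {'liquid_kw': 1500.0, 'air_kw': 500.0}
```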
Looking forward, the industry is working toward fully liquid-cooled IT systems. “In the future, IT companies and integrators are working on systems that will liquid cool the entire IT system, including power supplies and communication components,” Carlini says. “But we’re probably two to three years away from being able to eliminate air cooling entirely.”
Beyond cooling hardware, facility managers need to perform rigorous diagnostics to determine whether legacy sites can support AI-scale compute. Carlini emphasizes evaluating the incoming power supply and system inertia. “You want to look at the available power coming into the data center, what type it is, and what kind of inertia it has. High-powered AI workloads can create sub-cycle oscillations, and the site has to be able to handle that without voltage sag or brownouts.”
For operators using renewable or distributed energy feeds, stabilization mechanisms such as grid batteries may be required to smooth power delivery. Schneider Electric provides analysis services to ensure that these unique workload characteristics are compatible with the existing power infrastructure, helping operators avoid unexpected disruptions when scaling up for AI.
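A hypothetical headroom check illustrates the idea: the question is whether the grid feed, plus any battery buffering, can absorb the peak of a fast load swing without voltage sag. The capacity, load, and swing figures below are invented for illustration and are not Schneider Electric sizing guidance.

```python
# Minimal sketch of a headroom check for oscillating AI workloads, following
# the idea above that fast load swings need either grid inertia or battery
# buffering. All capacity and swing figures are hypothetical assumptions.

def power_swing_ok(site_capacity_kw: float,
                   steady_load_kw: float,
                   peak_swing_kw: float,
                   battery_buffer_kw: float = 0.0) -> bool:
    """True if the peak transient load stays within grid capacity plus battery buffer."""
    peak_load_kw = steady_load_kw + peak_swing_kw
    return peak_load_kw <= site_capacity_kw + battery_buffer_kw

# Example: a 3 MW feed, 2.5 MW steady AI load, 1 MW transient swing
print(power_swing_ok(3000, 2500, 1000))                         # False: risk of sag
print(power_swing_ok(3000, 2500, 1000, battery_buffer_kw=600))  # True with buffering
```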
Retrofit Strategies for AI Differ by Region and Building Design
While reference designs for AI data centers are invaluable, they are primarily tailored for greenfield deployments rather than retrofitting legacy facilities. “The reference designs assume that you have the footprint,” Carlini explains. “We don’t have different designs for different configurations of buildings.” For existing facilities, retrofitting can require transformative changes, particularly when high-density racks and liquid cooling are involved.
“Data centers are becoming upside down,” he says. “The IT room floor space is smaller, and the footprint is outside — chillers, generators, and medium-voltage switchgear are all outside now. Some buildings, originally designed to house everything internally, have to be adapted. Companies are literally blowing out walls, installing large doors, and creating open areas to place equipment that used to be inside.”
Regional attitudes toward AI adoption further shape these retrofit strategies. Carlini emphasizes that the AI data center race is as much national as corporate. “The U.S. moved first and is the most aggressive. Some sites are gigawatt scale, with hundreds of billions in new construction. The government is paving the way with grid support and streamlined approvals.”
Europe is accelerating its investments to catch up. The European Union has committed $30 billion for gigawatt-scale data centers, each designed to house roughly 100,000 GPUs. Meanwhile, the Middle East is planning ambitious projects, including a five-gigawatt AI campus in Abu Dhabi. Asia, too, is expanding capacity, with NTT planning a gigawatt-scale site in Japan and China aggressively scaling operations as GPU availability allows. “It started in the U.S., and the U.S. isn’t slowing down,” Carlini notes. “Europe and Asia are in the race, building out lots of capacity.”
Future-Proofing for AI
The pace of AI innovation demands forward-thinking strategies. Carlini points to roadmaps from Nvidia and AMD projecting densities of 1–1.5 megawatts per rack, with plans extending even further. "It’s like the 1980s Cray supercomputers all over again, but compressed," he says. "Tens of thousands of GPUs running in parallel — you have to plan for systems that haven’t even been invented yet."
Power availability is paramount. “The number one concern is making sure you’re in the queue for more power,” Carlini explains. Operators may negotiate with the grid or pursue alternative sources, such as natural gas turbines or small modular reactors (SMRs). Flexibility is critical, as new facilities will differ from traditional warehouse-style builds. IT equipment will occupy smaller indoor spaces, while expansive fields of chillers, generators, and heat rejection systems dominate the external footprint.
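One way to frame that planning exercise is a simple projection of rack-density growth against contracted grid capacity, flagging the year when supplemental power would be needed. The growth rate, rack count, and capacity figures below are illustrative assumptions rather than vendor roadmap data.

```python
# A simple planning sketch: project annual rack-density growth against
# contracted grid capacity to see when supplemental power (gas turbines,
# SMRs, on-site generation) would be needed. Growth rate, rack count, and
# capacity figures are illustrative assumptions, not roadmap data.

def first_shortfall_year(start_rack_kw: float,
                         annual_growth: float,
                         racks: int,
                         grid_capacity_mw: float,
                         years: int = 5):
    """Return the first year offset where projected IT load exceeds grid capacity."""
    rack_kw = start_rack_kw
    for year in range(years + 1):
        load_mw = rack_kw * racks / 1000
        if load_mw > grid_capacity_mw:
            return year
        rack_kw *= 1 + annual_growth
    return None  # capacity holds over the planning horizon

# Example: 132 kW racks today, 30% yearly density growth, 200 racks, 40 MW feed
print(first_shortfall_year(132, 0.30, 200, 40))  # year offset when the feed is exceeded
```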
Carlini also highlights the relentless pace of hardware innovation. “Nvidia releases new GPU generations every year. It’s not like the old days of Intel Xeons, where Moore’s Law helped constrain power growth. Today, every new evolution requires more power to operate.” This reality underscores the importance of designing both new and retrofitted facilities to accommodate ever-escalating energy and cooling requirements.
Looking Ahead: AI Workloads and Data Center Trends
As AI hardware continues to evolve at a breathtaking pace, the conversation naturally circles back to power — the lifeblood of modern compute infrastructure. Carlini points to the growing scale of accelerated compute racks: “Last year, everyone was talking about the one-megawatt rack. Now densities are approaching 1.5 megawatts. It’s moving that fast, and the infrastructure has to keep up.”
These shifts underscore a broader reality: today’s data centers are only the beginning. “We didn’t really talk about the different types of AI workloads — generative AI, autonomous agents — that are being developed now, which will drive even more capacity,” Carlini notes. “It’s going to be interesting to see what AI brings and how the data center architecture will support it all.”
The U.S. and global data center ecosystem is bracing for a series of unprecedented challenges and innovations. Operators must balance retrofitting legacy facilities with building new greenfield sites, all while keeping pace with annual GPU releases and increasing power and cooling requirements. The imperative is clear: flexibility, modularity, liquid cooling, and proactive power planning are no longer optional — they are prerequisites for AI readiness.
At last week’s Data Center Frontier Trends Summit, Carlini went on to provide even more insights on building AI infrastructure for good, emphasizing not only efficiency and performance but also sustainability and responsible design practices.
The podcast conversation with Steven Carlini offers a rare glimpse into the technical, operational, and strategic considerations that will define AI-ready data centers in the coming years. From modular retrofits and prefabricated IT rooms to hybrid liquid and air cooling systems, operators face a complex landscape — but also a world of opportunity for those willing to innovate and plan for a future where compute density and power demands will continue to skyrocket.
Recent DCF Show Podcast Episodes
- Flexential CEO Chris Downie on the Data Center Industry's AI, Cloud Paradigm Shifts
- ark data centers CEO Brett Lindsey Talks Colocation Rebranding for Edge, AI Initiatives
- CyrusOne CEO Eric Schwartz Talks AI Data Center Financing, Sustainability
- Prometheus Hyperscale Pushes Data Center Horizons to 1 GW
- Quantum Corridor CEO Tom Dakich On U.S. Midwest Data Center Horizons
Did you like this episode? Be sure to subscribe to the Data Center Frontier show at Podbean to receive future episodes on your app.
At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT5.
Keep pace with the fast-moving world of data centers and cloud computing by connecting with Data Center Frontier on LinkedIn, following us on X/Twitter and Facebook, as well as on BlueSky, and signing up for our weekly newsletters using the form below.
About the Author
Matt Vincent
A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.