Executive Roundtable: Standardization vs. Flexibility in Data Center Designs for the AI Era

As AI reshapes infrastructure demands, data center leaders are rethinking the balance between standard designs and custom builds. In our latest Executive Roundtable, experts reveal where standardization still works, and where flexibility is now essential.
June 23, 2025
6 min read

For the third installment of our Executive Roundtable for the Second Quarter of 2025, we asked our panel of experienced infrastructure leaders to tackle a key tension shaping today’s build strategies: the balance between standardization and customization. As the industry shifts from cloud-era scaling to AI-accelerated infrastructure, that old balancing act has taken on new urgency.

We asked our experts: How are you striking the right equilibrium in 2025? With skyrocketing demand for capacity on one hand, and workload-specific requirements for GPUs, liquid cooling, and hybrid environments on the other, how do you deliver at speed without sacrificing client-specific nuance?

Our panel shared how their organizations are rethinking design templates, empowering regional flexibility, and determining where one-size-fits-all still delivers value, and where it no longer can.

What emerged is a picture of an industry in evolution: moving toward smarter standardization frameworks that still leave room for differentiation where it counts.

The seasoned data center industry leaders of our Executive Roundtable for the Second Quarter of 2025 include:

  • Jason Waxman, CEO, CoolIT Systems
  • Phillip Marangella, Chief Marketing and Product Officer, EdgeConneX
  • Nicole Dierksheide, Global Category Director for Large Power, Rehlko
  • Carsten Baumann, Director, Strategic Initiatives and Solution Architect, Schneider Electric

And now, on to our third Executive Roundtable question for Q2 of 2025.

Data Center Frontier: How are you balancing standardization with client-specific customization in 2025? Given the dual pressures of rapid scaling and the unique demands of AI and hybrid workloads, how is your team navigating the trade-offs between delivering standardized infrastructure and meeting bespoke customer needs? Where do you see flexibility delivering the most value today?

Jason Waxman, CEO, CoolIT Systems: Balancing standardization and customization is no longer an either-or decision. Hyperscale data center design requires customization for efficiency: each data center is designed around the customer value and applications to be delivered, and the infrastructure is optimized to run those workloads as efficiently as possible.

What this means for us is that we need to build products that use common off-the-shelf components but are designed around each customer's unique requirements. To this end, we've developed a portfolio of modular, interoperable components that we can use to get a product to market quickly.

Take the development of a cooling loop for a customer’s new server SKU. We can move quickly into development because we’ve already worked with silicon manufacturers to develop coldplates that are fully optimized and validated for their next-generation processors.

We leverage existing components where possible to streamline supplier qualification and testing cycles. Well-defined test and validation processes provide clarity. An established supplier base and high-volume manufacturing capacity mean we can scale production.

 

Phillip Marangella, EdgeConneX: The speed and scale required to build out AI capacity, combined with the lack of standards and widely varying customer requirements for AI deployments, certainly present a challenge for data center operators.

With Ingenuity, we think of the data center as a backplane. That design gives our customers maximum flexibility to support any application density, from lower-density cloud to high-density AI/HPC deployments, and any cooling preference they want: air, liquid, hybrid, immersion, or any other new technology.

 

Nicole Dierksheide, Rehlko: We’ve found that the best outcomes come from building long-term partnerships with our customers. When we have a continuous relationship, we gain valuable insight into their evolving needs. For example, we work with one major customer who initially required a specialized solution. Over time, as we've delivered between 70 and 105 units per year for them, that engineering special has essentially become a repeatable, standardized build.

The initial customization goes through its testing and approvals, but once it’s validated, it transitions into a consistent, scalable offering. It’s not truly off-the-shelf, but it’s no longer a one-off either.

This hybrid approach, where a unique customer need becomes a standardized product for them, strikes the right balance. It allows us to move faster while still addressing site-specific or application-specific requirements. That’s especially valuable in the AI space, where power density is increasing rapidly. A standard rack used to draw around 30 kW; now, AI racks can demand 70 kW or more. That significantly shifts backup power requirements.

For instance, if a customer previously needed 50 three-megawatt generators to support their data center, they might now need 60 to support the same footprint with AI-driven workloads. Or they might consider moving up to four-megawatt gensets to keep pace. Fortunately, we offer that scalability, and our customers can adapt their infrastructure accordingly.
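
For readers who want to sanity-check those figures, the sketch below works through the generator arithmetic in Dierksheide's example. The 3 MW and 4 MW genset ratings and the 50-to-60-unit shift are her numbers; the `gensets_needed` helper and its simple round-up sizing rule are illustrative assumptions, not a Rehlko sizing method (real designs add redundancy, derating, and step-load margin).

```python
# A minimal sketch of the generator-sizing arithmetic in Dierksheide's example.
# The 3 MW / 4 MW ratings and the 50-to-60-unit shift come from her numbers;
# the round-up sizing rule is an illustrative assumption only.
import math

def gensets_needed(total_load_mw: float, genset_rating_mw: float) -> int:
    """Round up to the whole number of gensets required to cover the load."""
    return math.ceil(total_load_mw / genset_rating_mw)

baseline_mw = 50 * 3.0     # pre-AI footprint: 50 x 3 MW gensets = 150 MW
ai_adjusted_mw = 60 * 3.0  # same footprint with AI racks, per her example = 180 MW

print(gensets_needed(baseline_mw, 3.0))     # 50 three-megawatt units before densification
print(gensets_needed(ai_adjusted_mw, 3.0))  # 60 three-megawatt units for AI workloads
print(gensets_needed(ai_adjusted_mw, 4.0))  # 45 four-megawatt units cover the same load
```

Both paths land at roughly 180 MW of generation for the AI-adjusted footprint, which is why stepping up to larger gensets is an alternative to simply adding more units.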

Flexibility delivers the most value when it's paired with repeatability. Once we’ve aligned on a design and gone through that initial "first-of-a-kind" phase, the solution becomes familiar and efficient to produce. This reduces lead times, increases consistency, and ultimately helps our customers scale more quickly without sacrificing reliability.

 

Carsten Baumann, Schneider Electric: Customization is important. For example, switchgear can be configured with different breakers and bus architectures to match project needs. This is known as Configure to Order (CTO), and our manufacturing processes support this approach.

Engineer to Order (ETO) is more complex, requires more time, and typically demands a premium investment. UPS technology also benefits from modular architectures that provide initial capacity and can scale for future demand, provided the supporting infrastructure allows it.

Reference designs help customers evaluate needs quickly and accurately, though they may not be built exactly to those specs. Standardizing electrical system designs could benefit the industry, but geographical and code requirements necessitate customization. Cooling systems also need customization based on location. Vendors aim to balance flexibility with availability and capacity impacts.

The short answer is that we offer both, standardized and bespoke. Typically, we observe that the larger a single system becomes, the more unique it tends to be.

As mentioned earlier, supply chain agreements, and the often bespoke systems associated with them, help mitigate that complexity and cost. Yes, a system may be unique, but we may need 500 of them; it may be unique to the industry, though not to that customer.

Prefabrication is a fantastic example: it allows us to build infrastructure more rapidly and at higher quality.

 

NEXT: Defining Real Innovation in the Data Center of 2025

 

About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.
