Designing Data Centers for Real-World Performance - Optimizing for AI Workloads All Year Long
The data center industry is under pressure on multiple fronts.
The industry is facing a critical inflection point: AI workloads are not just increasing power density; they are fundamentally reshaping the operational landscape and pushing facilities to their limits. Mounting grid constraints are stalling vital projects in strategic markets, threatening growth and reliability. Meanwhile, cooling strategies must evolve at breakneck speed to keep pace with escalating demands. All of this is happening as operators urgently retrofit existing data centers, many of which were never designed for today's volatile, high-density compute.
The pressure to adapt is immediate and unrelenting; failing to act risks jeopardizing performance, efficiency, and long-term viability.
However, these pressures are exposing a fundamental issue: many infrastructure decisions are still based on incomplete insight into how data centers actually perform in operation.
Moving Beyond Traditional Design Methods
For decades, data center design has relied on a combination of peak-load calculations and single-point-in-time computational fluid dynamics (CFD) analysis of data halls to validate performance and ensure resilience.
Both remain essential, but as data centers become more advanced, a detailed whole-facility modeling approach is needed alongside them.
Peak-load calculations provide critical guardrails, ensuring systems can withstand worst-case scenarios and maintain uptime under extreme conditions. CFD, meanwhile, offers detailed point-in-time snapshots of airflow behavior within data halls, helping engineers understand temperature distribution, validate containment strategies, and identify potential hotspots before deployment.
Together, these approaches answer a fundamental question:
“Will this design work under defined conditions?”
But those defined conditions represent only a small fraction of how data centers actually operate.
In reality, facilities run under constantly changing conditions. IT workloads fluctuate—often dramatically. AI clusters, in particular, can experience swings in power demand of 40–50% in very short periods, producing rapid thermal surges that cooling systems must respond to in real time. External temperatures shift hourly and seasonally, while infrastructure systems continuously operate under partial-load conditions.
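To make the thermal-surge problem concrete, the following is a minimal sketch (not from the source article) of a first-order thermal model responding to a step change in rack power. The power levels, thermal capacitance, and resistance values are invented for illustration; real facilities require far more detailed models.

```python
# Illustrative only: first-order thermal response of a rack to an AI load swing.
# All parameters (power levels, thermal capacitance, resistance) are assumed
# values for demonstration, not measurements from any real facility.

def simulate_step(p_low_kw=30.0, p_high_kw=50.0, c_kj_per_k=200.0,
                  r_k_per_kw=0.5, t_in=25.0, dt_s=1.0, steps=600):
    """Return the rack outlet temperature trace for a step from p_low to p_high kW."""
    temps = []
    t = t_in + p_low_kw * r_k_per_kw                      # steady state at low load
    tau = r_k_per_kw * c_kj_per_k                         # thermal time constant (s)
    for i in range(steps):
        p = p_high_kw if i >= steps // 2 else p_low_kw    # ~65% load step at midpoint
        t_target = t_in + p * r_k_per_kw                  # new equilibrium temperature
        t += (t_target - t) * (dt_s / tau)                # exponential approach
        temps.append(t)
    return temps

trace = simulate_step()
print(f"before step: {trace[299]:.1f} C, 100 s after: {trace[399]:.1f} C")
```

With these assumed values, the outlet temperature climbs several degrees within two minutes of the load step, which is the kind of transient that steady-state analysis never sees but cooling controls must handle.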
This is where the limitations of traditional approaches begin to emerge.
Peak-load calculations are inherently static, focusing on extremes rather than typical operation. They ensure systems won’t fail—but they do not explain how efficiently or effectively they will perform day-to-day.
Similarly, while CFD provides a highly detailed view of airflow and thermal behavior, it is typically applied as a static, steady-state analysis. These simulations rely on fixed assumptions: constant IT loads, stable airflow rates, and steady cooling system performance. They solve for a single moment in time at which conditions are in equilibrium.
This makes CFD extremely powerful for validating design intent, but less suited to capturing how performance evolves in real environments where conditions are continuously changing.
There are also practical constraints. CFD simulations require solving complex physics equations, such as the Navier–Stokes equations, making them computationally intensive and better suited to targeted analysis than continuous evaluation.
All of this matters because infrastructure performance is not linear. Systems that appear efficient at peak load—or under a single set of modelled conditions—can behave very differently across a full year of operation, with energy use, cooling effectiveness, and available capacity varying significantly outside those scenarios.
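A rough numerical sketch shows why this non-linearity matters. The load profile and coefficient-of-performance (COP) figures below are invented for illustration, not data from the article; real equipment curves come from manufacturer data.

```python
# Illustrative sketch: annual chiller energy estimated from a peak-rated COP
# versus a part-load COP curve. All figures are assumed for demonstration.

# Fraction of the year spent at each load fraction (sums to 1.0) -- assumed.
load_profile = {0.25: 0.30, 0.50: 0.40, 0.75: 0.20, 1.00: 0.10}

# Assumed coefficient of performance at each load fraction.
cop_at_load = {0.25: 2.5, 0.50: 3.5, 0.75: 4.0, 1.00: 4.5}

peak_cooling_kw = 1000.0
hours_per_year = 8760

# Naive estimate: assume the peak-rated COP applies at every operating point.
naive_kwh = sum(
    peak_cooling_kw * load * hours_per_year * share / cop_at_load[1.00]
    for load, share in load_profile.items()
)

# Part-load-aware estimate: use the COP that matches each operating point.
part_load_kwh = sum(
    peak_cooling_kw * load * hours_per_year * share / cop_at_load[load]
    for load, share in load_profile.items()
)

print(f"naive (peak-COP) estimate: {naive_kwh:,.0f} kWh")
print(f"part-load-aware estimate: {part_load_kwh:,.0f} kWh")
print(f"underestimate: {100 * (part_load_kwh / naive_kwh - 1):.0f}%")
```

With these assumed curves, judging the plant by its peak-rated efficiency understates annual cooling energy by roughly a quarter, because the system spends most of its hours at the less efficient partial-load points.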
As AI workloads increase both compute density and variability, this gap between design assumptions and operational reality is becoming harder to ignore.
Peak-load analysis and CFD remain indispensable tools for ensuring resilience and validating design. But on their own, they cannot fully explain how facilities will perform across the dynamic, real-world conditions that now define modern data centers.
A More Complex Set of Trade-Offs
At the same time, engineering decisions are becoming more complex.
Cooling, power and location choices are increasingly interdependent. Their performance varies by climate, infrastructure constraints and operating conditions. Cooling strategies that perform well in cooler regions may behave very differently in hot or humid environments, particularly under partial-load conditions.
Power strategy is also becoming a critical design variable. In markets where grid capacity is constrained, developers are exploring on-site generation, renewable integration and hybrid power solutions to accelerate time to power. These approaches can unlock capacity and accelerate deployment timelines, but require careful evaluation of how power and cooling systems interact under real operating conditions.
In many cases, these factors determine not only how efficiently a facility operates, but whether it can be delivered on schedule at all.
So for developers and operators, the question is no longer just “What works at peak?” but “What performs best across real operating conditions—and how does that impact capacity, cost and risk?”
Designing for Time to Power—and Real Performance
This shift is being accelerated by one overriding constraint: time to power.
In many regions, access to electrical capacity has become the gating factor for new data center development. Grid connection timelines are extending, and developers are increasingly required to provide robust evidence that proposed facilities can operate within available infrastructure constraints.
As a result, infrastructure decisions must be supported by credible, data-driven analysis much earlier in the design process. Overestimating power requirements can limit deployable capacity, while underestimating cooling or load behavior can lead to delays in grid approval or performance issues after commissioning.
This is where the industry is evolving—not by replacing traditional methods, but by building on them.
Dynamic whole-facility simulation adds a new layer of analysis, enabling engineering teams to understand how data centers behave across all operating conditions—hour by hour, across an entire year, and under real climate conditions.
Working alongside peak-load calculations and CFD, it provides a more complete picture of performance, allowing teams to:
- Select cooling strategies based on year-round performance
- Right-size power and cooling infrastructure
- Identify capacity constraints under real workloads
- Optimize control strategies for partial-load efficiency
- Prove the impact of distributed/renewable generation
- Deliver robust grid connection and water-use evidence
- Select climate-appropriate technologies and seasonal control strategies
- Evaluate retrofit strategies for higher-density AI workloads
- Validate operational strategies, including control logic and response to rapid load variability
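As a toy illustration of what hour-by-hour, whole-facility modeling looks like in principle, the sketch below steps through all 8,760 hours of a year. It is grossly simplified relative to real simulation tools: the climate profile, load profile, and cooling efficiency relationship are all invented for demonstration.

```python
import math

# Toy 8,760-hour facility model. Every parameter here is an assumption for
# illustration -- a real whole-facility simulation resolves far more physics
# and plant detail.

IT_PEAK_KW = 1000.0

def outdoor_temp_c(hour):
    """Synthetic climate: seasonal and daily sinusoids around 15 C."""
    seasonal = 10.0 * math.sin(2 * math.pi * (hour / 8760 - 0.25))
    daily = 5.0 * math.sin(2 * math.pi * ((hour % 24) / 24 - 0.25))
    return 15.0 + seasonal + daily

def it_load_kw(hour):
    """Synthetic IT load: a 60% base with a daytime bump."""
    bump = max(0.0, math.sin(2 * math.pi * (hour % 24) / 24))
    return IT_PEAK_KW * (0.6 + 0.3 * bump)

def cooling_power_kw(it_kw, t_out):
    """Cooling draw rises with outdoor temperature (near-free cooling below 10 C)."""
    overhead = 0.03 + 0.01 * max(0.0, t_out - 10.0)   # fraction of IT load
    return it_kw * overhead

it_total = cooling_total = 0.0
for h in range(8760):
    it = it_load_kw(h)
    it_total += it
    cooling_total += cooling_power_kw(it, outdoor_temp_c(h))

other_overheads = 0.05 * it_total          # assumed fixed losses (UPS, lighting)
pue = (it_total + cooling_total + other_overheads) / it_total
print(f"annual PUE estimate: {pue:.2f}")
```

Even this toy version makes the point: the annual PUE falls out of thousands of distinct operating states, not a single design condition, and changing the climate profile or control logic changes the answer.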
Rather than replacing established tools, this approach connects design intent with operational reality, enabling more confident and informed decision-making.
It allows infrastructure strategies to be tested not just for resilience, but for efficiency, adaptability, and long-term performance—before construction begins.
Designing for Reality, Not Extremes
Designing for the other 99.9% of the year means designing for fluctuating loads, changing climates and increasingly transient operating conditions.
Dynamic simulation provides a fundamentally different level of insight. By modeling real load profiles, climate-specific conditions and power–cooling interactions, engineering teams can validate infrastructure strategies under the conditions facilities will actually experience, not just theoretical extremes.
Crucially, this complements—rather than replaces—traditional approaches. Dynamic simulation brings these together within the context of real operation, revealing how systems perform over time.
This layered approach enables a deeper understanding of performance, supporting informed decisions that de-risk capital infrastructure spend while optimizing both energy and resource efficiency.
This approach is already being applied at scale. IES is a global leader in building performance modeling, with decades of experience supporting the design and operation of complex, high-performance facilities. That expertise is now being applied to data centers, enabling teams to simulate entire facilities across all 8,760 hours of the year and test how systems respond to real-world variability.
In one recent hyperscale data center project, this approach achieved an industry-leading PUE of 1.16 on a liquid-cooling retrofit that increased capacity from 10 kW to 50 kW per rack, while maintaining performance and resilience under dynamic load conditions. It also provided the evidence needed to validate the retrofit design for facilities in different climate zones, supporting infrastructure planning before construction.
As power constraints tighten and AI workloads continue to evolve, the ability to layer these approaches—combining traditional design methods with dynamic, data-driven insight—is now essential. It enables developers to de-risk capital investment, accelerate time to capacity, and ensure that both new and existing facilities can operate as efficiently and effectively as possible.
To see how whole-facility, climate-specific modeling can support your next data center project—whether new build or retrofit—download the free white paper here.
About the Author

Mark Knipfer
Mark Knipfer leads Data Center Services at Integrated Environmental Solutions (IES), where he works with global engineering teams and operators to evaluate performance, energy efficiency, and infrastructure strategies for high-performance facilities.
Integrated Environmental Solutions (IES) provides advanced building performance modeling and simulation tools used by engineering teams worldwide to evaluate complex infrastructure systems. Learn more about IES and download the full data center white paper here.



