Effective Resilience: Designing Data Centers that Excel (Without Excess)
Hyperscale and Fortune 100 customers know their use cases inside and out. They hold high standards for uptime and operations — but they also know that designing, developing and operating a data center involves far more than simply supplying processing and compute capacity.
The companies we serve don’t just have rigorous standards for their tech; they have them for their customers and their role as neighbors and community contributors as well. For developers, meeting these mandates means designing and developing facilities that offer the ideal mix of safety and resilience alongside efficiency and unobtrusiveness.
Of course, we need to create something redundant enough to overcome uncertainty — so we don’t design for a typical day or a comfortable load curve. We design for the day when our systems will be put through their paces: when a tenant is utilizing 100 percent of their contracted IT capacity under the most demanding operating conditions.
Still, designing to maximize reliability does not mean throwing in everything but the kitchen sink to handle an unrealistic catastrophe, especially when it's equally important to deliver refined, efficient, high-value, low-impact projects that fit well within their communities.
As customer priorities evolve, developers must design solutions and problem-solve around these evolving needs to ensure our facilities continue to strike the right balance for each use case.
What is Realistic?
Delivering the right design for the right location and customer use case (meaning, ensuring we’re not overengineering or under-engineering) begins with evaluating credible scenarios. What could actually fail? How would systems respond? What is required by contracts and service level agreements?
Where design can begin to drift is when a worst-case scenario quietly becomes a worst-imaginable case. After all, failure analysis only works when it reflects credible operating scenarios.
At a certain point, some scenarios become so unlikely that designing against them adds bulk, complexity and cost without proportional gains in reliability or efficiency. Overdesign can leave excess infrastructure that consumes resources unnecessarily and strands or underutilizes capacity.
We could ‘what if’ ourselves all day, but we’re designing for effective resilience, not excessive resilience.
Effective resilience aligns failure domains logically without forcing unnecessary buildup of electrical or mechanical capacity that may never be used. This approach supports both reliable operations for customers and responsible stewardship of energy and infrastructure resources in the communities where these facilities operate.
Designing to meet sophisticated needs without overbuilding has always been an important part of how Stream operates. Rightsizing our design and operations for individual customers, communities and workloads is fundamental in how we use resources responsibly, reduce impact and avoid material waste.
To achieve that, a data and experience-based approach serves us and our stakeholders well.
Learning From Operations, Not Assumptions
Leaning on the facts helps. As our designs evolve to meet shifting customer priorities for operations and community alignment, Stream relies heavily on the data generated by our existing facilities, the feedback of our experienced in-house operations team and the insights that come with 27 years of trusted results.
From a system perspective, what ultimately matters is how efficiently a facility manages thermal load within its mechanical infrastructure. We design chilled water plants around total system load; a plant has a finite heat rejection capacity, regardless of whether the heat comes from air-cooled or liquid-cooled equipment. Viewing systems through that lens allows us to evaluate where efficiency can be improved and where redundancy levels can be refined.
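As a rough sketch of that lens — with hypothetical chiller counts, capacities and loads, not actual Stream design values — a system-level heat-rejection check treats the plant as one thermal budget shared by every cooling delivery method:

```python
# Illustrative sketch: evaluating a chilled water plant as a single thermal budget.
# All equipment counts and megawatt figures below are hypothetical examples.

def plant_headroom_mw(chiller_capacities_mw, air_cooled_load_mw,
                      liquid_cooled_load_mw, redundancy="N+1"):
    """Return remaining heat-rejection headroom in MW.

    Under N+1 redundancy, the largest chiller is assumed out of service
    (the credible failure scenario), and the plant must still reject the
    facility's full thermal load.
    """
    capacities = sorted(chiller_capacities_mw, reverse=True)
    if redundancy == "N+1":
        usable = sum(capacities[1:])  # largest unit held in reserve
    else:
        usable = sum(capacities)
    # The plant sees total thermal load whether heat arrives from
    # air-cooled or liquid-cooled (e.g., direct-to-chip) equipment.
    total_load = air_cooled_load_mw + liquid_cooled_load_mw
    return usable - total_load

# Four 10 MW chillers serving a mixed air- and liquid-cooled load:
print(plant_headroom_mw([10, 10, 10, 10],
                        air_cooled_load_mw=12,
                        liquid_cooled_load_mw=14))  # → 4
```

Framing the check this way makes it easy to ask the question the article poses: does adding another chiller buy proportional reliability, or only stranded capacity?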
In some cases, our evaluations have shown opportunities to better align system configurations with the dynamics of today’s market. By refining assumptions and applying real-world performance data, we can adjust the number of critical blocks on certain campuses while continuing to meet aggressive redundancy and resiliency levels.
These decisions reflect the natural progression of design standards as conditions and requirements evolve, building on choices that were appropriate and effective at the time.
Designing Systems, Not Just Equipment
Efficiency gains come from understanding how components interact within a larger system, not from selecting individual pieces of equipment in isolation.
When Stream evaluates new gear, the same questions apply. How does it plug into the overall systems profile? How does it fit within the block configuration? Can it be more efficient than what is currently in use?
Our approach is tied to a clear incentive: we want as much of the energy entering a facility as possible to be converted into critical IT capacity rather than consumed by support systems. This is what our customers expect — and it’s what responsible infrastructure design should enable.
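One common industry metric for that incentive is power usage effectiveness (PUE): the ratio of total facility power to critical IT power, where a value approaching 1.0 means nearly all incoming energy reaches the IT load. The source doesn't state that Stream tracks this specific metric, and the figures below are made up for illustration:

```python
# Illustrative PUE calculation; the power figures here are hypothetical.

def pue(it_power_kw, cooling_kw, electrical_losses_kw, other_kw=0.0):
    """Power Usage Effectiveness = total facility power / critical IT power.

    Lower is better: a PUE of 1.2 means support systems (cooling,
    electrical losses, etc.) consume 20% on top of the IT load.
    """
    total = it_power_kw + cooling_kw + electrical_losses_kw + other_kw
    return total / it_power_kw

# 10 MW of IT load, 1.5 MW of cooling, 0.5 MW of electrical losses:
print(pue(10_000, 1_500, 500))  # → 1.2
```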
This system‑level thinking also informs Stream’s equipment selection across our data center portfolio. Our supply chain and procurement model was built to deliver consistency and flexibility so our developments (and customers) can be nimble, leveraging the same parts and components across any number of design densities rather than locking facilities into highly specialized configurations.
Flexibility Is a Form of Resilience
At the same time, we must remember that modern data center design is shaped as much by uncertainty as by risk.
Densities that once fit within a narrow range now vary significantly, and cooling strategies continue to evolve. At Stream, preserving configurability within our standards and allowing certain decisions to be made later in the development process enables our customers to adapt as requirements become clearer. This ultimately helps reduce the cost, complexity and unnecessary environmental impacts of overengineering.
In an environment defined by rapid change, flexibility is one of the most effective ways to ensure long-term resilience.
The most expensive decision is the one that cannot be changed, so we created a design and construction model that helps customers avoid that risk.
Designing for Failure Means Designing for Change
At Stream, we’re guided by our DACS program (Design and Construction Standards). DACS is a flexible framework built on proven systems and preferred configurations that have evolved as technologies, densities and tenant requirements have changed over time. Within that framework, power is contracted for maximum demand, mechanical plants are sized to handle full thermal load under extreme conditions, and reliability commitments are made based on peak operating scenarios.
DACS was never intended to be a fixed prototype that gets repeated without question. Internally, it has long been thought of as a set of ingredients or a kit of parts: a collection of preferred systems and configurations that can be deployed differently depending on tenant needs while still serving the broader market.
In the past 18 to 24 months, the pace of customer change has accelerated, densities have continued to rise, and chipsets have evolved quickly, all while efficiency goals have become even more pressing. Rather than abandoning core principles, DACS has allowed Stream to focus on maintaining what continues to work while evolving how those concepts are applied. This includes preserving core electrical systems, modifying fan wall designs to support direct-to-chip liquid cooling, and continuing to rely on a closed-loop chilled water system capable of supporting whatever compute load is deployed (while eliminating routine daily water use for IT cooling — an aspect that communities continue to appreciate).
Our objective is not reinvention, but measured, incremental adaptation that supports a wide range of compute loads and community priorities while maintaining the utmost flexibility, reliability and efficiency for customers.
Resilience Through Discipline, Not Excess
At Stream, designing the right facility for the job is about accountability to everyone — our customers and our communities.
Our customers depend on infrastructure that performs on the hardest days, adapts as technologies change, and avoids unnecessary complexity that can drive cost and long-term inflexibility. Our communities depend on facilities that make meaningful contributions while remaining low profile and without taxing local resources.
Grounding design and development in reality while refining our engineering standards through real-world operation is how we deliver the dependable infrastructure Stream is known for. It’s also how we ensure our customers can always trust their infrastructure while also making sure our facilities are as efficient and low-profile as possible.
About the Author

Eric Closson
Eric Closson is Senior Design Manager for Stream Data Centers. Eric brings more than a decade of data center experience to hyperscale critical infrastructure design projects, serving as a conduit between design, construction and development teams to ensure requirements are met on time and with the utmost quality.
Stream Data Centers, a time-tested hyperscale partner and one of the longest-standing developers in the industry, is a high-growth developer and operator of wholesale data center colocation capacity and build-to-suit facilities for hyperscale and enterprise users in major markets across the United States. For more than 26 years, Stream has set new standards for innovation, operational excellence and sustainability in the data center industry, acquiring, developing and managing complex data center projects for the world’s most demanding users, with over 90% of its inventory leased to Fortune 100 customers.