How to Mitigate Risk in Data Center Site Selection

Jan. 11, 2016
Companies in the midst of a data center site selection process must weigh risk mitigation. Robert McClary of FORTRUST offers six risk factors to consider.

Robert McClary, FORTRUST

1. How likely is the data center to be exposed to an earthquake risk?

To illustrate seismic activity in the United States, the United States Geological Survey divides the country into zones numbered 0 to 4, indicating occurrences of observed seismic activity and assumed probabilities of future activity. Even with seismic enhancements added to data center equipment, it is preferable to choose a data center that falls within seismic zone 1 or below.
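As a rough illustration of applying the "zone 1 or below" rule to a shortlist, the sketch below filters candidate sites by seismic zone. The zone assignments here are hypothetical placeholders, not actual USGS designations:

```python
# Minimal sketch: filter candidate sites by seismic zone.
# Zone values are hypothetical placeholders, not USGS data.
MAX_ACCEPTABLE_ZONE = 1  # prefer zone 1 or below, per the guidance above

candidate_sites = {
    "Site A": 1,
    "Site B": 2,
    "Site C": 4,
}

acceptable = [site for site, zone in candidate_sites.items()
              if zone <= MAX_ACCEPTABLE_ZONE]
print(acceptable)  # only sites in zone 1 or below survive the filter
```

In practice, the zone values would come from published seismic hazard maps rather than a hand-entered table.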

2. Is there a risk that the data center will be exposed to a tornado, hurricane or blizzard?

Refer to the National Oceanic and Atmospheric Administration (NOAA) for records of high-intensity tornadoes, hurricanes, and related severe weather. If the data center's region often experiences heavy snowfall, ask how quickly snow would be removed from routes leading to the facility. Tornadoes, hurricanes, and blizzards are all large-scale atmospheric events that can degrade a data center's performance. Find a data center that has chosen a site with a low probability of these events occurring.

3. Is the data center in a high risk flood or fire zone?

Weather-related catastrophes like flooding and wildfire can also disrupt data center operations. Flooding can occur virtually anywhere, so finding a completely unaffected facility can seem impossible. Instead, find out what has caused floods in the data center's region and whether those events were handled effectively in the past. The historical causes of floods indicate their likely frequency, and how well the data center controlled them indicates which site has the best disaster recovery plans in place. The same approach applies to wildfires if the data center lies in an area commonly affected by them.

4. How close are support resources? Does the data center have access to replacement parts, offsite backup media, and alternative power sources?

Is the data center prepared to sustain itself should the main supply of utility power fail? Specifically, are the facility's generators rated for continuous-runtime operation as its primary source of power during an extended utility outage?

When considering alternate power sources, it is important to know whether the data center has access to more than one grid from the utility, and whether that grid also feeds a large number of residential developments or construction sites. In the event power fails, standby- and emergency-rated generators should be capable of continuous-runtime operation, but frequently they are not.
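A related sanity check during diligence is on-site fuel autonomy: how long the generators can run before refueling, versus the fuel supplier's contracted response time. The back-of-the-envelope sketch below uses entirely hypothetical figures to show the arithmetic:

```python
# Back-of-the-envelope generator fuel autonomy check.
# All figures below are hypothetical examples, not real facility data.
tank_capacity_gal = 10_000   # on-site diesel storage
burn_rate_gph = 70           # gallons per hour at the expected load
refuel_lead_time_hr = 24     # contracted refueling response time

autonomy_hr = tank_capacity_gal / burn_rate_gph
print(f"Fuel autonomy: {autonomy_hr:.1f} hours")

# The facility should be able to run well past the refueling window.
if autonomy_hr > refuel_lead_time_hr:
    print("Refueling margin looks adequate")
else:
    print("At risk: autonomy shorter than refueling lead time")
```

The useful question for a provider is not just "do you have generators," but what the actual burn rate is at your expected load and what refueling commitments are in place.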

5. Does the data center have access to multiple major communications networks and backbones?

Are major carrier routes reasonably close to the data center, and are they major fiber routes or smaller spurs off the main backbone? How much fiber is already in place, and how much of it is 'lit', or ready for service? How much is 'dark'? Are the carriers themselves present in the area, or do they rely on third parties for maintenance? Is at least one of the data center's carriers a Tier 1 provider (peering directly with other major backbones at private and public peering exchanges)? Preferably, the data center should have access to multiple major communications networks, at least one of them Tier 1, off a major fiber route with much of the fiber ready for service. The carriers should be directly responsible for their own maintenance; they are more likely to react quickly to problems along their major fiber routes than elsewhere.

6. Is the data center aware of the disasters most likely to strike its location, and does it have risk mitigation measures and disaster recovery plans in place?

What are they? Have they been enacted in the past? If so, how effective were they?

Don't look for a data center with a "guarantee," because no location is completely immune to natural disasters. You should also avoid any data center that is unwilling to share its past experiences with natural disasters. Instead, look for one that knows its site's unique risks, is open about them, and has extensive plans to mitigate them. Check the facility's uptime record, the one true measuring stick of a data center, to see how well or how poorly it has prevented downtime.

Keeping the above questions in mind, look carefully at the geographic location of your data center. FEMA, NOAA, and USGS provide information about what catastrophes have historically happened in every part of the country, and what might be likely to happen next. Find a data center that is ahead of the game, one that has specifically chosen its location to be outside the range of most natural disasters. This will help assure the 100% uptime so critical to your business success.

No matter where you choose, no place is 100% safe, which is why data center site selection is so important. Partner with a data center that has taken all of these risks into account and has put measures in place to guard against them. Use these questions to find the data center that was not only built prepared, but acts prepared.
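Teams comparing shortlisted facilities side by side sometimes reduce the six questions above to a simple weighted scorecard. The sketch below shows one way to do that; the weights and per-site scores are hypothetical examples, not real assessments:

```python
# Minimal sketch: weighted risk scorecard for the six factors above.
# Weights and scores (0 = low risk, 10 = high risk) are hypothetical.
WEIGHTS = {
    "earthquake": 0.20,
    "severe_weather": 0.20,    # tornado / hurricane / blizzard
    "flood_fire": 0.20,
    "support_resources": 0.15,
    "network_access": 0.15,
    "disaster_planning": 0.10,
}

def risk_score(scores: dict) -> float:
    """Weighted average risk for one site; lower is better."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

site_a = {"earthquake": 1, "severe_weather": 3, "flood_fire": 2,
          "support_resources": 2, "network_access": 1, "disaster_planning": 2}
site_b = {"earthquake": 6, "severe_weather": 2, "flood_fire": 4,
          "support_resources": 3, "network_access": 2, "disaster_planning": 3}

print(f"Site A risk: {risk_score(site_a):.2f}")
print(f"Site B risk: {risk_score(site_b):.2f}")
```

A scorecard like this doesn't replace due diligence, but it forces the weighting conversation: how much earthquake exposure is your organization actually willing to trade for, say, better network access?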

Robert D. McClary is Senior Vice President and General Manager, responsible for the overall supervision of business operations, high-profile construction and strategic technical direction at FORTRUST.

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
