Average Cost of a Data Center Outage

Jan. 25, 2016
In this week’s Voice of the Industry, Daniel Draper of Emerson Network Power explains the latest figures on the cost of data center downtime, as quantified by the Ponemon Institute. The average cost of a data center outage in 2016 now stands at $740,357, up 38% from 2010.

Time is money. And data center downtime is A LOT of money. That’s the takeaway from the Ponemon Institute’s most recent edition of its Cost of Data Center Outages report.

The average cost of a data center outage in 2016 now stands at $740,357, up 38% from when the report was first developed in 2010. That works out to $8,851 per minute in lost revenue and unproductive employees (“e-mail’s down, time for some Minesweeper!”).
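As a quick sanity check, those two figures together imply an average outage length of about an hour and a half. A minimal sketch of the arithmetic (the implied duration is derived here, not a number Ponemon reports):

```python
# Figures from the Ponemon report as cited above.
AVG_OUTAGE_COST = 740_357   # USD per unplanned outage (2016)
COST_PER_MINUTE = 8_851     # USD per minute of downtime (2016)

# Implied average outage duration, derived by simple division.
implied_minutes = AVG_OUTAGE_COST / COST_PER_MINUTE
print(f"Implied average outage duration: {implied_minutes:.0f} minutes")
# → Implied average outage duration: 84 minutes
```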

So how did the Ponemon Institute come up with an average cost of $740,357 per unplanned outage? To get that figure, the institute audited 63 data centers in North America that had experienced an outage. Using an activity-based costing model, the researchers captured both direct and indirect costs, including:

  • Damage to mission-critical data
  • Impact of downtime on organizational productivity
  • Damages to equipment and other assets
  • Cost to detect and remediate systems and core business processes
  • Legal and regulatory impact, including litigation defense cost
  • Lost confidence and trust among key stakeholders
  • Diminishment of marketplace brand and reputation

Now back to the cost of downtime. Back in 2010, the average cost of an outage was calculated at $505,502. So what explains the roughly quarter-million-dollar increase? Well, think back to 2010 and how much internet-based technology we used (or didn’t use, as the case may be). In 2010, I had a Facebook account, as did 500 million others around the world; today Facebook has 1.5 billion profiles. 2010 was the year the first iPad came out. Cyber Monday accounted for less than a billion dollars in sales; today, over $2 billion of commerce happens online on just that one day. Cable cord-cutters are growing in number, and streaming media is quickly becoming mainstream in households all across the country.

More and more commerce and communication is happening through the web each day, and the importance of networks and data centers is higher than ever before. So what can we do to make sure data center owners and operators aren’t losing money (and, more importantly, creating unhappy customers)? Well, let’s take a look at the root causes of these outages at the audited facilities:

UPS system failure (including batteries), cyber attacks and the dreaded “human error” account for 70% of the outages. Almost all of these outages were preventable, and in many cases the cost of prevention was insignificant compared to the direct and indirect costs of the outage.

Generally speaking, here are some of the most basic tips to keep downtime from bringing you down:

  1. Monitor UPS Batteries – Batteries are the weak link in the UPS system. Use remote battery monitoring to identify battery problems before they impact operations.
  2. Use Intelligent Thermal controls with Cooling Units – These controls improve protection by monitoring component data points, providing unit-to-unit communications, matching airflow and capacity to room loads, automating self-healing routines, providing faster restarts and preventing hot/cold air mixing during low load conditions.
  3. Perform Preventive Maintenance – An increase in the number of annual preventive maintenance visits correlates directly with an increase in UPS MTBF. Going from zero to one preventive maintenance visit a year creates a 10x improvement; going from zero to two visits a year creates a 23x improvement.
  4. Strengthen Policies and Training – Make sure the EPO (emergency power off) button is clearly labeled and shielded against accidental activation. Document and communicate policies, and conduct regular training.
  5. Standardize and Automate Security Management – Use console servers to provide secure, remote access to servers to simplify patch management and provide early detection of attacks.
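To see why the preventive-maintenance multipliers in tip 3 matter financially, here is a hypothetical back-of-the-envelope model. The baseline failure rate of one UPS failure per year is an assumption for illustration only; the 10x and 23x MTBF multipliers and the $740,357 per-outage cost are the figures cited above:

```python
# Hypothetical sketch: translating MTBF improvements from preventive
# maintenance (PM) visits into expected annual outage cost.
AVG_OUTAGE_COST = 740_357          # USD per unplanned outage (from the report)
BASELINE_FAILURES_PER_YEAR = 1.0   # assumed failure rate with zero PM visits

# PM visits per year -> MTBF improvement factor (from the report)
mtbf_multiplier = {0: 1, 1: 10, 2: 23}

for visits, mult in mtbf_multiplier.items():
    # A k-times-better MTBF means 1/k as many expected failures per year.
    expected_failures = BASELINE_FAILURES_PER_YEAR / mult
    expected_cost = expected_failures * AVG_OUTAGE_COST
    print(f"{visits} PM visits/yr: expected annual outage cost ${expected_cost:,.0f}")
```

Under these assumptions, even a single annual maintenance visit cuts expected downtime cost by an order of magnitude, which is the point the tip is making.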

Cost of downtime is a popular number and a useful metric when making the case for additional resources (human and equipment) to keep your facility always on, but it’s not the only metric that IT and facility professionals should think about.

In the coming months, the Ponemon Institute will be releasing four additional reports as part of the Data Center Performance Benchmark Series, covering the issues of security, productivity, speed-of-deployment and cost-to-support compute capacity.

Submitted by Daniel Draper, Director of Marketing Programs for Emerson Network Power.

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
