The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Shay Demmons of RunSmart software.
SHAY DEMMONS, RunSmart
Shay Demmons is EVP, General Manager of the RunSmart software division. In this position, Shay is responsible for running all facets of the software business unit including sales, marketing, product development and customer service. Before this role, Shay was BASELAYER’s Director of Product Management for the RunSmart product. Before joining BASELAYER, Shay worked at Corning where he was in charge of their Broadband product lines across the globe. Shay also held Senior Product, Business Development, Marketing and Engineering roles at Acoustic Technologies, NKK Switches and Intel Corp; notably obtaining two patents during his tenure at Intel. In all of these roles, he led a strategic transformation by improving development cycles, driving down cost and growing revenue year-over-year. Mr. Demmons earned a BS in Electrical Engineering from Arizona State University as well as an MBA from Babson College.
Here’s the full text of Shay Demmons’ insights from our Executive Roundtable:
Data Center Frontier: The recent British Airways data center outage caused widespread disruption to the airline’s operations, with early estimates placing its business impact at more than 80 million pounds ($104 million US). What are the most effective ways to eliminate these types of outages?
Shay Demmons: The British Airways outage highlights the exposure of not having a well thought-out operational plan that accounts for failures and other expected operating conditions. Failures are part of a well-conceived operational plan, and it is the responsibility of the core infrastructure team to have the people, processes, and technology in place to identify and react to this wide range of operating conditions. All applications and services depend on this core infrastructure, and yet a single human error or cyber-attack can wreak havoc on an enterprise and their core business unless the operating plan includes provisions for those conditions and the optimal responses.
One of the most basic yet readily available technologies today is detection and response. Many data centers lack even real-time monitoring of infrastructure, which is the first step to truly eliminating these sorts of outages. And once a condition is detected, most data centers lack the ability to respond to it. The concept of a software-defined data center (SDDC) includes both the detection of failures and automated response capabilities. These software-defined structures leverage integration across the IT and facilities layers, allowing business continuity goals to be met. Bottom line: No longer can IT and infrastructure management remain siloed systems. They need to complement each other and respond together to mitigate disruption with limited (or no) human intervention.
For example, after receiving real-time information that a system is failing or being removed from service for maintenance purposes, the automated response needs to determine the best course of action to shed demand and maintain a level of business services consistent with the business itself. This may include preemptively shutting down non-critical servers, throttling equipment, bursting into the cloud, or turning on other assets. With a well-conceived software-defined data center which includes failure detection and automated response, operational changes can happen quickly and automatically without human intervention.
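The shed-demand logic described above can be sketched in a few lines. This is a minimal illustration, not any actual RunSmart implementation; the event fields, capacity figures, and response actions are all assumed for the example.

```python
# Illustrative sketch of automated failure response: given a detected
# capacity loss, decide how to shed demand without human intervention.
# All names and thresholds here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class FailureEvent:
    source: str            # e.g. a UPS or cooling unit identifier
    severity: int          # 1 (minor) .. 3 (critical)
    capacity_lost_kw: float

def plan_response(event, reserve_capacity_kw, noncritical_load_kw):
    """Choose actions to cover a capacity deficit after a detected failure."""
    actions = []
    deficit = event.capacity_lost_kw - reserve_capacity_kw
    if deficit <= 0:
        # Remaining reserve covers the loss; record the event only.
        return ["log event; no load shedding required"]
    if noncritical_load_kw > 0:
        # First preemptively shut down non-critical servers.
        shed = min(deficit, noncritical_load_kw)
        actions.append(f"shut down non-critical servers ({shed:.0f} kW)")
        deficit -= shed
    if deficit > 0:
        # Anything left over bursts into the cloud.
        actions.append(f"burst remaining {deficit:.0f} kW of demand to cloud")
    return actions

print(plan_response(FailureEvent("UPS-2", 3, 500.0), 200.0, 150.0))
```

A real system would draw these inputs from live telemetry and weigh many more operating conditions, but the shape of the decision, detect, quantify the deficit, then shed or shift load in priority order, is the same.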
Data Center Frontier: The data center industry saw lots of M&A activity in the first half of 2017, highlighted by Digital Realty’s acquisition of DuPont Fabros Technology, the sector’s largest deal yet. What’s driving all this consolidation? Is it likely to continue?
Shay Demmons: The emergence of hyperscale cloud providers has created a ripple effect. Data centers that historically focused on reliability and availability are now placing a higher priority on economics. This moves the balance of power away from smaller, custom-built on-premises data centers toward larger players who can leverage their footprint to offer greater flexibility and economies of scale.
Staci Daguanno of iMason shared a similar view in a blog post titled Shifting Power and Shifting Priorities, writing, “We see data center companies expanding globally through M&A and new construction. The dynamics of the players in the space is changing very rapidly. Data center operators and end users are increasingly prioritizing energy efficiency, TCO, flexibility, and scalability over availability.”
With this shift in priorities, data center operators need a more comprehensive solution to achieve their mandates. Scale will bring some advantages; however, monitoring and tracking are no longer good enough. There is a need for greater transparency, tighter control, integration, and automation to drive companies into the future. Otherwise, they too may be swallowed up.
Shay Demmons of RunSmart software offers his industry insights as a panelist in our Executive Roundtable. (Image: RunSmart)
Data Center Frontier: What interesting trends are you seeing in data center power? Is there still room for innovation in data center electric infrastructure?
Shay Demmons: Much like in other areas of the data center, trends in power are centered on reducing the cost of operation, maximizing efficiency, and reducing waste. While renewable energy and energy storage are popular leading-edge trends being adopted alongside major cost savers like peak shaving, load shifting, and demand response, these technologies themselves remain extremely expensive, with multi-decade ROIs. As a general rule, green energy is expensive energy.
The most exciting and attractive innovations in energy management are the ones that utilize the existing assets of the data center to intelligently reduce cost. With the right DCIM tool integrated into the data center IT and infrastructure levels, data centers can take a step past capacity planning and start performing real-time capacity management. The software-defined data center should tell you how much power you need, match it with utility rates and schedules, and make cost-cutting suggestions with estimated savings. Examples of this include postponing non-critical jobs, pushing work to a cheaper location or the cloud, or using generators to shave peak load during peak times.
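The rate-matching idea above, line up deferrable work against the utility’s price schedule, can be illustrated with a toy example. The tariff figures and function names are assumptions for the sketch, not real rates or any vendor’s API.

```python
# Illustrative sketch: match deferrable jobs against a utility rate
# schedule and pick the cheapest hours to run them. The two-tier
# tariff below (peak 2pm-8pm) is a made-up example.
RATES = {h: (0.22 if 14 <= h < 20 else 0.09) for h in range(24)}  # $/kWh by hour

def cheapest_hours(job_kwh_per_hour, hours_needed):
    """Return the lowest-cost hours for a deferrable job and its estimated cost."""
    # Rank hours by price (stable sort keeps earlier hours first on ties).
    ranked = sorted(RATES, key=RATES.get)
    chosen = sorted(ranked[:hours_needed])
    cost = sum(RATES[h] * job_kwh_per_hour for h in chosen)
    return chosen, round(cost, 2)

hours, cost = cheapest_hours(job_kwh_per_hour=50, hours_needed=4)
print(hours, cost)
```

In practice the same comparison would run continuously against live demand forecasts and real tariff data, producing the kind of estimated-savings suggestions described above.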
DCIM technology has been maturing for more than a decade and is now available to any organization that wants to think strategically about cost savings and business continuity. It is safe to say that while evolutionary changes to various technologies will continue to occur, the biggest opportunity for service delivery excellence will come from improving operations, reducing human error, and applying software-defined principles to the manual processes which abound in a data center or colocation facility. The software-defined data center needs to understand the goals of the operator and the data points needed to make decisions, and then dynamically determine the best course of action based on hundreds or thousands of metrics.
Data Center Frontier: It’s increasingly a multi-cloud world. What are key strategies that data center operators and their customers can use to address multi-cloud deployments?
Shay Demmons: Multi-cloud deployments eliminate the problem of having all your eggs in one basket, but in return add a new layer of complexity to your data center management. The key to maximizing your multi-cloud or hybrid cloud deployment is to ensure that you have fully integrated your platform from cloud to on-premises and all the systems in between, so that you can start to leverage your resources and take advantage of your new flexibility.
If done correctly, data center operators can change the way they manage workloads by sending jobs to the lowest-cost resource, in real time. Placement will be determined by the workload and by predefined factors that affect its priority: Does the job require lots of I/O? Is the job sensitive to latency? Will the workload be resource intensive? Traditional follow-the-sun strategies relied on the same kinds of business metrics, essentially utilizing a pool of globally distributed resources as needed to reduce cost. Multi-cloud strategies will share a similar consideration.
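The placement decision above, route each job to the cheapest site that still meets its constraints, can be sketched as follows. The site names, prices, and latency figures are invented for illustration.

```python
# Hedged sketch of cost-based multi-cloud workload placement:
# pick the lowest-cost site that satisfies the workload's latency
# bound. All sites and numbers below are hypothetical.
SITES = {
    "on-prem":    {"cost_per_hour": 0.30, "latency_ms": 2},
    "cloud-east": {"cost_per_hour": 0.18, "latency_ms": 40},
    "cloud-west": {"cost_per_hour": 0.12, "latency_ms": 80},
}

def place(workload):
    """Route a job to the lowest-cost site within its latency budget."""
    eligible = {name: s for name, s in SITES.items()
                if s["latency_ms"] <= workload["latency_budget_ms"]}
    if not eligible:
        return "on-prem"  # fall back to local capacity
    return min(eligible, key=lambda n: eligible[n]["cost_per_hour"])

print(place({"name": "batch-etl", "latency_budget_ms": 100}))  # cost wins
print(place({"name": "trading", "latency_budget_ms": 5}))      # latency forces on-prem
```

A production scheduler would also weigh the I/O and resource-intensity factors mentioned above, plus data gravity and egress costs, but the core trade-off, constraints first, then cost, is the same.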
Your actual workload should drive business decisions, and your software-defined data center solution should not only help manage your multi-cloud deployment but also turn it into a resource that reduces cost.