Gaining Control of Your Cloud Spend: A Data Center Decision-Maker’s Guide

Nov. 30, 2020
How can organizations gain control of their cloud spending? As data center managers review budgets for 2021, DataBank CTO Vlad Friedman looks at strategies and tools to optimize cloud and data center spending.


For IT decision-makers, there’s a lot of hype around the theory that the public cloud is always the least-expensive option. It might not be. Corporate spend on public cloud infrastructure rose 25 percent year over year to nearly $17 billion in the second quarter of 2020, an all-time high, according to Synergy Research Group. While corporate spending in the public cloud may continue to increase, companies are often surprised when they receive their invoices. Those surprises have many IT executives questioning whether they considered all of the alternatives when they executed their “cloud-first” strategy.

The net result is that many data center managers are now reviewing budgets for 2021 and are looking for ways to economize when it comes to cloud decision-making. With this in mind, how can organizations gain control of their cloud spend and incorporate a forward-looking strategy that considers the most relevant alternatives?

The answer depends on the design and usage patterns of your workloads.

For production workloads, using third-party cloud pricing and optimization tools, such as CloudHealth or CloudCheckr, is the first step. While many of the tasks performed by these tools can be done manually, automating the remediation of common cost-driving mistakes is an essential tactic for long-term success:

  • Reservations – Steady-state workloads benefit from reserved capacity; these tools can recommend and purchase reserved instances, driving savings of up to 50%.
  • Sizing – Ensure the instance types selected are appropriately sized for your application.
  • Orphaned Snapshots and Volumes – Deleting an instance doesn’t always delete the attached disks and snapshots. Clean up old storage you aren’t using.
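The orphaned-storage cleanup in the last bullet is straightforward to automate. The sketch below is a minimal illustration, assuming volume records shaped like the entries in an AWS `describe_volumes` response; the helper name and sample data are hypothetical, and in practice you would page through the real API (for example via `boto3`) rather than use a local list.

```python
def find_orphaned_volumes(volumes):
    """Return volume records not attached to any instance.

    `volumes` is a list of dicts shaped like the `Volumes` entries of an
    AWS describe_volumes response; an unattached EBS volume reports the
    "available" state.
    """
    return [v for v in volumes if v.get("State") == "available"]


# Hypothetical sample data for illustration only
volumes = [
    {"VolumeId": "vol-001", "State": "in-use", "Size": 100},
    {"VolumeId": "vol-002", "State": "available", "Size": 500},  # orphan
    {"VolumeId": "vol-003", "State": "available", "Size": 250},  # orphan
]

orphans = find_orphaned_volumes(volumes)
reclaimable_gib = sum(v["Size"] for v in orphans)
print(len(orphans), "orphaned volumes,", reclaimable_gib, "GiB reclaimable")
```

Run on a schedule, a report like this surfaces the storage that outlived its instance, which is exactly the waste the cost tools above flag automatically.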

Second, forecasting costs for development workloads can be more difficult because servers are spun up and spun down frequently – and often launched and forgotten. One strategy we implemented at DataBank was simply tagging resources with a developer’s name and sending a copy of the consumption report to the entire team. That visibility alone brought enough awareness to drive 50% savings.
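As a minimal sketch of that tagging strategy, the function below groups monthly cost by an "Owner" tag so each developer sees their own spend; the tag name, field names, and sample resources are all hypothetical illustrations, not a specific provider's billing schema.

```python
from collections import defaultdict


def consumption_by_owner(resources):
    """Group monthly cost by the 'Owner' tag for a per-developer report.

    Resources with no owner tag are bucketed as UNTAGGED so the
    launched-and-forgotten servers stand out for follow-up.
    """
    totals = defaultdict(float)
    for r in resources:
        owner = r.get("tags", {}).get("Owner", "UNTAGGED")
        totals[owner] += r["monthly_cost"]
    return dict(totals)


# Hypothetical sample inventory
resources = [
    {"id": "i-01", "tags": {"Owner": "alice"}, "monthly_cost": 120.0},
    {"id": "i-02", "tags": {"Owner": "bob"}, "monthly_cost": 340.0},
    {"id": "i-03", "tags": {}, "monthly_cost": 95.0},  # launched and forgotten?
]

report = consumption_by_owner(resources)
```

Circulating a report like this to the whole team is the low-tech version of the visibility that drove the savings described above.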

Lastly, while public clouds excel at running highly variable spin-up and spin-down workloads, latency-sensitive, compute-intensive, and IOPS-intensive applications often perform better and are more efficient to operate on private and colocation infrastructure. Interestingly, the same tools used to manage costs will often provide insight into which applications are best suited for private infrastructure.

Invest time to understand your consumption under load, your application’s sensitivity to latency, and the potential hidden costs of labor, per-transaction fees, and bandwidth when utilizing the public cloud. While unit costs may appear trivial, they add up quickly. If you don’t have access to commercial price and workload evaluation tools, consider using a free tool like LiveOptics.com to benchmark your applications over time and predict realistic public cloud costs.
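To see how quickly "trivial" unit costs add up, consider bandwidth. The sketch below estimates monthly egress cost from tiered per-GiB pricing; the tier sizes and rates are hypothetical placeholders for illustration, not a quote from any provider's price list.

```python
def estimate_egress_cost(gib_per_month, tiers):
    """Estimate monthly egress cost from tiered per-GiB pricing.

    `tiers` is a list of (tier_size_gib, price_per_gib) pairs applied in
    order, mirroring the tiered rate cards public clouds publish.
    """
    cost, remaining = 0.0, gib_per_month
    for size, price in tiers:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost


# Hypothetical tiers: first 10 TiB at $0.09/GiB, next 40 TiB at $0.085/GiB
tiers = [(10 * 1024, 0.09), (40 * 1024, 0.085)]
monthly = estimate_egress_cost(20 * 1024, tiers)  # 20 TiB out per month
```

Even at these illustrative rates, 20 TiB of monthly egress runs well into four figures per month, which is why bandwidth-heavy applications often pencil out better in traditional or edge data centers.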

Service providers like DataBank will often perform a free analysis of your workloads and design hybrid-cloud solutions, blending the benefits of each type of infrastructure into a singular, highly performant, and cost-effective solution.

How do you create a successful cloud spend strategy?

  1.  Start with a comprehensive inventory of your servers, applications, storage, and usage.
  2.  Identify opportunities to create shared services pools. For example, do you need a separate SQL server for every application? Create a single high-availability (HA) SQL cluster to service several applications. Apply this methodology to applications that drive considerable licensing or instance costs, such as SQL Server and Oracle.
  3.  Align instance sizes with consumption.
  4.  Determine which applications can be shut down when not in use. Does your environment run large batch processes? Spin up those servers only when needed.
  5.  Analyze your potential bandwidth (egress) costs. While rarely factored into cost calculators, it’s often one of the highest costs. Consider moving bandwidth-intensive and latency-sensitive applications to traditional or even edge data centers.
  6. Plan your long-term data retention and archival needs to align with your security and compliance strategy. As data grows over time, so do costs. Leverage “cheap and deep” storage for long-term archival. Charges in a hybrid data center can be significantly lower as you achieve scale.
  7.  Assume your costs will be 30-40% higher than you planned initially. Over time, it will become apparent which applications are best suited for the public cloud and which should be repatriated to private infrastructure.
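Several of the steps above reduce to simple calculations. As a minimal sketch of step 3 (aligning instance sizes with consumption), the following picks the cheapest instance type that covers observed peak utilization plus headroom; the catalog, prices, and headroom figure are hypothetical assumptions, not any provider's actual offerings.

```python
# Hypothetical instance catalog: (name, vcpus, mem_gib, hourly_cost)
INSTANCE_TYPES = [
    ("small", 2, 4, 0.05),
    ("medium", 4, 8, 0.10),
    ("large", 8, 16, 0.20),
    ("xlarge", 16, 32, 0.40),
]


def right_size(peak_vcpus, peak_mem_gib, headroom=0.2):
    """Return the cheapest type covering observed peak plus headroom.

    Sizing to measured peaks (plus a buffer, 20% assumed here) instead
    of guesses is what keeps instances from being chronically oversized.
    """
    need_cpu = peak_vcpus * (1 + headroom)
    need_mem = peak_mem_gib * (1 + headroom)
    fits = [t for t in INSTANCE_TYPES if t[1] >= need_cpu and t[2] >= need_mem]
    return min(fits, key=lambda t: t[3]) if fits else None


# A workload peaking at 3 vCPUs and 6 GiB needs 3.6 / 7.2 with headroom
choice = right_size(3.0, 6.0)
```

A commercial tool performs this comparison against the real catalog and your real telemetry, but the logic is the same: measure consumption first, then fit the instance to it.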

The most prominent mistake organizations make when planning their cloud spend is believing the hype. Public cloud can be an efficient option when workloads are transformed to take advantage of PaaS, microservices, and serverless computing. Conversely, lift and shift migrations often incur a significant cost penalty.

Effective, routine inventory management, combined with financial transparency for application owners and supported by practical tools, is the critical element for long-term success. DataBank recommends that data center leaders evaluate their options and create a hybrid-cloud strategy that repatriates workloads that run more efficiently on private infrastructure. Plan upfront, work with facts, don’t rush, and understand your workloads to ensure that your first bill doesn’t lead to sticker shock.

Vlad Friedman is the CTO at DataBank. 

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
