Gaining Control of Your Cloud Spend: A Data Center Decision-Maker’s Guide

Nov. 30, 2020
How can organizations gain control of their cloud spending? As data center managers review budgets for 2021, DataBank CTO Vlad Friedman looks at strategies and tools to optimize your cloud and data center spending.


For IT decision-makers, there’s a lot of hype around the theory that the public cloud is always the least-expensive option. It might not be. Corporate spend on public cloud infrastructure has risen 25 percent year over year — to nearly $17 billion in the second quarter of 2020, hitting an all-time high, according to Synergy Research Group.  While corporate spending may continue to increase in the public cloud space, companies are often surprised when receiving their invoices. Those surprises have many IT executives questioning whether they considered all of the alternatives when they executed their “cloud-first” strategy.

The net result is that many data center managers are now reviewing budgets for 2021 and are looking for ways to economize when it comes to cloud decision-making. With this in mind, how can organizations gain control of their cloud spend and incorporate a forward-looking strategy that considers the most relevant alternatives?

The answer depends on the design and usage patterns of your workloads.

For production workloads, using third-party cloud pricing and optimization tools, such as CloudHealth or CloudCheckr, is the first step. While many of the tasks these tools perform can be done manually, automating the remediation of the common mistakes that drive unnecessary costs is essential for long-term success:

  • Reservations – Steady-state workloads benefit from reservations; these tools can recommend and purchase reserved instances, driving savings of up to 50%.
  • Sizing – Ensure the instance types selected are appropriately sized for your application.
  • Orphaned Snapshots and Volumes – Deleting an instance doesn’t always delete the attached disks and snapshots. Clean up old storage you aren’t using.
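The break-even math behind the first two points can be sketched in a few lines. This is a minimal illustration, not a pricing calculator; the hourly rates below are placeholders, not actual provider pricing:

```python
def reservation_savings(on_demand_hourly, reserved_hourly,
                        utilization, hours_per_month=730):
    """Compare on-demand vs. reserved cost for one instance over a month.

    on_demand_hourly / reserved_hourly: placeholder $/hour rates.
    utilization: fraction of the month the instance actually runs (0.0-1.0).
    A reservation bills for every hour whether the instance is used or not.
    Returns the monthly savings; positive means the reservation wins.
    """
    on_demand_cost = on_demand_hourly * hours_per_month * utilization
    reserved_cost = reserved_hourly * hours_per_month  # billed regardless of use
    return on_demand_cost - reserved_cost

# A steady-state production server (100% utilization) at hypothetical rates:
print(reservation_savings(0.10, 0.06, 1.0))   # positive: reserve it
# A dev box running ~25% of the time:
print(reservation_savings(0.10, 0.06, 0.25))  # negative: stay on-demand
```

The same utilization threshold is why sizing matters: an oversized instance at low utilization pays the reserved rate for capacity it never uses.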

Secondly, forecasting costs for development workloads can be more difficult because servers are spun up and spun down – and often launched and forgotten. At DataBank, a strategy we implemented was simply tagging resources with a developer’s name and sending a copy of consumption to the entire team. That visibility alone brought enough awareness to drive 50% savings.
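The tagging strategy above amounts to a simple aggregation over billing records. The record format here is hypothetical; real providers expose comparable data through their billing exports, with resource tags carried alongside each line item:

```python
from collections import defaultdict

def cost_by_owner(billing_records):
    """Sum cost per 'owner' tag. Untagged resources are grouped under
    UNTAGGED so forgotten servers stand out in the team report."""
    totals = defaultdict(float)
    for record in billing_records:
        owner = record.get("tags", {}).get("owner", "UNTAGGED")
        totals[owner] += record["cost"]
    return dict(totals)

# Hypothetical monthly billing export:
records = [
    {"resource": "i-abc", "cost": 120.0, "tags": {"owner": "alice"}},
    {"resource": "i-def", "cost": 80.0,  "tags": {"owner": "bob"}},
    {"resource": "i-ghi", "cost": 45.0,  "tags": {}},  # launched and forgotten
]
print(cost_by_owner(records))
```

Emailing the resulting breakdown to the whole team is what creates the accountability; the aggregation itself is trivial.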

Lastly, while public clouds excel at running highly variable spin-up and spin-down workloads, latency-sensitive, compute-intensive, and IOPS-intensive applications often perform better and are more efficient to operate on private and colocation infrastructure. Interestingly, the same tools used to manage costs will often provide insight into which applications are best suited for private infrastructure.

Invest time to understand your consumption under load, your application’s sensitivity to latency, and the potential hidden costs of labor, per-transaction charges, and bandwidth when utilizing the public cloud. While unit costs may appear trivial, they add up quickly. If you don’t have access to commercial price and workload evaluation tools, consider using a free benchmarking tool to profile your applications over time and predict realistic public cloud costs.

Service providers like DataBank will often perform a free analysis of your workloads and design hybrid-cloud solutions, blending the benefits of each type of infrastructure into a singular, highly performant, and cost-effective solution.

How do you create a successful cloud spend strategy?

  1.  Start with a comprehensive inventory of your servers, applications, storage, and usage.
  2.  Identify opportunities to create shared services pools. For example, do you need a separate SQL server for every application? Create a single high-availability (HA) SQL cluster to service several applications, and apply this methodology to applications that drive considerable licensing or instance costs, like SQL Server and Oracle.
  3.  Align instance sizes with consumption.
  4.  Determine which applications can be shut down when not in use. Does your data center run large batch processes? Spin up the servers only when needed.
  5.  Analyze your potential bandwidth (egress) costs. While rarely factored into cost calculators, it’s often one of the highest costs. Consider moving bandwidth-intensive and latency-sensitive applications to traditional or even edge data centers.
  6.  Plan your long-term data retention and archival needs to align with your security and compliance strategy. As data grows over time, so do costs. Leverage “cheap and deep” storage for long-term archival. Charges in a hybrid data center can be significantly lower as you achieve scale.
  7.  Assume your costs will be 30-40% higher than you planned initially. Over time, it will become apparent which applications are best suited for the public cloud and which should be repatriated to private infrastructure.
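Steps 5 and 7 above can be combined into a rough budgeting sketch. The per-GB egress rate and the 35% buffer below are illustrative placeholders, not quoted pricing:

```python
def monthly_estimate(compute_cost, egress_gb,
                     egress_rate_per_gb=0.09, buffer=0.35):
    """Add the often-overlooked egress charge to the compute bill, then
    pad the total by a planning buffer (35% here, in the middle of the
    30-40% overrun range suggested above)."""
    egress_cost = egress_gb * egress_rate_per_gb
    return (compute_cost + egress_cost) * (1 + buffer)

# 5 TB of monthly egress on top of a $2,000 compute bill:
print(round(monthly_estimate(2000.0, 5000), 2))
```

Even at a modest per-GB rate, bandwidth can rival the compute line item for data-heavy applications, which is exactly why step 5 suggests moving those workloads closer to traditional or edge data centers.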

The most prominent mistake organizations make when planning their cloud spend is believing the hype. Public cloud can be an efficient option when workloads are transformed to take advantage of PaaS, microservices, and serverless computing. Conversely, lift and shift migrations often incur a significant cost penalty.

Effective and routine inventory management, applied on a schedule, with financial transparency for application owners and supported by practical tools, is the critical element for long-term success. DataBank recommends that data center leaders evaluate their options and create a hybrid-cloud strategy that repatriates workloads that run more efficiently on private infrastructure. Plan upfront, work with facts, don’t rush, and understand your workloads so that your first bill doesn’t lead to sticker shock.

Vlad Friedman is the CTO at DataBank. 

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
