Six Key Variables to Consider When Setting Up IT Workloads

June 2, 2016
In this week’s Voices of the Industry, Chris Sharp, CTO, Digital Realty, gives insights on a strategy for examining your various IT workloads and making sure that data is set up in the ideal environment.

CHRIS SHARP, CTO, Digital Realty

In today’s world of social, mobile, analytics, cloud and content, the data center is no longer just a white floor where organizations store their servers. The data center is now a hub for cloud and network connectivity, responsible for liberating information collected on servers to help an organization’s community – made up of customers, partners and employees – easily and quickly exchange information to drive revenue and growth.

However, not all IT workloads are created equal. To participate in today’s exchange economy, each IT workload needs to be carefully assessed and managed in the right environment. For instance, one workload may require high data transfer rates, while another may need to be specially secured due to sensitive data. Further, certain workloads – such as virtual desktop and hybrid storage – might benefit from sitting next door to major cloud service providers, such as Amazon Web Services (AWS) or IBM SoftLayer, for greater efficiency.

The first step IT teams must take when designing their systems is to sort out the workloads based on their requirements and rules. Below are six key variables every IT team needs to consider, along with strategic questions that need to be answered:

1. Performance: What is the required time to complete a unit of work (e.g., page load, transaction)? What does this mean in terms of location, latency and bandwidth?
2. Capacity: What resources (e.g., compute, memory, storage, network bandwidth) are needed to deliver a unit of work (e.g., transaction, session)?
3. Read vs. write: What is the proportion of data that is read from a data source compared with what is written? This has implications for storage design and capacity growth. It also affects where data needs to be located.
4. Security: How securely must information be stored, transmitted and used? Considerations include compliance, data sovereignty and encryption.
5. Variability: How constant or variable is the workload?
6. Reliability: What happens if the service is unavailable for a period of time? What does this imply for the corresponding workload? For example, how should an emergency response service be designed so that it is always reachable?
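As an illustration only (the names and thresholds below are assumptions, not from this article), the six variables could be captured as a simple per-workload checklist, here sketched as a hypothetical Python dataclass with one example placement rule:

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    """Hypothetical record capturing the six assessment variables for one IT workload."""
    name: str
    max_latency_ms: float     # 1. performance: required time per unit of work
    compute_cores: int        # 2. capacity: resources per unit of work
    storage_gb: int           # 2. capacity (storage)
    read_write_ratio: float   # 3. read vs. write: reads divided by writes
    data_sensitivity: str     # 4. security: e.g. "public", "regulated", "sovereign"
    load_variability: str     # 5. variability: "steady" or "bursty"
    max_downtime_min: int     # 6. reliability: tolerable unavailability

def needs_cloud_adjacency(w: WorkloadProfile) -> bool:
    """Illustrative rule: flag latency-sensitive, bursty workloads as
    candidates for colocation next to a public cloud on-ramp."""
    return w.max_latency_ms < 10 and w.load_variability == "bursty"

# Example: a virtual desktop workload, sensitive to latency and bursty by nature.
vdi = WorkloadProfile("virtual-desktop", 5.0, 16, 500, 4.0,
                      "regulated", "bursty", 30)
print(needs_cloud_adjacency(vdi))  # True
```

Sorting workloads into records like this makes the trade-offs explicit before any placement decision is made.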

To deliver the right workload in the right place at the right value, many businesses today are turning to hybrid cloud architectures. According to RightScale’s 2016 State of the Cloud Report, 82 percent of enterprises have a hybrid cloud strategy in place. Common deployment methods include:
• Distribute data across both private and public cloud storage, depending on its risk classification or its latency and bandwidth needs
• Federate private and public cloud storage, using public cloud storage for archive, back up, disaster recovery, or workflow sharing and distribution
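The first deployment method above – distributing data by risk classification or latency need – can be sketched as a minimal placement rule (the function name, categories, and the 10 ms threshold are assumptions for illustration):

```python
def choose_storage_tier(risk: str, latency_ms: float) -> str:
    """Illustrative placement rule: regulated or sovereignty-bound data and
    latency-critical data stay on private cloud storage; the rest can go public."""
    if risk in ("regulated", "sovereign"):
        return "private"
    if latency_ms < 10:
        return "private"
    return "public"

print(choose_storage_tier("public", 50.0))     # public
print(choose_storage_tier("sovereign", 50.0))  # private
```

In practice such rules would be driven by the workload assessment described earlier, not hard-coded thresholds.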

As a result, private cloud consumption and interconnection to public cloud providers are critical considerations when architecting an elastic, hybrid cloud. Moreover, customers want to pull together and utilize a combination of services: approximately 99 percent of all services deployed today are mashups or combinations of other services.

Additionally, given that workloads are becoming larger and more sensitive to latency, it’s important to be mindful of the interconnection capabilities an operator offers. Many hybrid cloud architectures today are being re-architected due to a lack of network connectivity between public and private clouds.

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
