Balancing the Benefits of Data Center, Cloud, and Colocation Solutions

Nov. 30, 2021
With increasing needs for high-performance computing across applications, hyperscale computing and retail colocation are beginning to merge. A new DCF special report courtesy of NTT explores how enterprises are building the most effective ecosystem for their needs by mixing and matching data center, cloud, and colocation.

Last week we continued our special report series on the hybrid cloud. This week, we’ll look at the benefits and limits of data center, cloud, and colocation solutions.

Get the full report

Enterprises have a range of options for their IT needs. Data center, cloud only, and colocation solutions all have significant benefits and limitations. One “size” doesn’t fit all for today’s needs and the rapidly changing requirements for tomorrow.

Data Center: The Anchor

Traditional data centers provide numerous benefits. They hold institutional knowledge of the enterprise's IT policies and philosophies in their personnel, documentation, and equipment, and they usually serve as the coordinating organization for all IT resources, including cloud and colocation.

  • Enterprise-owned space and equipment offer high levels of data control
    • Servers operated and maintained by enterprise-badged employees
    • Physical access controls for servers and other IT resources can be less intrusive than the multiple layers of physical security found at colocation facilities
  • Data centers may provide IT support to the enterprise
    • First-line user-facing help desk organizations
    • Second-tier support for more complex issues and problems

  • The data center environment is built for operations and stability
    • Not designed for wholesale experimentation, development, scaling/hyperscaling
    • Difficult to sandbox and deploy publicly facing applications within the security perimeter of enterprise IT operations
  • Expansion is very capital expensive
    • Difficult to be agile with new projects and facilities
    • Existing facilities can’t be stopped in mid-operation to be refitted for enhanced power and cooling to meet compute-intensive workload demands
  • Network connectivity is only as good as provisioned
    • Upstream broadband outages affect the data center and the entire enterprise, since branch office data traffic is routed through a central location for management and security

Cloud: Easy, Limits to Scale and Customization

The idea of purchasing computing as a service is not new; today's cloud offerings reduce the process to a simple e-commerce transaction.

  • Ease of purchase via web portal and credit card
  • Many services available through the cloud
    • Software, storage, backup, APIs incorporated into larger business applications, bare metal servers, virtual machines, container environments, and GPU services for machine learning and other specialized tasks
  • Enterprise developers can easily “sandbox” applications outside of the corporate security perimeter
    • Little/no risk to critical data
  • Applications and resources scalable to some extent
    • Scalability available without overbuilding existing data center infrastructure
  • Cloud services have single points of failure
    • At the point of the applications service provider
    • Upstream at the cloud provider
  • Third-party control and operation of the cloud can mean higher security risks
    • The applications service provider and the cloud provider are both attack surfaces
    • Sheer size makes clouds lucrative targets for bad actors
  • Cloud services do not offer bespoke customization needed for specific problems
    • Cloud optimized for volume delivery based on least cost rather than best service delivery
  • Clouds don’t economically scale for large problems
    • More expensive than a data center or colocation using an optimized solution with dedicated hardware

Colocation: Where Retail and Hyperscale Converge

With increasing needs for high-performance computing across applications, the worlds of hyperscale computing and retail colocation are beginning to merge. High-end colocation facilities enable enterprises to build and control bespoke facilities designed for their hyperscale needs: support for 5,000 or more servers under one roof, network connections at speeds of 40 Gbps and faster, and megawatt-class power for GPUs and other compute-dense, power-hungry configurations not designed for the typical retail data center scenario.

Highly tuned systems, such as optimized servers for e-commerce and other low-latency, high-demand workloads, GPUs for AI/ML, and data analytics applications working with large amounts of storage, are among the factors driving retail colocation into hyperscale territory as companies realize they need dedicated computing resources for hard problems beyond the economics of cloud.

Advantages that high-end colocation facilities bring to hyperscaling include:

  • The ability to build customizable hardware and network architecture to suit, taking advantage of in-place physical security, power, and HVAC cooling
  • Availability of megawatt-class power for compute-dense needs such as GPUs
  • Superior network connectivity, including
    • High-speed low latency broadband connections to carriers and major exchange points with multiple physical paths for redundancy
    • Access to dark fiber for direct connections between enterprise and colocation facilities
    • Meet-me-style neutral network exchanges enabling gigabit Ethernet connectivity to multiple carrier networks and hyperscale clouds as needed
  • Facilities can be physically located in specific regions or countries, ensuring data is not taken out of the country/region of origin due to regulatory requirements

Connectivity comes into play at two different levels for building these solutions. The availability of dark fiber to directly connect the enterprise to the colocation facility is likely to be a must in many cases for both security and low latency. If workloads need to be more widely accessible outside of the enterprise by customers and partners, network connectivity through meet-me network exchanges ensures low-latency access for users.

Download the full report, Hybrid Cloud, courtesy of NTT, to learn more about how workloads are continuing to shift between data center, cloud, and colocation. In our next article, we'll look at how workloads are driving current and future compute needs. Catch up on previous articles here and here.

About the Author

Doug Mohney

Doug Mohney has been working in and writing about the IT and satellite industries for over 20 years. His real-world experience includes stints at two start-ups: a commercial internet service provider that went public in 1997 for $150 million and a satellite internet broadband company that didn't. Follow Doug on Twitter at @DougonIPComm.
