The Journey to the Hybrid Cloud

Sept. 13, 2018
The data center is not dead. However, the data center has indeed changed as providers adapt to support the hybrid cloud. Bill Kleyman offers some best practices for hybrid cloud adoption.

The data center is not dead. However, the data center has indeed changed.

Over the course of my career, I’ve kept my head in the cloud, a lot. However, I’ve also kept a close eye on the data center, the evolution of that market, and what IT leaders have done to keep up with cloud adoption. I’m working on projects right now with massive, global organizations. I’ve seen big data center providers like Compass, Equinix, RagingWire, NTT, Digital Realty, CyrusOne and numerous others adopt strategies to support cloud ecosystems. Even private data center operators have adapted to the need to support cloud.

All of this has supported the rise of the hybrid cloud. Gartner predicts that by 2020, 90 percent of organizations will adopt hybrid infrastructure management capabilities. “As the demand for agility and flexibility grows, organizations will shift toward more industrialized, less-tailored options,” said DD Mishra, research director at Gartner. “Organizations that adopt hybrid infrastructure will optimize costs and increase efficiency. However, it increases the complexity of selecting the right toolset to deliver end-to-end services in a multi-sourced environment.”

IDC recently noted that services supporting public and hybrid cloud environments are hot market items. Spending on managed and professional services around cloud adoption is, collectively, the second-largest opportunity in the whole cloud market, accounting for 31 percent of all cloud-related spending between 2016 and 2021.

Data Services for Hybrid Clouds

So, what’s driving all of this? Data. Various industries are creating a lot of information and require better, more effective ways to manage the influx of data. In fact, IDC predicts that the data services for hybrid (DSH) cloud market will grow at a five-year compound annual growth rate of 20.5 percent. “There are three significant shifts happening in the DSH cloud market that are reflected in this forecast: first, the rapid growth of data location optimization services employing cognitive/machine learning; second, the growth of integration and orchestration software with ongoing shift to hybrid/multicloud; and third, growth in security and compliance data services with heightened emphasis on regulatory compliance and ongoing increase in security breaches,” said Ritu Jyoti, research director, Storage, at IDC.

This is why hybrid cloud design must be approached with caution. And yes, there may be some complexity that comes along with it. Keeping everything on-premises might seem easier right now, but it might actually be hurting your business. With that in mind, I want to share some thoughts and best practices around data center management when it comes to hybrid cloud and our evolving digital age.

Public cloud providers can work with your on-premises requirements
Your ability to interconnect is much more versatile than you might think. For example, AWS Direct Connect enables you to integrate with a variety of APN Technology and Consulting Partners, and there are plenty of great ones to select based on your requirements and your region. Examples include CoreSite, Equinix, Lightower, CenturyLink, CyrusOne, Datapipe, XO, Level 3, and many others. This is similar to Azure ExpressRoute, where you can work with a variety of partners including Aryaka, CenturyLink, CoreSite, Equinix, Level 3, NTT Communications, Telefonica, Telus, Verizon, and several others.

You can even get specific US Government Cloud partners through solutions from AT&T NetBond, Equinix, Megaport, and others. On that note, solutions like Megaport’s, for example, allow you to instantly create a virtual router for on-demand, private, Layer 3 connectivity to leading service providers across key global routing zones. This is a great option for creating dedicated multicloud connectivity to the top cloud service providers, including Google Cloud Platform, Oracle Cloud, AWS, Azure, and others.

You have great options to work with a variety of connectivity partners who can align specifically with your requirements.
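To make that concrete, here’s a minimal sketch, using boto3 (the AWS SDK for Python), of what attaching a private virtual interface to an existing Direct Connect link can look like. The connection ID, VLAN, ASN, and gateway ID are placeholders; in practice, the physical cross-connect is provisioned through one of the partners above first.

```python
# Minimal sketch: inspect an existing Direct Connect link and attach a
# private virtual interface so on-premises routes can reach a VPC.
# All IDs, the VLAN, and the ASN below are placeholders.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# List the physical cross-connects already provisioned through a partner.
for conn in dx.describe_connections()["connections"]:
    print(conn["connectionId"], conn["connectionName"], conn["connectionState"])

# Attach a private virtual interface to one of those connections.
response = dx.create_private_virtual_interface(
    connectionId="dxcon-EXAMPLE",           # placeholder connection ID
    newPrivateVirtualInterface={
        "virtualInterfaceName": "onprem-to-vpc",
        "vlan": 101,                         # VLAN agreed upon with the partner
        "asn": 65000,                        # your on-premises BGP ASN
        "virtualGatewayId": "vgw-EXAMPLE",   # placeholder virtual private gateway
    },
)
print(response["virtualInterfaceState"])
```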

Right now, I’m working on a really interesting AWS design that requires Direct Connect integration with Apache Kafka on AWS. In this solution, keeping data local and fluid across AWS Availability Zones and Regions is absolutely critical. We allow for data locality while working with cloud systems for processing and distribution. Furthermore, leveraging AWS helps create even more resiliency and availability for these critical systems.
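I can’t share that project’s code, but here’s a hypothetical sketch of the data-locality idea using the confluent-kafka Python client. Setting client.rack to the consumer’s Availability Zone ID lets brokers running Kafka 2.4+ with follower fetching enabled serve reads from a replica in the same AZ; the broker addresses, topic, and AZ ID are invented.

```python
# Hypothetical sketch of AZ-local consumption. With brokers configured for
# follower fetching (replica.selector.class = RackAwareReplicaSelector,
# Kafka 2.4+), client.rack steers reads to a replica in this consumer's AZ.
# Broker addresses, the topic, and the AZ ID are placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker1.internal:9092,broker2.internal:9092",
    "group.id": "local-processing",
    "client.rack": "use1-az1",           # this consumer's AWS AZ ID
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])           # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())
        # Process locally; the fetch stayed within this consumer's AZ.
        print(msg.topic(), msg.partition(), msg.offset())
finally:
    consumer.close()
```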

Regardless of the industry, data can be supported in a hybrid ecosystem

This part is really important to understand. You don’t have to migrate everything into the cloud. A good design will take legacy applications, services, and even some data points and allow them to live in the cloud. Your on-premises platform can then be leveraged for other services, including virtualization, in-house processing, and even data collection. Data access can be truly transparent when you design a hybrid cloud platform.

That’s the beauty of today’s hybrid cloud designs. You have a lot of flexibility in how you architect around your data requirements. From there, cloud providers are ready to offer you options around your own specific data needs, whether those involve compliance or latency.

Still want to keep your data onsite? Think about the pros and the cons.

OK, maybe you’re still not convinced. Maybe you still feel that keeping your data under the data center mattress makes sense. I’ve been in these situations and have worked with organizations that are truly reluctant to give their data up to the cloud. It usually all boils down to control: “who owns the keys to my kingdom?” There’s nothing inherently wrong with keeping data local. However, it can slow down your business, your pace of innovation, and how you go to market.

I want to give you a very specific example. We were working with a global organization that absolutely wanted to keep its data onsite. They went as far as creating their own giant data repositories, data structures, and storage mechanisms. Then came the need for big data processing and data analytics. Still, no cloud. They designed a Hadoop engine with HDFS, developed their own big data processing strategy, and did the processing onsite. From there, they leveraged Apache Hive to “process in stage” and publish the data. Did it work? Absolutely. Was it complicated and really hard to manage after a little while? Very much so.
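Their code isn’t something I can share, but the “process in stage, then publish” Hive pattern generally looks something like the sketch below, here via PyHive. Every host, database, table, and column is invented for illustration.

```python
# Invented illustration of a "process in stage, then publish" Hive pipeline.
# The host, database, tables, and columns are placeholders, not the
# organization's real design. Requires the PyHive package.
from pyhive import hive

conn = hive.Connection(host="hive-gateway.internal", port=10000,
                       database="analytics")
cur = conn.cursor()

# Stage: aggregate raw HDFS-backed events into a staging table.
cur.execute("""
    INSERT OVERWRITE TABLE staging_daily_metrics
    SELECT event_date, region, COUNT(*) AS events, SUM(amount) AS revenue
    FROM raw_events
    WHERE event_date = '2018-09-01'
    GROUP BY event_date, region
""")

# Publish: swap the staged rows into the partition the BI layer reads.
cur.execute("""
    INSERT OVERWRITE TABLE published_metrics PARTITION (event_date='2018-09-01')
    SELECT region, events, revenue
    FROM staging_daily_metrics
""")
conn.close()
```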

Their goal was to enable business intelligence (BI), but they wanted to do so on-premises. That worked until there was far too much code and data to process. What happened eventually? We helped them move to an Azure PaaS. There, we developed an Azure Data Lake Store, leveraged advanced Data Lake Analytics, deployed an HDInsight Spark cluster, worked with Azure SQL, and integrated Power BI. The final result? A hybrid ecosystem that fed data into a cloud platform able to expand and grow with the business. No longer bogged down by data complexity, massive amounts of customized code, and demanding data processing, the organization immediately saw the benefits of hybrid cloud while still leveraging some on-premises resources.
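As one small, hedged illustration of the first step in that kind of pipeline, landing an on-premises extract in Azure Data Lake Store with that era’s azure-datalake-store (Gen1) SDK looked roughly like this; the tenant, service principal, store name, and paths are placeholders, not the client’s actual environment.

```python
# Rough sketch: land an on-premises extract in Azure Data Lake Store using
# the azure-datalake-store (Gen1) SDK. Tenant, service principal, store
# name, and paths are placeholders.
from azure.datalake.store import core, lib, multithread

token = lib.auth(tenant_id="TENANT-ID",        # placeholder AAD tenant
                 client_id="CLIENT-ID",        # placeholder service principal
                 client_secret="CLIENT-SECRET")

adl = core.AzureDLFileSystem(token, store_name="exampledatalake")

# Upload the locally collected extract; Data Lake Analytics and the
# HDInsight Spark cluster then process this cloud copy for Power BI.
multithread.ADLUploader(adl,
                        lpath="daily_extract.csv",
                        rpath="/raw/2018/09/daily_extract.csv",
                        overwrite=True)

print(adl.ls("/raw/2018/09"))  # confirm the file landed
```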

Migrating core services and custom applications will be the hardest part.

Building on the previous point, migrating customized services and applications will honestly be the hardest part. This is where you’ll need to focus on the six R’s of cloud migration: retain, retire, rehost, re-platform, refactor, or repurchase. It’s the refactoring that can be the hardest. In those situations, you might have an app, data source, or some kind of service that, to some extent, will require major or minor changes to its design and code.

You’re basically architecting a cloud-native solution. I can’t stress enough the importance of this step. When you refactor an application or service, you’re redesigning it for better agility, productivity, and speed. Yes, this can be a painful (and sometimes expensive) process. But it needs to happen for you to be able to support the cloud and a quickly evolving market.
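As a toy example of what a minor refactor can look like, imagine a report writer that assumed a local disk being reworked to target object storage so any instance can run it. The bucket and key names here are invented.

```python
# Toy before/after of a minor cloud refactor: a report writer that assumed
# a server's local disk is reworked to write to S3, making it stateless.
# The bucket and key names are invented for illustration.
import json
import boto3

def save_report_onprem(report: dict, path: str) -> None:
    # Before: tied to one server's filesystem; hard to scale out.
    with open(path, "w") as f:
        json.dump(report, f)

def save_report_cloud(report: dict, bucket: str, key: str) -> None:
    # After: stateless write to object storage; any instance can run this.
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key,
                  Body=json.dumps(report).encode("utf-8"),
                  ContentType="application/json")

save_report_cloud({"orders": 1842, "region": "emea"},
                  bucket="example-reports", key="2018/09/daily.json")
```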

Don’t give up if a hybrid cloud seems challenging – leverage good partners!

Navigating the sea of data and the world of cloud can seem daunting. But it doesn’t have to be. On a weekly basis, I have architectural conversations with global organizations embarking on a cloud journey. You’re not alone in this boat, and every single call that I have begins with the same simple questions and exploration topics: we have to understand your architecture, see where your business is going, and design an architecture that supports your future state. This is where good, cloud-agnostic partners can really help design the right type of architecture for you. Remember, this isn’t only a technology conversation; it absolutely must involve the business as well!

The journey to a hybrid cloud ecosystem is exactly that – a journey. I’ve had the chance to work with truly complex designs that required connections with on-premises resources while still scaling into the cloud. The cool part is that in almost all of those cases, we were able to make hybrid cloud a real possibility. If you’re a large organization, my biggest recommendation is to test and validate your cloud platform. This might mean running a six-month proof of concept to ensure that you can support your environment and that it performs to your needs. The beauty here is that you can tweak your trial to really make sure you understand the architecture and that it can scale for your business.

My final recommendation is to never forget your ‘future-state.’ Be sure to design your data center and cloud platform at least a few years out to help you grow and evolve as needed.

About the Author

Bill Kleyman

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. Bill is currently a freelance analyst, speaker, and author for some of our industry's leading publications.
