Executive Insights: James Leach
The Data Center Frontier Executive Roundtable features three executives with lengthy experience in the data center industry. Here’s a look at the insights from James Leach of RagingWire Data Centers.
James Leach, RagingWire Data Centers
James Leach is the Vice President of Marketing at RagingWire Data Centers. As a marketing executive, sales leader, and systems engineer, he has enjoyed a 30-year career building technology and services businesses for commercial and government organizations. For the last 15 years, Mr. Leach has been at the forefront of developing innovative internet services for enterprises. He was part of the core team that introduced ultra-high availability data center colocation, second-generation cloud computing solutions, virtual private networks (VPNs) and route optimization, application hosting, content delivery networks (CDNs), internet registry and DNS (domain name system) services, and web performance monitoring and testing.
Here’s the full text of Jim Leach’s insights from our Executive Roundtable:
The Key Trends for 2016
Data Center Frontier: What will be the key trends that will shape the data center industry in 2016?
James Leach: There are a number of key trends:
Internet and Enterprise Applications – The Double-Edged Sword: We are seeing the convergence of two forces – internet apps and enterprise applications – into a double-edged sword that will drive the data center industry in 2016. Anyone with a smartphone probably uses internet apps every day. These apps are personalized, user-driven, and often have a useful life measured in days or months.
Enterprise applications are the foundation of the Fortune 1000. These applications and systems are corporate, process-oriented, and have a lifespan of years or decades.
The data center industry in 2016 needs to support both internet apps and enterprise applications. The result will be data centers that deliver scalable power and cooling, flexible space configurations, sophisticated telecommunications and multi-cloud connectivity, and geographic diversity.
The business case for colo is getting stronger. There has never been a better time to be a buyer or supplier of data center colocation.
Colocation buyers have access to a wide range of products from a critical mass of established suppliers, which promotes fair pricing and superior service. The key is to understand your requirements and timelines so that you have the data center capacity you need, when you need it. Customers are incentivized to consider colocation as an alternative to traditional data centers because they can get a better product at a lower price than they could build themselves.
For colocation suppliers, capital is relatively accessible to established providers at fairly attractive rates. This capital supports business growth and creates a barrier to entry for new competitors. In addition, the colocation product is maturing which leads to predictable returns on investment. The key for colo suppliers is to match supply with demand. The strategic question is whether to grow your colo business vertically by providing managed services and cloud, or horizontally by opening new locations.
Some analysts will say that cloud computing is the biggest risk to the colocation market. In fact, the opposite is true. Cloud computing providers are some of the biggest and best customers for colocation companies. Also, cloud computing is a great incubator for future colo customers. You start your business in the cloud and grow your business in a data center. Lastly, cloud computing has become a value-added feature to colo. Many colo providers are now delivering direct connections to the top cloud services such as Amazon Web Services (AWS), Microsoft Azure, Google, and IBM Softlayer, as well as the many specialized or verticalized cloud providers such as DreamHost, Dimension Data, Joyent, ProfitBricks, Datapipe, and Virtustream.
Colo Telecommunications Matters: In “Old Colo” the number one telecommunications debate was about carrier neutrality. Most colo customers preferred a carrier neutral environment with local access to dozens of telco providers. “New Colo” retains carrier neutrality and adds multi-cloud access and fiber connectivity. Multi-cloud access from your colo provider is a catalyst to deploying a hybrid strategy that leverages the best of private and public clouds. Fiber connectivity between data centers enables workload balancing, disaster recovery, and application performance optimization.
You’ll hear less about DCIM… The 15 minutes of fame for DCIM are winding down. In 2016, DCIM will become table stakes for colocation providers. DCIM will evolve from collecting and analyzing data to using that data to predict performance and prescribe operations. The result is that colo providers will use DCIM to run their data centers at the highest levels of efficiency. Colo customers will plug into the colo DCIM data stream using APIs so they can optimize their individual power usage and gain an end-to-end view of their computing environment (a rough sketch of that kind of API integration follows this list of trends).
You’ll hear more about IoT (Internet of Things)… The Internet of Things is happening now. The volume, velocity, and variety of data are increasing dramatically. Yahoo and Google were founded in the mid-1990s to organize the world’s information and make it accessible. Facebook, Twitter and the iPhone were launched in the mid-2000s, bringing about mobility and social media. The next wave of innovation will be in data, drawing new insights from the Internet of everything.
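Returning to the DCIM point above, here is a minimal sketch of what plugging into a colo provider’s DCIM data stream over an API might look like. The endpoint URL, JSON field names, and alert threshold are all hypothetical assumptions for illustration, not any specific provider’s API.

```python
# Hypothetical sketch: pull power readings from a colo provider's DCIM API
# and flag circuits nearing their provisioned capacity. The endpoint and
# JSON fields below are illustrative assumptions, not a real provider API.
import json
import urllib.request

DCIM_API = "https://dcim.example-colo.com/api/v1/circuits"  # hypothetical endpoint
UTILIZATION_ALERT = 0.80  # flag circuits above 80% of provisioned capacity

def fetch_circuits(url: str) -> list:
    """Return the list of circuit readings from the (hypothetical) DCIM API."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def flag_hot_circuits(circuits: list) -> list:
    """Return (circuit_id, utilization) pairs that exceed the alert threshold."""
    hot = []
    for c in circuits:
        utilization = c["measured_kw"] / c["provisioned_kw"]
        if utilization >= UTILIZATION_ALERT:
            hot.append((c["circuit_id"], utilization))
    return hot

if __name__ == "__main__":
    readings = fetch_circuits(DCIM_API)
    for circuit_id, utilization in flag_hot_circuits(readings):
        print(f"Circuit {circuit_id} at {utilization:.0%} of provisioned power")
```

The same data stream could feed a customer’s own monitoring dashboard, which is the end-to-end view described above.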
Data Center Frontier: Hyperscale single-company data centers have traditionally been the leaders in energy efficiency. How would you assess the progress of multi-tenant providers in improving PUEs and efficiency?
James Leach: Comparing hyperscale single-company data centers and multi-tenant providers in terms of energy efficiency is like comparing a bus to a fleet of mini-vans. When fully loaded, the bus will be more efficient, but the mini-vans will provide more flexibility. Both approaches are far superior to everyone driving themselves.
The reality is both hyperscale single-company data centers and multi-tenant data centers are making great advances in solving the energy efficiency equation – the variables are the same, only the constants are different. Hyperscale data centers can drive greater efficiency than multi-tenant data centers because they can require higher levels of standardization. Multi-tenant data centers have to support a more diverse user base, so mass standardization is not possible.
The good news is that both hyperscale and multi-tenant data centers are maximizing the energy efficiency of their environments. Both approaches are vastly superior to running your computers in a general purpose office space.
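For context on the PUE metric raised in the question: PUE (power usage effectiveness) is total facility energy divided by IT equipment energy, so a value of 1.0 would mean zero overhead. The short sketch below works that arithmetic with made-up load figures to illustrate the bus-versus-mini-vans point; the numbers are illustrative assumptions, not measurements from any facility.

```python
# PUE = total facility energy / IT equipment energy (a ratio; 1.0 is ideal).
# The load figures below are illustrative assumptions, not measured data.

def pue(it_load_kw: float, overhead_kw: float) -> float:
    """Power usage effectiveness for a given IT load and facility overhead."""
    return (it_load_kw + overhead_kw) / it_load_kw

# A fully loaded, highly standardized facility ("the bus"):
hyperscale = pue(it_load_kw=10_000, overhead_kw=1_500)   # ~1.15

# A partially loaded, mixed-density multi-tenant hall ("the mini-vans"):
multi_tenant = pue(it_load_kw=4_000, overhead_kw=1_400)  # ~1.35

# A general purpose office or server room, for comparison:
office_space = pue(it_load_kw=200, overhead_kw=250)      # ~2.25

print(f"Hyperscale PUE:   {hyperscale:.2f}")
print(f"Multi-tenant PUE: {multi_tenant:.2f}")
print(f"Office space PUE: {office_space:.2f}")
```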
A RagingWire Data Centers facility in Ashburn, Va. (Image: RagingWire)
Data Center Frontier: Analysts say the boundaries between colocation and wholesale data center offerings are blurring. Is this trend real, and if so, is it likely to continue? How are customers choosing between the different data center procurement models?
James Leach: We are entering the era of “Big Colo” where both data center providers and buyers will be the winners.
For the data center provider, the primary drivers of Big Colo are economies of scale. For example, large generators and UPSs can support multiple megawatts of load, and the incremental cost goes down as capacity increases. From a financial perspective, the capital expenses required to build a data center are significant – often exceeding $100 million. Data center providers can spread these costs over time using phased build designs and long-term depreciation schedules. The result is that Big Colo providers can offer a better product at a lower price (a back-of-the-envelope illustration of the phased-build math follows below).
For data center buyers, Big Colo can deliver scalable power from a few hundred kW to multi-megawatts, sophisticated configurations of dedicated and shared infrastructure, and flexible deployments from racks to cages to suites. Big Colo sites become a hub location for telecommunications and cloud providers offering customers an integrated platform for enterprise systems and internet apps. These Big Colo locations also become the job sites for a broad set of data center services providers delivering on-site moves, adds, changes, repairs and maintenance. The result is data center buyers can lease a portion of a superior facility, paying only for what they use and avoiding the large up-front capital expense to build the facility and operational expense to run the data center.
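To make the provider-side economies of scale concrete, here is a back-of-the-envelope sketch of how average capital cost per megawatt can fall as a phased build fills out. All dollar figures and phase sizes are hypothetical assumptions for illustration, not RagingWire’s actual economics.

```python
# Back-of-the-envelope phased-build model. All figures are hypothetical
# assumptions for illustration; they are not any provider's actual costs.

shared_infrastructure = 40_000_000   # land, shell, substation, etc. built up front
cost_per_phase = 20_000_000          # generators, UPS, cooling added per phase
mw_per_phase = 4                     # critical IT capacity added per phase

def cost_per_mw(phases_built: int) -> float:
    """Average capital cost per MW of critical load after a number of phases."""
    total_cost = shared_infrastructure + cost_per_phase * phases_built
    total_mw = mw_per_phase * phases_built
    return total_cost / total_mw

for phases in (1, 2, 4):
    print(f"{phases} phase(s): ${cost_per_mw(phases):,.0f} per MW")
# 1 phase(s): $15,000,000 per MW
# 2 phase(s): $10,000,000 per MW
# 4 phase(s): $7,500,000 per MW
```

As the shared infrastructure is spread across more built-out megawatts, the average cost per megawatt drops, which is the pricing advantage described above.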
Data Center Frontier: Airflow containment is an established strategy for optimizing data center cooling. How would you assess the adoption of containment? Does it differ between the hyperscale, enterprise and multi-tenant markets?
James Leach: We are seeing a shift in the data center cooling conversation from economization to containment.
Over the last five years, the cooling conversation was between data center providers and data center technology suppliers, who worked to improve data center performance by designing and implementing sophisticated economization systems that take advantage of local weather conditions. The most common example is using free air cooling when the outside temperature and humidity allow.
Today, the cooling conversation is between data center providers and data center buyers, who design and implement targeted containment systems to meet the unique requirements of the customer. This discussion is typically based on computational fluid dynamics (CFD) analysis to understand airflows within the data center facility and the customer’s computing environment. Currently, the most common approaches are cold-aisle containment for newer data centers with high ceilings and good airflow, and hot-aisle containment (chimneys) for older data centers with lower ceilings that need to force the warm air out of the building.
The difference between hyperscale, enterprise, and multi-tenant containment adoption is largely driven by the design of the data center and the nature of the applications.
Hyperscale data centers tend to be optimized for well-defined, consistent systems configurations where the targeted cooling systems are built-in. In these environments, we are seeing the emergence of liquid cooling, in-chassis cooling, in-rack cooling, and rear-door heat exchangers.
Enterprise data centers typically must support legacy systems such as mini-computers and mainframes that have specialized cooling requirements. These facilities are often older (greater than 10 years) and may not have been maintained or upgraded over time, so they are challenged to support higher-density deployments. We tend to see hot-aisle containment systems in these environments as a “bolt-on” to legacy systems.
Multi-tenant colocation data centers are typically newer (less than 10 years old) and have been upgraded over time to support higher density deployments. Colo providers should work closely with their customers to support containment systems tailored to their unique environments.