Executive Insights: Chris Crosby

The Data Center Frontier Executive Roundtable features insights from four industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Chris Crosby of Compass Datacenters.

Chris Crosby, CEO, Compass Datacenters

Chris Crosby is a recognized visionary and leader in the data center space and the founder and CEO of Compass Datacenters. Chris has over 20 years of technology experience and over 15 years of real estate and investment experience. Previously, Chris served as a Senior Vice President and founding member of Digital Realty Trust. During his tenure at Digital Realty, he held senior roles with global responsibilities for sales, marketing, design, construction, technical operations and customer service, as well as establishing the company’s operating presence in Asia and Europe. Prior to the initial public offering of Digital Realty, Chris was founder and managing director of Proferian, an operating partner for the GI Partners portfolio, which was rolled into the IPO for Digital Realty Trust. Prior to Proferian, Chris served as a consultant for CRG West, now CoreSite. Crosby received a B.S. degree in Computer Sciences from the University of Texas at Austin.

Here’s the full text of Chris Crosby’s insights from our Executive Roundtable:

The Impact of Cloud Computing

Data Center Frontier: How is the rise of cloud computing impacting the data center market? How do you see this trend playing out between major public clouds, service providers and in-house corporate data centers?

Chris Crosby: Certainly the cloud has had an impact on the dynamics of the data center market from the standpoint of offering end users another alternative to choose from, but I think its impact tends to be somewhat overstated in terms of taking over the enterprise.

The common misconception is that the cloud obviates the need for data centers, or at least reduces the total number of facilities required. I think this is a fallacious assumption, since the cloud is really just an amalgamation of components that are housed within multiple data centers. What the cloud has done is provide new revenue streams for service providers (Equinix, for example), since it enables them to offer end users a direct connection to cloud-based applications.

Additionally, it has provided the enterprise with a new paradigm for looking at how to control IT costs more effectively. However, the single biggest impact of the cloud has been the elimination of the data center closet for the small business. A small business owner no longer has to have an email server and storage device. The ability to buy applications as needed has led to a big shift in thinking about whether or not small business IT closets are needed. Of course, this trend has been going on for a long time (Rackspace was founded in 1998).

While this has certainly had an impact on the smaller-company end of the market, there is a behavioral component characterizing the vast majority of companies that most prognosticators seem to miss: their underlying need for control. In an Atlantic Monthly article, Frank Quattrone once talked about the concept of “career risk” when referring to financial bubbles. People do risky deals in financial bubbles because NOT doing them carries more career risk when all your competitors are doing them. There is still a lot of perceived “career risk” for a large company in going to the cloud wholesale. Theoretical IT cost savings and future-proofing are great, until you start thinking about the risk-reward profile. In the world of enterprise IT, the reward is disproportionately less than the risk. As I like to say, “IT and data centers rarely get kudos in a big company, but they can get you ‘canned.’”

The other aspect of the marketplace that I think some cloud proponents are missing is that the nature of the applications to be supported is changing, and the network today cannot support what is coming. The convergence of the small-packet data volumes found with the Internet of Things and the large, rich packets associated with things like video, combined with the need for the lowest possible levels of latency, is driving the need to hold and process data as close to the end user as possible.

Cloud network structures aren’t optimized for these requirements at this point in time, thus there is going to be an increasing need for data centers at the edge (1-3 MW) and micro data centers (<250 kW) serving end users on a local basis. These data centers are needed to store common data as well as process what data needs to go back to the cloud. Without this capability, the network fails. There is simply NOT enough bandwidth to send everything to the cloud. I believe that the end result of all these developments will be healthy rates of growth for both cloud and data center providers.
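To make the bandwidth argument concrete, here is a back-of-envelope sketch in Python. Every figure in it (user counts, device counts, bitrates) is a hypothetical round number chosen for illustration, not data from Compass or the roundtable:

```python
# Back-of-envelope illustration of the "not enough bandwidth" argument.
# All figures below are hypothetical assumptions, not Compass data.

STREAM_MBPS = 25          # assumed bitrate of one 4K video stream
IOT_DEVICE_KBPS = 50      # assumed average telemetry rate per IoT device

users = 100_000           # hypothetical metro-area video viewers
iot_devices = 1_000_000   # hypothetical connected devices in the same area

video_gbps = users * STREAM_MBPS / 1_000              # Mbps -> Gbps
iot_gbps = iot_devices * IOT_DEVICE_KBPS / 1_000_000  # kbps -> Gbps

print(f"Video backhaul with no local caching:  {video_gbps:,.0f} Gbps")
print(f"IoT backhaul with no local processing: {iot_gbps:,.0f} Gbps")
# An edge facility that caches popular content and aggregates telemetry
# sends only cache misses and summaries upstream, cutting these totals
# by orders of magnitude.
```

Even with modest assumptions, serving everything from distant cloud regions implies terabit-scale backhaul per metro area, which is the gap that edge and micro data centers are meant to close.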

The Evolution of Regional Markets

Data Center Frontier: In recent years the data center business has seen solid growth in geographic markets outside the traditional major data hubs. What are the most promising markets, and what trends will guide where we see growth in regional markets?

Chris Crosby: This is an interesting question, since I think that the stratified architectures we discussed in the first question are going to change the way we look at the marketplace in the near future. By this I mean that how we define a Tier 2, 3 or 4 market will begin to fade away in the next 3 to 5 years. Currently, we still look at markets through the old lens, the “build it and they will come” mindset that leads us to look at a market like Phoenix and say “boy, they sure are building a lot of data centers there, so that must be a Tier 1 market.”

I think that in the very near future, end users will decide where they want their data center(s) located based on the location of their customers and the latency needs of the applications they need to support. If that means that the facility has to be located in Salt Lake City or Cheyenne, Wyoming, that’s where they are going to want it. In other words, the edge will be wherever they need it to be, and not in some place like Boston just because some provider has a 50,000 square foot facility there. Although this won’t stop some industry analysts and pundits from continuing to try to put markets into specific “boxes,” these artificial distinctions will mean less and less to actual end users. That said, of course, most service provider business models work only at scale, meaning putting a megawatt in Des Moines doesn’t make economic sense.

Our first focus has been on the customer, and that focus has driven our market selection. With our ability to go anywhere, we go where the customer’s latency and convenience requirements dictate. That’s driven our market approach to date, and I don’t see that changing.

An aerial view of a Compass Datacenters facility. (Image: Compass Datacenters)

The Role of Pre-Fabricated Designs

Data Center Frontier: Factory-built components are playing a larger role in data center deployment. What’s your take on the impact of pre-fabrication in data center construction, and its role in the future of the industry?

Chris Crosby: This is a good question that is muddied by the lack of clarity in definition. At Compass, for example, components like our PowerCenters are built off-site and shipped to the location for installation. The walls of our facilities are pre-fabricated, structural pre-cast and then erected on site. If we are defining “pre-fabricated” from this perspective, I would say that it’s making a very positive impact on the industry and will continue to become more pervasive in the future.

If we were defining “pre-fabricated” as shipping container-like structures that are built in “factories” and then shipped to site, then my answer would be totally different. These pre-fabricated modular (PFM) units are niche plays that make a lot of sense in the right application environments. In and of themselves, however, they are simply not capable of providing the mission-critical level of application support that enterprises require. For example, no one would put a $10 million IBM mainframe into a container in the middle of the parking lot. First off, the mainframe’s footprint doesn’t work. Secondly, I would not want to be the guy who has to explain to my executive team why this was a good idea when something goes wrong.

Obviously, there are hardened PFMs, but those are for the right applications. Harsh, third-world environments tend to be best suited to these approaches.

Most PFMs have to be housed somewhere, thereby requiring the end user to either build a hardened shell or have a suitable building available. Due to the nature of their construction (a 12-foot by 40-foot footprint, for example), they don’t lend themselves to delivering a flexible environment that can easily be adapted to changing requirements, since “floor space” must be configured in increments of typically less than 1,000 square feet and all load groups must be homogeneous. This doesn’t mean that they aren’t well suited for very defined applications, but I don’t see them ever becoming a major element of the data center marketplace.

If you don’t believe me, look at Google and Microsoft. Google no longer uses containers (and hasn’t since the mid-to-late 2000s), and Microsoft just replaces a whole container when a certain number of its servers go bad. It’s hard to believe that those guys haven’t figured out that Move-Add-Change in metal boxes is really inefficient.

The Innovation Landscape in Cooling

Data Center Frontier: Cooling has been a key focus of energy efficiency efforts in the data center. Is there still opportunity for innovation in cooling? If so, what might that mean for how data centers are designed and where they are located?

Chris Crosby: I think cooling is always going to be a consideration for data centers. Whether racks get denser or average per-rack usage never gets above 6-8 kW, end users are always going to be looking for a better way (both technically and economically) to provide an optimal data floor environment.

One important point to note in this discussion is that when people hear or see the term “innovation” they automatically think of some physical technological enhancement. This thought process overlooks the improvements that come from the redefining of standards. For example, more and more end users that I speak with are operating their facilities at the higher recommended and allowable thresholds defined by the ASHRAE TC9.9 committee. [Quick aside: TC9.9 is a great example of how industry can come together to improve things dramatically without huge cost or regulation; unlike its bigger brother 90.1, TC9.9 cannot produce a standard, just recommendations to industry.]

Not too many years ago, the idea of running your environment at any temperature above “meat locker” would have been considered heretical, but the combination of empirical evidence and corporate budget pressures continues to lead large numbers of data center operators to embrace a new “religious” perspective. The bottom line here is that I don’t think we can overlook the impact of non-technological innovation in changing the nature of data center cooling.
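For readers unfamiliar with what “running above meat locker” means in practice, here is a minimal Python sketch (not an official ASHRAE tool) that classifies a server inlet temperature against the dry-bulb envelopes published in the 2011 ASHRAE TC9.9 thermal guidelines; the class names and bounds are from that guideline, while the function itself is purely illustrative:

```python
# Minimal illustrative sketch: where does a dry-bulb inlet temperature
# fall relative to the ASHRAE TC9.9 (2011) thermal envelopes?
# Bounds are the published ranges in degrees Celsius.

RECOMMENDED = (18.0, 27.0)   # recommended range, all classes
ALLOWABLE_A1 = (15.0, 32.0)  # allowable range, Class A1
ALLOWABLE_A2 = (10.0, 35.0)  # allowable range, Class A2

def classify_inlet(temp_c: float) -> str:
    """Return which TC9.9 envelope a dry-bulb inlet temperature falls in."""
    if RECOMMENDED[0] <= temp_c <= RECOMMENDED[1]:
        return "within recommended"
    if ALLOWABLE_A1[0] <= temp_c <= ALLOWABLE_A1[1]:
        return "within allowable (A1)"
    if ALLOWABLE_A2[0] <= temp_c <= ALLOWABLE_A2[1]:
        return "within allowable (A2)"
    return "out of envelope"

print(classify_inlet(21.0))  # "within recommended" -- the old "meat locker"
print(classify_inlet(30.5))  # "within allowable (A1)" -- warmer, cheaper setpoint
```

Operating toward the upper end of these envelopes is exactly the non-technological innovation described above: the hardware and guidance already permit it, and the savings come from a changed operating standard rather than new equipment.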

Of course, we should also expect to see more innovation on the physical side of the cooling issue. Right now, for example, the impact of drought in a number of geographic areas is forcing end users to take a hard look at their water-based systems. This is not a trivial consideration when you consider that the volume of water required to operate these facilities can run into the millions of gallons. This turn of events has people looking more closely at direct expansion functionality, or at least some type of hybrid offering that enables airside capability to augment water-based installations. I think this is a positive development, as water has been treated as a cheap, almost free resource within the data center space for too long. Combined with the drumbeat of regulatory restrictions that has started to rear its head, I do believe you will see increased emphasis on both standards and technological innovation in the coming years.