Travels in Northern Virginia last November, coinciding with the region's annual DCD Connect conference, brought Data Center Frontier to Vantage Data Centers' VA11, located at 45200 Vantage Data Plaza in Sterling, Va. I'm greeted there by Vantage Data Centers' Steve Conner, Chief Technology Officer, North America, and Mark Freeman, Vice President, Marketing.
As recorded by datacenterHawk, VA11 is a purpose-built facility operated by Vantage Data Centers on the company's 52-acre Ashburn (VA1) Data Center Campus, currently sized at 720,000 SF, including non-critical space.
Vantage notes that its VA1 Campus generally features flexible 3 MW and 4 MW white space modules to meet customers' IT requirements.
This April, Vantage Data Centers announced that it secured a $3 billion Green Loan to fuel its North America data center platform expansion.
Vantage said the financing, as guided by the company's Green Finance Framework and led by Wells Fargo, will fund at least eight new and existing data center sites, totaling nearly 1.4 GW of IT capacity.
The new developments will include Vantage Data Centers' third campus in Northern Virginia, VA3, which will provide 288 MW of power capacity across 2.3 million square feet, once fully developed.
The new financing was Vantage’s fifth green loan.
So far this year, Vantage has announced a total of nearly $10 billion in financing to drive the company’s continuing global growth.
Touring Vantage Data Centers VA11
Built in 2019, the 32 megawatt (MW) VA11 site features an on-site substation and dual-feed power, as furnished by Dominion Energy. Numerous green amenities are located throughout the campus, including solar- and wind-powered lighting and EV charging stations.
Conner tells me, "We've got different building form factors: we do 32 MW, two stories; 48 MW, three stories; 64 MW, four stories. That's kind of our standard build."
Conner and Freeman emphasize that a standardized building design across the company's fleet of global locations makes for optimal customer comfort and consistency.
"VA13 and 14 are right there: they all have the same basic design," explains Conner.
He continues, "The five to make four [redundant power], the cooling loop designs, stuff like that, we standardize on one design - and it's called 'one design.' It's not necessarily the same facade or the architecture of the building, but the guts of the building are the same."
"So as we go from place to place, our customers know what they're getting - whether they're here in Virginia or in Arizona or if they're somewhere in Germany or France, it's the same design."
Electrical and Battery Backup Systems
During the tour of VA11, Conner and Freeman discuss the data center's redundant electrical lineups, emphasizing block redundancy and serving adjacent halls to minimize impact during power outages. "We try to make our lineups distributed enough to make it such that if we have a major impact, it's really only going to take down a very small portion of the building," says Conner.
In terms of battery backup, Conner notes, "In this building, we're using lead acid. We've transitioned all of our other buildings to lithium-ion. And, we just got our first approval to use nickel zinc batteries."
The pre-integrated battery and UPS systems reduce errors and speed up the installation process.
"It solves two problems for us," adds Conner. "One, everything's pre integrated, so it's already been tested; it's been burned in at the factory. Two, by doing that and wheeling these things in on skids, we eliminate a lot of the errors that would happen with overhead wiring because all the contact points are already pre-labeled and ready to go. So it really helps with speed, and it's the same with everything: chillers, air handlers, electrical footprints."
The tour takes us along VA11's spacious hallways, and the discussion moves to customer-controlled security measures within data centers.
"When there's a single tenant to a building, we'll work with them on security," says Freeman. "The customer here has control over security. In certain areas, they have control of all spaces, and I have to be put on their list; or any other person that's going to get in those spaces has to be put on their list. I think every data center works through that, especially with the large customers."
Freeman adds, "Because our tenants are so large these days, the days of when I first started in this business, doing like five tours a day, doesn't happen that much anymore. I do one tour maybe a quarter."
Campus Connectivity Design
In discussing the campus connectivity plan, Conner remarks upon a perceived shift away from traditional Meet Me Room (MMR) setups.
"If you're focusing on the hyperscalers, a lot of data centers in Northern Virginia and pretty much around the country have really gone away from that massive MMR that we used to see with cross connects. If you'll notice in here, there are no cross connects. We basically just have large chunks of cable running up into some halls, upstairs or downstairs or wherever they're running. Our MMRs are really serving as kind of a junction point for carriers to run through."
Conner describes how Vantage Data Centers likes to create campus networks with easy connectivity between buildings and paths.
"Whenever I design a campus, I make sure that we have a couple of things. One, I ring the entire campus - I've got to have the ability to bring the zero points of entry into every building from every path. Then I create a lot of inner building connectivity, because a lot of these workloads will have a central network hub, and we peel off of that hub into other buildings."
"My conduit structure has to be such that if I'm in Building One or Building Three, I can easily get to Building Two, which we're in right now, through the conduit structure using those big cables. So I don't have to run all the way through the outside plant and snake around - I've got a lot of direct paths to Point A and Point B, etc."
AI Workloads and Cooling Infrastructure
The tour prompts discussion of the challenges of increasing rack densities for AI workloads, leading to a look at the use of customized canopies to control ambient temperatures around rooftop cooling units (more on that shortly).
Discussion first centers on the building's cooling footprint and the advantages of air-cooled data centers, which Conner and Freeman say currently make up the bulk of the Vantage Data Centers fleet.
"When you look in these galleries, everything's traditional air handlers, with air cooled chillers on the roof," says Conner, who expressed a preference for air handlers over fan walls, for purposes of better air speed and velocity.
"In order to get speed, you have to have a little bit of approach. With air handlers we've got about 3 ft. of starting block to get the velocity getting out to the room. Fan walls are starting from a dead stop; you lose that starting block for air speed."
The VA11 data center's cooling infrastructure provides for N+2 redundancy. Conner acknowledges the limitations of air cooling, hinting at the company's future considerations for liquid cooling to meet increasing rack densities. He says:
"Advantage is N+2 on the mechanical side; N+1 on electrical. We have dual pipes coming into everything that we've got fed into the mechanical side in each one of our galleries. The headers for the cross are also plumbed for water so we can take liquid out to the floor."
"We're thinking ahead because we know that liquid cooling is going to be upon us at some point. What we do with the dynamics of the building may change based on that particular known fact. But we wanted to make sure that part is at least taken care of, so we don't have to go back and tap the pipes. All of our headers are plumbed for liquid to the rack."
Conner concluded, "When I first started here six years ago, traditionally average rack density was 8 kW. Now we're starting to see rack densities for just AI workloads nearing 50 kW, and that's customer-independent. We're still able to do that in air, but we're starting to reach the outer limits of air."
Conner notes that after VA11, the company transitioned from raised floor to slab construction designs for purposes of cost and efficiency.
"This was our last raised floor building in all the Vantage fleet. We do nothing but slab now, all of our construction is done on slab. What's the advantage of slab? Well, it's cheaper. This is weighted for basically 2000 pounds per rack footprint. As the racks get heavier and denser, raised floors don't scale over time."
"Some people would say the downside of slab is you move from cold aisle containment to hot aisle containment. Now everybody's gotten pretty comfortable with hot aisle, but your purists will be like, Oh, I don't have as much control over my air, because I don't have those perforated tiles anymore."
Busy Bees
Inside a portion of a data hall being prepared for new infrastructure, we step past large reels of cabling. "Our customers are obviously bringing in a lot of fiber," says Freeman. "They're busy little bees right now."
Walking along the rows of translucently blocked out cages, conversation touches on the benefits of decentralized communications. "We're co-locating our PDUs, tying all that together," says Conner. "Then, each one of our spaces has its own BMS head-end, and we've got redundant server setups for each data hall."
We step inside VA11's massive freight elevator, rated to hold 10,000 pounds.
"The good thing about these freight elevators is they're no longer quadrant based. We used to get older freight elevators, where you could only have racks in certain areas," explains Conner.
"This one can take 10,000 pounds regardless of where it is and who's on it. All of our passenger elevators are rated for freight as well. So if this one's down, you can take the elevator in the front lobby and roll up one or two racks in that as well. It's a lesson learned."
Rooftop Chillers and Canopies
We step out onto the data center's roof, where the air cooled chillers use economizers and also rely on Mother Nature for cooling, in the words of Conner. "The beautiful thing about air cooled chillers is they love the winter," he says.
"All these chillers have economizers built in. When we're 72 degrees or less, the compressors turn off. We're going to live off the goodness of Mother Nature for the rest of the winter, so we won't have to turn our compressors back on until spring or summer." Conner continues, "What you're seeing up here is basically 16 MW loops. This building is 32 MW."
On the roof, Conner and Freeman explain how the innovative use of clearly visible canopy structures, first erected over the rooftop chillers in the company's data centers in the U.S. southwest, effectively lowers ambient temperatures and reduces energy consumption. These prototype canopies' frames are made of wood, but Conner says the company will soon be adopting a steel structure design to improve efficiency across all Vantage facilities.
"That will differentiate our PUE's," he says. "In the desert, we were struggling with air temperatures on the roof. The ambient air temperature was around 135 (F). These chillers turn off at 132. Their controls stopped working. So we introduced the idea of these canopies, which basically acts as hot outside containment for chillers."
"In the summer, we'll roll the canopies out and then contain the air, and it drops the ambient temperature in the spaces where they're sucking in air by almost 20 degrees. That does two things for you. One, the chillers don't turn off - that's pretty important. Two, it dramatically decreases the amount of energy that the compressors need to work with, in order to get the water down to the set points."
"This was something that we actually picked up working with a couple of manufacturers who do a lot of work in Saudi Arabia. They kind of gave us the idea; they didn't give us the design, but they gave us the idea. This structure we'll ultimately use is made out of steel. We're going to be doing the same steel design in all of our facilities moving forward, because it really does help with PUE. This is easy to do. Anybody can do it."
Stacked Generators and HVO
We walk back downstairs and out to the generator yard, where the generators are stacked one on top of another. Conner explains, "Each 2 MW lineup gets one generator, plus one extra. So in a 32 MW block with this building, we have eleven on each side, or 22." He continues:
"This building was a little tight, so we had to stack generators and we did a slightly different thing with our transport. The transformers for these are all pad-mounted in the middle of the space. It makes it harder to work the interactions between the generator and the transformers, just because they're separate from one another."
"But since we changed to this 'one design' approach, we put the transformers up, which allows direct access to the transformer that's governing or working directly with that generator. It also cuts down on a lot of cable conveyance; so it's more efficient, more effective, cheaper, etc."
Conner continues, "All the generators are 2.75 MW diesel generators, all with their own individual tanks. In our 'one design,' there's no central storage unless it's absolutely necessary, because of the permits, environmental, all the things that go with large in-ground storage tanks. Plus, you have to service them occasionally, take them out of the ground, put in new tanks, the whole nine yards. Here, the tanks are contained in the units." I'm told that this design feature is standard across the entire Vantage Data Centers fleet.
Knowing that the use of biodiesel fuel made of hydrotreated vegetable oil (HVO) has come online at Vantage Data Centers facilities in Santa Clara and Cardiff, Wales, I ask Conner and Freeman whether they think VA11 will see any use of HVO.
"HVO is a bit problematic for two reasons," says Conner. "One, it's not very plentiful; and two, there are no SLAs on refill on HVO. Now, the beauty of HVO and diesel is they blend well together; so if push comes to shove and you're in trouble, you can throw diesel into the engine and it's not going to hurt it if you've got an HVO diesel rated generator."
Freeman adds, "I think we've just got to get more people that are into actually producing that fuel. Availability, that's the problem. We're also looking at natural gas if it's available. We're looking at converting diesels to natural gas because the emissions profiles are much better."
HVO and DUB1
Fast forward to this April, when Vantage Data Centers announced its entrance into the Irish market with the development of a multi-phase data center campus dubbed DUB1. The company will invest more than €1 billion over multiple phases to support construction and delivery of the campus in one of the largest data center markets in Europe.
The first two phases consist of 52 MW of IT capacity, with the first phase expected to be operational in late 2024. Upon completion, DUB1 will mark Vantage’s 14th EMEA campus in a growing regional portfolio that spans seven countries.
The company’s flagship Ireland campus will be located approximately nine miles (15 km) from the Dublin City Center in Profile Park, Grange Castle, an area known for its data centers. Sited on 22 acres (nine hectares), the 405,000 SF (38,000 sq. m.) campus will consist of one 32 MW facility and one 20 MW facility, and has available land and power to add a third facility in the future.
The highly efficient campus is being built in alignment with Vantage’s sustainable blueprint to deliver an annualized PUE of 1.2 using virtually no water for cooling.
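For readers unfamiliar with the metric, a minimal sketch of what a 1.2 PUE target implies, assuming the standard definition (total facility energy divided by IT energy); the wattage figures below are hypothetical illustrations, not published DUB1 numbers:

```python
# A minimal PUE sketch using the standard definition. The kW figures are
# hypothetical illustrations, not published DUB1 operating data.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility draw divided by IT draw."""
    return total_facility_kw / it_load_kw

# At an annualized PUE of 1.2, a fully built 52 MW IT load would imply
# roughly 62.4 MW of total facility draw (10.4 MW of overhead):
print(pue(62_400, 52_000))  # 1.2
```

A PUE of 1.2 means only about 20% of total energy goes to cooling, power conversion, and other overhead, which is what "highly efficient" denotes here.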
Notably given the points made immediately above, Vantage said the DUB1 campus will include an on-site 100 MVA multi-fuel generation plant capable of running a combination of fuels, primarily HVO renewable fuel and gas fed by Gas Networks Ireland.
Given the temporary power constraints in Dublin, the on-site generation plant will support current capacity needs by alleviating pressure on energy demand from the grid while achieving optimal efficiency and power output.
The generation plant is also capable of funneling power back to the grid, further supporting power availability in the Dublin area.
In addition, Vantage said it plans to deploy HVO in place of conventional diesel fuel throughout its fleet of back-up generators, and is working to obtain corporate power purchase agreements (CPPAs) for green energy, such as biomethane from local providers. Currently, the company said it is leveraging HVO for 99% of its fuel requirements during the construction phase.
Jinél Fourie, director of public policy, EMEA at Vantage Data Centers, said: “Vantage is committed to environmental responsibility and is pleased that our sustainability goals, including reducing emissions, achieving net zero carbon emissions by 2030 and maximizing energy efficiency, align closely with those of the Irish government and regulatory bodies as we continue growing Ireland’s position as a leader in the digital age for cloud computing."
"As environmental technology continues to advance, including the inaugural use of a multi-fuel generation plant in Dublin, we look forward to continuing our local partnerships to explore additional solutions to enhance the local community.”
David Howson, president, EMEA at Vantage Data Centers, added, “Throughout this development, there will be a significant positive economic impact to the community as we employ more than 1,100 individuals during peak construction and create approximately 165 jobs to operate the campus.”
“The South Dublin Chamber warmly welcomes the confidence shown in our area through the €1 billion investment by Vantage Data Centers,” concluded Peter Byrne, CEO, South Dublin Chamber. “Vantage Data Centers will not only be contributing to local employment and taxation but will be ensuring the safety of our data and future-proofing business for years to come with this major investment in technology.”