Inside the Flexential-CoreWeave Alliance: Scaling AI Infrastructure with High-Density Data Centers

May 23, 2025
CoreWeave's expanded partnership with Flexential marks a milestone in the evolution of AI-centric cloud infrastructure, reflecting the growing demand for GPU-optimized data center solutions tailored to AI workloads.

CoreWeave's recent expansion of its partnership with Flexential marks a significant milestone in the evolution of its AI-centric cloud infrastructure. The company's continued investment in specialized AI infrastructure underscores the growing demand for high-density, GPU-optimized data center solutions tailored for AI workloads.

A Growing Collaboration

CoreWeave’s continued growth was most recently demonstrated at the end of April 2025, when the company announced it would expand its data center footprint with a 13 MW deployment in a Flexential-owned and operated data center in Plano, TX.

This wasn’t CoreWeave’s first use of Flexential colocation: in late 2023 the company expanded its AI cloud footprint into Flexential data centers in Hillsboro, Oregon, and Douglasville, Georgia, committing to 9 MW of power at each facility.

Commenting on the Plano, TX deployment, Patrick Doherty, Chief Revenue Officer at Flexential, said:

This large-scale deployment will allow us to deliver high-performance infrastructure through the FlexAnywhere Platform, on an urgent timeline. 13 MW of contiguous capacity provides CoreWeave's customers with a reliable platform to scale their AI initiatives and powers the next generation of data-driven innovation across industries.

What CoreWeave Brings to the Table

CoreWeave has rapidly evolved into one of the most strategically important infrastructure players in the AI era, establishing itself as a leader in delivering GPU-accelerated compute at hyperscale. Originally founded in 2017 as a crypto-mining operation, the company pivoted early—well ahead of the market—to capitalize on the broader potential of GPU hardware in enterprise-scale AI and machine learning applications. That foresight is now paying dividends at scale.

At the heart of CoreWeave’s offering is a purpose-built cloud optimized from the ground up for compute-intensive AI workloads. Its infrastructure spans some of the largest and most sophisticated GPU deployments in the world, including megaclusters comprising more than 100,000 Nvidia GPUs. The fleet includes Nvidia H100 GPUs and the latest GB200 NVL72 rack-scale systems—critical for enabling the large-scale model training and inference tasks that underpin generative AI, large language models (LLMs), and high-performance AI research.

Critically, CoreWeave’s architecture isn’t just about raw GPU volume. The company has engineered for performance at the system level, leveraging Nvidia’s Quantum-2 InfiniBand fabric to tightly couple its GPUs across nodes and regions. This architecture delivers ultra-low-latency, high-bandwidth interconnects essential for parallelized model training—an increasingly decisive factor in enterprise AI performance and developer experience.
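To make the interconnect's role concrete, here is a minimal, generic sketch of multi-node data-parallel training in PyTorch using the NCCL backend, which rides an InfiniBand/RDMA fabric when one is available. This is an illustration under assumed conditions (a toy model, a torchrun-style launcher, hypothetical hyperparameters), not a description of CoreWeave's actual software stack, but it shows where the gradient traffic that such fabrics accelerate actually occurs.

```python
# Minimal sketch of multi-node data-parallel training over an RDMA-capable
# fabric such as InfiniBand. Generic PyTorch/NCCL illustration only; the
# model, sizes, and hyperparameters are hypothetical and do not describe
# CoreWeave's production stack.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun (or an equivalent launcher) sets RANK, LOCAL_RANK, WORLD_SIZE.
    dist.init_process_group(backend="nccl")  # NCCL uses InfiniBand/RDMA when available
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 4096, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()   # dummy objective
        optimizer.zero_grad()
        loss.backward()                 # gradient all-reduce crosses the fabric here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

In a run like this, every backward pass triggers a collective operation whose speed is bounded by the fabric connecting the GPUs, which is why low-latency, high-bandwidth interconnects figure so prominently in AI-native data center design.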

CoreWeave is also pushing the frontier on thermal and electrical efficiency. The company has committed to wide-scale deployment of liquid cooling systems to better manage power density and reduce environmental impact—key for both sustainability and maximizing performance envelope. Its infrastructure strategy signals a shift away from retrofitted hyperscale environments toward AI-native data center design principles, where density, bandwidth, and thermal dynamics drive layout decisions as much as square footage.

What differentiates CoreWeave further is its hybrid approach to infrastructure growth: combining ground-up data center builds (like its Kenilworth, NJ campus), renewable-powered regional hubs (like Volo, IL), and strategic colocation partnerships (notably with Flexential) to enable low-latency access and rapid scaling across North America and Europe. This diversified model allows CoreWeave to deploy capacity where it’s needed—fast—and reflects a new blueprint for AI cloud providers balancing control with speed-to-market.

The company's repositioning also underscores a broader transformation sweeping cloud infrastructure: the rise of application-specific clouds, optimized not for general-purpose compute, but for the demanding performance profile of next-generation AI development. In this context, CoreWeave isn’t simply growing—it’s helping to define the category.

CoreWeave's Data Center Footprint and Other Strategic Partners

CoreWeave operates over 33 data centers across North America and Europe, strategically located to provide low-latency access and meet regional demand, and the company continues to invest in that infrastructure. Current key deployment locations include:

United States

  • Douglasville, Georgia: A high-density colocation facility operated in collaboration with Flexential, supporting large-scale AI workloads.
  • Kenilworth, New Jersey: A 280,000-square-foot data center developed with a $1.2 billion investment, enhancing CoreWeave's presence in the Northeast.
  • Volo, Illinois: A facility powered by Bloom Energy's solid oxide fuel cells, emphasizing sustainable energy solutions for AI infrastructure.

Europe

  • United Kingdom: Two operational data centers in Crawley and London Docklands, hosting NVIDIA H200 GPUs and serving as CoreWeave's European headquarters.
  • Norway: Development of a large-scale NVIDIA AI deployment at the N01 Datacenter Campus in Vennesla, in partnership with Bulk Infrastructure.
  • Sweden and Spain: Planned investments totaling $2.2 billion to establish new data centers, expanding CoreWeave's European footprint.

Beyond its collaboration with Flexential to support rapid infrastructure expansion, CoreWeave also maintains significant strategic partnerships and investments, including:

  • NVIDIA: As a major investor and technology partner, NVIDIA provides CoreWeave with early access to cutting-edge GPU technologies.
  • OpenAI: A five-year, $11.2 billion agreement positions CoreWeave as a key infrastructure provider for OpenAI's AI workloads.
  • Weights & Biases: The $1.7 billion acquisition enhances CoreWeave's capabilities in AI model development and monitoring

Flexential’s High-Density Edge: Building a Platform for AI-Ready Infrastructure

Flexential’s evolution into a next-generation infrastructure provider is rooted in its 2017 formation through the merger of Peak 10 and ViaWest—two regional powerhouses that combined to form a national footprint. Today, that legacy has matured into a strategically distributed platform of more than 40 data centers across 18 U.S. markets, encompassing over 3 million square feet of infrastructure capacity. But what truly defines Flexential’s position in the AI era is how it has engineered this footprint for scale, density, and hybrid agility.

At the core of its service delivery is the FlexAnywhere platform, an integrated suite that combines colocation, cloud, connectivity, data protection, and professional services. This platform is designed to meet the growing demand for hybrid IT architectures—environments that need to scale fast, span multiple geographies, and support both legacy enterprise applications and cutting-edge AI workloads.

Flexential’s high-density colocation capabilities set it apart in a market rapidly redefined by compute-intensive applications. The company supports power densities exceeding 80 kW per cabinet, making its facilities ideal for AI training, large-scale inference, and other forms of high-performance computing (HPC). To support these workloads, Flexential is investing in advanced cooling solutions, including direct liquid cooling, to ensure thermal stability and energy efficiency even at the rack level.
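For a rough sense of what such densities mean at deal scale, the back-of-envelope sketch below combines Flexential's stated 80 kW per cabinet ceiling with the 13 MW Plano allocation discussed earlier. The PUE figure and the treatment of 13 MW as total facility power are illustrative assumptions; neither company has published those operating details.

```python
# Back-of-envelope sketch: how many high-density cabinets a 13 MW allocation
# could support at roughly 80 kW per cabinet. The PUE value is a hypothetical
# placeholder; real figures for the Plano facility have not been published.
TOTAL_POWER_KW = 13_000   # 13 MW Plano allocation, treated here as total facility power
PER_CABINET_KW = 80       # Flexential's stated high-density ceiling
ASSUMED_PUE = 1.3         # illustrative facility-overhead factor

it_load_kw = TOTAL_POWER_KW / ASSUMED_PUE
cabinets = int(it_load_kw // PER_CABINET_KW)

print(f"Usable IT load: ~{it_load_kw:,.0f} kW")              # ~10,000 kW
print(f"Cabinets at {PER_CABINET_KW} kW each: ~{cabinets}")  # ~125 cabinets
```

Even under these rough assumptions, the exercise shows why contiguous multi-megawatt blocks matter: a single AI tenant can absorb on the order of a hundred-plus maximally dense cabinets in one commitment.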

In a market where location, latency, and workload alignment are paramount, Flexential’s national scale and density-ready design philosophy position it as a key enabler for enterprises building and scaling AI infrastructure. Its regional strategy provides proximity to end users and cloud on-ramps, while its focus on power availability and thermal management addresses two of the most critical infrastructure constraints facing AI deployments today.

Key Flexential data center locations include:

  • Atlanta, GA: Multiple facilities, including the Douglasville campuses, offering significant capacity for enterprise deployments.
  • Dallas, TX: Data centers in Plano and Richardson, catering to the growing tech industry in the region.
  • Denver, CO: Facilities in Aurora, Centennial, and Englewood, supporting businesses in the central U.S.
  • Portland, OR: A significant presence in Hillsboro, providing access to subsea cables and serving as a gateway to the Asia-Pacific region.
  • Charlotte, NC: Two data centers supporting the southeastern US market.

Each data center is designed with high-density power capabilities, innovative cooling solutions, and versatile connectivity options to support a wide range of workloads, including AI and machine learning applications.

Flexential has also made a commitment to sustainable operations, employing energy-efficient designs and practices across its facilities. The company adheres to strict compliance standards, holding certifications such as HITRUST, ISO 27001, and PCI DSS to ensure secure and compliant environments for its clients' critical workloads.

Meanwhile, Flexential's commitment to sustainability, combined with its focus on high-performance computing, makes it stand out to businesses looking to adopt the colocation model while supporting the latest AI and HPC options.

C-Suite Momentum Meets Infrastructure Reality: Insights from Flexential’s 2025 State of AI Infrastructure Report

As the AI infrastructure arms race intensifies, Flexential’s newly released 2025 State of AI Infrastructure Report provides a revealing snapshot of the accelerating pressure facing enterprise IT leaders. Based on responses from over 350 IT decision-makers—including 100 from organizations exceeding $2 billion in annual revenue—the survey shows a clear power shift: AI strategy is now being driven directly from the top.

According to the report, 81% of respondents say AI initiatives are now led by the C-suite, a steep rise from just 53% a year ago. And the executive mandate is aggressive—51% expect a return on AI investment within the next year, while 21% report they’re already seeing financial benefits. But this top-down momentum is creating ripple effects throughout the infrastructure stack.

The Execution Gap Widens

Despite growing confidence—71% of respondents are now “extremely confident” in executing their AI roadmaps, up from 53% last year—many organizations are struggling under the weight of implementation. Nearly one-third (29%) of IT leaders report feeling overwhelmed by AI infrastructure demands, more than double the rate seen in 2024. This rising anxiety reflects complex realities: integrating AI with legacy systems, managing security risks, scaling across departments, and battling a growing shortage of specialized talent.

Chris Downie, CEO of Flexential, frames the situation bluntly:

The Flexential State of AI Infrastructure Report shows that AI has moved well beyond the experimental stage and is now a cornerstone of business operations. Forward-thinking organizations recognize its role in achieving a competitive advantage, yet there's still a gap between AI ambition and infrastructure readiness.

Long-Term Planning Gains Urgency

In response to these challenges, enterprise IT leaders are beginning to look further ahead. 62% are planning their IT and data center needs one to three years in advance, while 17% are looking out three to five years—a marked shift toward proactive infrastructure strategy. Still, 44% cite infrastructure limitations as the number one barrier to scaling AI, suggesting that despite the forward planning, many are still playing catch-up.

Downie adds:

The pressure on executive leadership to deliver tangible returns on AI investments has never been greater. With the C-suite now directly steering AI strategy, the mandate is clear: convert innovation into impact—fast.

Critical Challenges: Skills, Performance, and Security

The Flexential report outlines a constellation of growing pain points:

  • Skills shortages continue to deepen: 61% report gaps in managing specialized computing infrastructure, up from 53% last year, and 53% are struggling to fill data science and data engineering roles, up from 39%.
  • Performance degradation is becoming more pronounced: Bandwidth constraints now affect 59% of respondents (up from 43%), while latency issues impact 53%, a jump from 32%.
  • Cybersecurity threats are expanding in tandem with AI adoption: 55% say their exposure to cyber threats has increased—a sharp rise from 39% in 2024—as more sensitive data moves into AI systems.
  • Sustainability concerns remain a priority: 79% feel increased pressure to improve sustainability, and 27% are willing to pay 20% or more for renewable energy, reflecting intensifying regulatory and reputational demands.

AI Is Now a Market Imperative

Only 5% of organizations now describe their AI efforts as “nascent,” compared to 10% last year. For those falling behind, the consequences are real: 28% say they risk losing market share, while 26% anticipate longer product development cycles if AI goals are not met.

With AI officially embedded into core business operations, the infrastructure foundation supporting it has become a strategic battleground. As the report illustrates, competitive advantage now depends as much on power and bandwidth as it does on algorithms and models.

Strategic Convergence: What Flexential and CoreWeave Reveal About the AI Data Center Playbook

Hearkening back to where we began, the Flexential-CoreWeave alignment represents more than a simple provider-client relationship. It reads like a case study in how next-generation AI workloads are reshaping the operational and strategic frameworks of the data center industry.

At its core, this partnership is a convergence of necessity and capability. Flexential brings the high-density colocation muscle—purpose-built facilities, robust connectivity, and power delivery systems that can support north of 80 kW per cabinet. CoreWeave brings the demand engine—sustained growth driven by hyperscale AI deployments, strategic software and hardware alignment with Nvidia, and one of the most ambitious rollout plans in AI infrastructure history.

This collaboration speaks to a fundamental shift in how infrastructure is being evaluated and consumed. The days when capacity planning could afford to be conservative are over. Today, AI infrastructure customers like CoreWeave are driving demand for instant scalability, contiguous multi-megawatt footprints, and facilities that are not just GPU-ready, but GPU-native.

From an industry lens, Flexential’s ability to rapidly deliver 13 MW of contiguous high-density space in Plano underscores a rising premium on speed-to-power. In an era where securing GPUs is only half the battle, the ability to deploy them into thermally and electrically optimized environments—on schedule and at scale—becomes the new differentiator. Flexential’s FlexAnywhere platform is positioning itself as a connective tissue for hyperscale-style agility across traditionally enterprise-class colocation.

Meanwhile, CoreWeave’s aggressive expansion strategy is rapidly rewriting what a “cloud provider” looks like in the age of generative AI. Their infrastructure approach—favoring purpose-built facilities, liquid cooling readiness, and InfiniBand-based GPU megaclusters—highlights how traditional cloud architectures are being reengineered for AI-native performance tiers.

AI at Scale: A Blueprint for the Next Wave of Colocation Partnerships

The deal also exemplifies a new hybrid model for AI scale-outs: a blend of owned hyperscale campuses (e.g., Kenilworth and Volo), strategic leases with established providers like Flexential, and long-term power procurement strategies that support sustainability without sacrificing performance. This blended approach allows AI-first providers to rapidly iterate on deployment models while maintaining regional proximity to customers and latency-sensitive use cases.

From a business standpoint, these partnerships also validate the maturation of colocation as a strategic growth lever in the AI arms race. Where once wholesale leasing was considered the apex of cloud scale, CoreWeave’s colocation-centric growth—with over 33 distributed sites—suggests a broader trend toward flexible, modular, and distributed AI capacity planning.

The data center industry is watching the colocation model being rewritten in real time to serve a new generation of performance-driven, AI-first tenants—and the stakes have never been higher. With Nvidia-backed infrastructure demand accelerating, and venture-fueled AI developers seeking instant scale, infrastructure providers who can bridge the performance-density-sustainability triangle will capture the lion’s share of this decade’s digital buildout.

And in this light, the Flexential-CoreWeave model isn’t just succeeding, but is rapidly becoming a blueprint for the industry.

Visit Flexential's interactive data center guide.

 

At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT-4.

 

About the Author

David Chernicoff

David Chernicoff is an experienced technologist and editorial content creator with the ability to see the connections between technology and business while figuring out how to get the most from both and to explain the needs of business to IT and IT to business.

About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.

About the Author

DCF Staff

Data Center Frontier charts the future of data centers and cloud computing. We write about what’s next for the Internet, and the innovations that will take us there.
