Hammerspace Raises the Bar for AI and HPC Data Center Infrastructure

Backed by $100 million in fresh capital, Hammerspace is advancing a global data platform built for AI-scale performance—delivering Tier 0 shared storage, a unified namespace, and multi-cloud agility.
May 21, 2025
6 min read

In today’s landscape of multi-billion-dollar bets on AI-ready data centers, it’s easy for a $100 million raise to be overlooked. But the latest investment round in Hammerspace deserves closer attention. The company, which positions itself as a key enabler for AI and high-performance computing (HPC) workloads, has closed a $100 million Series B funding round backed by investors with a track record of backing transformative technology companies such as NVIDIA, Meta, and SpaceX.

Rather than building another storage platform, Hammerspace is tackling a more complex problem: how to move and manage unstructured data efficiently across the fragmented infrastructure powering today’s AI workloads.

A Global Data Fabric for AI and HPC: Decoupling Data from Infrastructure

At its core, Hammerspace delivers a data orchestration platform that creates a global data environment—providing unified, real-time access to files and objects across hybrid, multi-cloud, and edge environments.

With the massive investment pouring into GPU infrastructure, the real bottleneck increasingly lies in the data layer. AI applications are rarely confined to a single data source or location. Whether training a large language model (LLM) or analyzing real-time edge streams, performance depends on the ability to move data to where the compute lives—and do so with low latency and high throughput. That’s the problem Hammerspace is built to solve.

What sets Hammerspace apart is its ability to decouple data from the underlying storage infrastructure. Users and applications can access data instantly—whether it’s stored on-premises, in the public cloud, or at the edge—without needing to manually migrate or reconfigure systems.

This architectural shift is particularly important in AI and HPC environments, where compute clusters may be distributed across regions and providers. By abstracting the location of data and automating its movement based on application needs, Hammerspace ensures data is always in the right place, at the right time.

Key Technology Differentiators

Here’s a look at the core capabilities Hammerspace is bringing to bear on the AI infrastructure challenge:

  • Global Data Environment: A unified namespace spanning multiple protocols (NFS, SMB, S3), enabling seamless access to distributed unstructured data.
  • Tier 0 Shared Storage: A performance-first storage layer using GPU-attached NVMe, delivering shared, ultra-low-latency data access for AI workloads.
  • Data Orchestration Engine: Automates real-time data movement, placement, and access to align with application performance and policy requirements.
  • Data-in-Place Assimilation: No rehydration or migration required; existing data becomes immediately usable, regardless of location.
  • AI-Optimized Performance: Designed to reduce latency and maximize GPU utilization, addressing one of the most pressing challenges in today’s AI workflows.
  • Multi-Cloud and Edge Readiness: Operates seamlessly across AWS, Azure, Google Cloud, and edge environments—critical for distributed AI infrastructure.
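The first two bullets above can be sketched with a toy model. This is purely illustrative and bears no resemblance to Hammerspace's actual implementation (which operates at the filesystem metadata layer, not as an in-memory map): the key idea is that clients address data by a logical path, the namespace resolves that path to whichever backend currently holds the bytes, and the orchestrator can relocate data without the client-visible path ever changing.

```python
# Toy sketch of a location-transparent global namespace.
# All class and backend names here are hypothetical, for illustration only.

class GlobalNamespace:
    """Maps logical paths to a current backend (on-prem NFS, cloud object
    store, edge cache, ...) so clients never hard-code a data location."""

    def __init__(self):
        self._placement = {}   # logical path -> backend id
        self._backends = {}    # backend id -> {logical path: bytes}

    def add_backend(self, backend_id):
        self._backends[backend_id] = {}

    def put(self, path, data, backend_id):
        self._backends[backend_id][path] = data
        self._placement[path] = backend_id

    def read(self, path):
        # Clients use the logical path; the namespace resolves the location.
        backend = self._placement[path]
        return self._backends[backend][path]

    def migrate(self, path, dst_backend):
        # The orchestrator moves the data; the client-visible path is unchanged.
        src = self._placement[path]
        self._backends[dst_backend][path] = self._backends[src].pop(path)
        self._placement[path] = dst_backend


ns = GlobalNamespace()
ns.add_backend("on-prem-nfs")
ns.add_backend("cloud-s3")
ns.put("/datasets/train.bin", b"training data", "on-prem-nfs")

before = ns.read("/datasets/train.bin")        # served from on-prem storage
ns.migrate("/datasets/train.bin", "cloud-s3")  # data moved near cloud GPUs
after = ns.read("/datasets/train.bin")         # same path, new location
assert before == after
```

The point of the sketch is the decoupling the article describes: applications see one stable namespace, while placement decisions happen underneath it.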

Feeding the GPUs, Not Just Stacking Them

As Hammerspace puts it: “AI infrastructure isn’t just about stacking more GPUs. It’s about feeding them efficiently.” That statement captures the company’s role in a rapidly evolving AI landscape. Even as enterprises invest heavily in AI compute, the full performance potential often remains untapped due to data movement and access bottlenecks.

By turning data into a globally available, policy-driven service, Hammerspace removes a major barrier to performance at scale. And as the arms race for AI capacity continues, solutions like this will be essential to getting value from the massive capital being deployed.

David Flynn, Hammerspace Founder and CEO, summed up the product’s value proposition this way:

AI isn’t waiting. The race isn’t just about raw throughput—it’s about how fast you can deploy, move data and put your infrastructure to work. Every delay is unrealized potential and wasted investment. We built Hammerspace to eliminate friction, compress time-to-results and significantly increase GPU utilization. That’s how our customers win.

Hammerspace isn’t alone in trying to unlock the performance potential of distributed data for AI and HPC environments. The category of data orchestration and global namespace solutions has grown significantly in recent years, with vendors like VDURA, Alluxio, Panzura, and DataCore all offering platforms designed to virtualize and streamline access to unstructured data across hybrid infrastructure.

These platforms share overlapping capabilities—particularly around unifying file and object storage and improving data mobility—but Hammerspace is betting that a laser focus on AI-centric workloads will set it apart. At the heart of its value proposition is the platform’s ability to operate as a Tier 0 data layer directly optimized for GPU environments, integrating seamlessly with existing high-performance infrastructure to reduce bottlenecks and accelerate time-to-insight.

Built for the AI Factory Floor

While solutions like Alluxio excel in optimizing data locality for AI training pipelines, and VDURA continues to serve the needs of traditional HPC environments, Hammerspace is targeting a new architectural sweet spot. In what it describes as the AI factory model—where multiple pipelines, clouds, and edge environments interact in real time—the company positions its platform as the connective tissue that keeps GPU-centric workloads fed with the right data, in the right place, at the right time.

Hammerspace believes its combination of metadata-driven orchestration, real-time data assimilation, and cloud-native architecture makes it uniquely capable of adapting to this emerging operational reality. The company’s approach enables users to treat data as a dynamic resource—automated, policy-driven, and infrastructure-agnostic—a sharp contrast to traditional systems that often require manual movement, synchronization, or refactoring of datasets to support AI workflows.
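In spirit, a "policy-driven" approach to data placement might look something like the fragment below. The syntax is entirely hypothetical (it is not Hammerspace's actual objective language); it simply illustrates declaring intent, such as keeping hot training data on the fastest tier near the GPUs, rather than scripting manual copies.

```yaml
# Hypothetical data-placement policy, for illustration only.
policies:
  - name: feed-training-cluster
    match: /datasets/llm-train/**
    objective: place-on-tier0          # e.g. local NVMe in the GPU servers
    near: gpu-cluster-us-east
  - name: age-out-cold-data
    match: /datasets/**
    when: last_access > 30d
    objective: move-to-object-store    # e.g. an S3-compatible cold tier
```

Under a declarative model like this, the orchestration engine, not the operator, is responsible for continuously reconciling where data lives against where it is needed.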

As AI factories become more distributed and more dependent on latency-sensitive, high-throughput pipelines, Hammerspace is making a strong case that intelligent data orchestration at Tier 0 will be as critical to performance as the GPUs themselves. According to Flynn:

We orchestrate data to the GPU faster regardless of where it is physically stored. We instantly assimilate data from third-party storage so it's ready to process faster. We deploy and scale easily and quickly so our customers can achieve their business outcomes faster. We are the only AI data platform that enables Tier 0, providing the absolute fastest performance for AI workloads, bar none. Hammerspace uniquely approaches performance holistically from time to first token through the entire model’s response. It is a game-changer for our customers and the industry.

Strategic Capital and Industry Momentum

The new $100 million Series B investment round was led by Altimeter Capital, a firm known for identifying and backing early inflection points in transformative technology. In explaining the firm’s interest in Hammerspace, Jamin Ball, Partner at Altimeter, emphasized the platform’s alignment with the core infrastructure needs of next-generation AI environments:

Hammerspace understands that AI is only as powerful as the data it can reach. Its architecture removes the bottlenecks that are starving today’s most advanced compute environments.

While Altimeter and other investors bring financial horsepower, Hammerspace is also gaining traction with influential technology players who are deploying its platform in real-world AI solutions. Hitachi Vantara and Supermicro, both prominent names in the AI infrastructure ecosystem, are leveraging Hammerspace software in offerings designed for enterprise and government clients.

And according to the company, its customer roster already includes major organizations such as Meta, the U.S. Department of Defense, and the National Science Foundation—a signal that its technology is gaining serious traction across both commercial and mission-critical environments.

A Data Layer Built for the AI Frontier

As enterprises and hyperscalers ramp up investment in AI infrastructure, it’s becoming increasingly clear that compute alone isn’t enough. The bottleneck has shifted to data—its availability, its mobility, and its orchestration across an increasingly complex web of systems. That’s where Hammerspace sees its opportunity: providing the intelligent, automated data layer that can scale as fast as the AI workloads it serves.

With fresh capital, growing adoption, and a technology stack built natively for the demands of AI and HPC, Hammerspace is positioning itself not just as another storage solution, but as a defining component of the modern AI infrastructure stack.

 

At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT-4.

 

Keep pace with the fast-moving world of data centers and cloud computing by connecting with Data Center Frontier on LinkedIn, following us on X/Twitter and Facebook, as well as on Bluesky, and signing up for our weekly newsletters using the form below.

About the Author

David Chernicoff

David Chernicoff is an experienced technologist and editorial content creator with a talent for seeing the connections between technology and business, getting the most from both, and translating the needs of business to IT and of IT to business.

Matt Vincent

Matt Vincent is Editor in Chief of Data Center Frontier, where he leads editorial strategy and coverage focused on the infrastructure powering cloud computing, artificial intelligence, and the digital economy. A veteran B2B technology journalist with more than two decades of experience, Vincent specializes in the intersection of data centers, power, cooling, and emerging AI-era infrastructure. Since assuming the EIC role in 2023, he has helped guide Data Center Frontier’s coverage of the industry’s transition into the gigawatt-scale AI era, with a focus on hyperscale development, behind-the-meter power strategies, liquid cooling architectures, and the evolving energy demands of high-density compute, while working closely with the Digital Infrastructure Group at Endeavor Business Media to expand the brand’s analytical and multimedia footprint.

Vincent also hosts The Data Center Frontier Show podcast, where he interviews industry leaders across hyperscale, colocation, utilities, and the data center supply chain to examine the technologies and business models reshaping digital infrastructure. He has served as Head of Content for the Data Center Frontier Trends Summit since its inception.

Before becoming Editor in Chief, he served in multiple senior editorial roles across Endeavor Business Media’s digital infrastructure portfolio, with coverage spanning data centers and hyperscale infrastructure, structured cabling and networking, telecom and datacom, IP physical security, and wireless and Pro AV markets. He began his career in 2005 within PennWell’s Advanced Technology Division and later held senior editorial positions supporting brands such as Cabling Installation & Maintenance, Lightwave Online, Broadband Technology Report, and Smart Buildings Technology.
Vincent is a frequent moderator, interviewer, and keynote speaker at industry events including the HPC Forum, where he delivers forward-looking analysis on how AI and high-performance computing are reshaping digital infrastructure. He graduated with honors from Indiana University Bloomington with a B.A. in English Literature and Creative Writing and lives in southern New Hampshire with his family, remaining an active musician in his spare time.

You can connect with Matt via LinkedIn or email.

DCF Staff

Data Center Frontier charts the future of data centers and cloud computing. We write about what’s next for the Internet, and the innovations that will take us there.