As the data center industry continues to chase greater performance for AI and scientific workloads, a new joint report from Hyperion Research and Alice & Bob is urging high-performance computing (HPC) centers to take immediate steps toward integrating early fault-tolerant quantum computing (eFTQC) into their infrastructure.
The report, “Seizing Quantum’s Edge: Why and How HPC Should Prepare for eFTQC,” paints a clear picture: the next five years will demand hybrid HPC-quantum workflows if institutions want to stay at the forefront of computational science.
According to the analysis, up to half of current HPC workloads at U.S. government research labs—Los Alamos National Laboratory, the National Energy Research Scientific Computing Center, and Department of Energy leadership computing facilities among them—could benefit from the speedups and efficiency gains of eFTQC.
“Quantum technologies are a pivotal opportunity for the HPC community, offering the potential to significantly accelerate a wide range of critical science and engineering applications in the near-term,” said Bob Sorensen, Senior VP and Chief Analyst for Quantum Computing at Hyperion Research. “However, these machines won’t be plug-and-play, so HPC centers should begin preparing for integration now, ensuring they can influence system design and gain early operational expertise.”
The HPC Bottleneck: Why Quantum Is Urgent
The report underscores a familiar challenge for the HPC community: classical performance gains have slowed as transistor sizes approach physical limits and energy efficiency becomes increasingly difficult to scale. Meanwhile, the threshold for useful quantum applications is drawing nearer. Advances in qubit stability and error correction, particularly Alice & Bob’s cat qubit technology, have compressed the resource requirements for algorithms like Shor’s by an estimated factor of 1,000.
Within the next five years, the report projects that quantum computers with 100–1,000 logical qubits and logical error rates between 10⁻⁶ and 10⁻¹⁰ will accelerate applications across materials science, quantum chemistry, and fusion energy simulations. For HPC centers, that represents both opportunity and urgency: those who fail to integrate quantum early risk lagging behind both national labs and hyperscale cloud providers.
“Hybrid HPC-quantum workflows will allow users to shift complex subproblems to quantum processors, improving accuracy, time-to-solution, and computational cost,” said Théau Peronnin, CEO of Alice & Bob. “Centers that co-design workflows with vendors, optimize software and hardware, and deploy eFTQC prototypes now will secure a first-mover advantage.”
Implications for Hyperscalers and Cloud Providers
While the report is primarily aimed at HPC centers, its findings carry important implications for hyperscalers and cloud infrastructure providers. Amazon, Microsoft, Google, and other cloud leaders have already invested in quantum R&D, signaling that early hybrid HPC-quantum workloads could be offered as a service, much like GPU-accelerated AI.
For hyperscalers, the ability to integrate quantum into existing HPC or AI data centers could provide a competitive edge for high-value workloads in pharma, energy, and materials science.
Juliette Peyronnet, U.S. General Manager at Alice & Bob and co-author of the report, framed quantum adoption as a continuation of HPC’s historical embrace of disruptive architectures: “From vector processors to GPUs, HPC has always moved quickly to adopt new compute accelerators. Quantum computing is no exception. This is a call to action for centers to begin preparing now, so they are ready to harness the next major HPC accelerator.”
The report provides practical guidance for HPC operators, including building hybrid software stacks, training users, and deploying prototype systems to test quantum workflows alongside CPUs and GPUs. Its recommendations echo the broader industry trend of heterogeneous computing: the convergence of classical HPC, AI accelerators, and quantum processors in a single ecosystem.
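To make the hybrid-workflow idea concrete, below is a minimal, purely illustrative sketch in Python of how such a pipeline might be structured. The function names (build_hamiltonian, solve_subproblem_on_qpu, postprocess) are hypothetical and do not correspond to any vendor's actual API, and the quantum step is stubbed out with a classical solver so the example runs end to end; it is meant only to show the shape of a CPU/GPU-plus-QPU workflow, not a real implementation.

```python
import numpy as np


def build_hamiltonian(n_sites: int) -> np.ndarray:
    """Classical preprocessing (CPU/GPU): assemble a small toy problem matrix."""
    rng = np.random.default_rng(seed=0)
    h = rng.normal(size=(n_sites, n_sites))
    return (h + h.T) / 2  # symmetrize so eigenvalues are real


def solve_subproblem_on_qpu(hamiltonian: np.ndarray) -> float:
    """Placeholder for the quantum-offloaded step.

    In a real deployment this would hand the subproblem to a vendor SDK or
    the center's scheduler targeting an eFTQC backend; here it falls back to
    exact classical diagonalization purely so the sketch executes.
    """
    return float(np.linalg.eigvalsh(hamiltonian)[0])


def postprocess(ground_state_energy: float) -> dict:
    """Classical post-processing: fold the quantum result back into the wider
    simulation, e.g. feeding an energy into a materials-science loop."""
    return {"ground_state_energy": ground_state_energy}


if __name__ == "__main__":
    h = build_hamiltonian(n_sites=8)        # classical stage
    energy = solve_subproblem_on_qpu(h)     # quantum-offloaded stage (stubbed)
    print(postprocess(energy))              # classical stage
```

In practice, the stubbed quantum call would be replaced by a vendor SDK invocation and scheduled by the center's workload manager alongside CPU and GPU jobs, which is exactly the kind of software-stack and prototyping work the report recommends starting now.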
Looking Ahead
The release of this report comes at a moment of intense interest in HPC and AI infrastructure. With demand for generative AI workloads surging and exascale-class supercomputing now mainstream, early adoption of quantum acceleration may determine which institutions and providers retain leadership in the next generation of scientific computing.
For data center operators, the message is clear: now is the time to explore partnerships, experiment with hybrid workflows, and lay the groundwork for quantum-ready infrastructure.
Read the full report here: Seizing Quantum’s Edge: Why and How HPC Should Prepare for eFTQC
DCF’s 2025 Quantum Prediction Comes Into Focus
As noted in Data Center Frontier’s annual 8 Trends That Will Shape the Data Center Industry In 2025 report, the quantum computing event horizon is no longer a distant aspiration—it is actively shaping the data center landscape. Our Trend #8 highlighted that 2025 would be a pivotal year, as quantum computing moves from experimental demonstrations toward practical, hybrid applications in optimization, cryptography, and scientific simulations. That forecast is now manifesting with tangible developments.
“The pieces are falling into place for quantum integration,” we noted earlier this year. “Advances in qubit stability, scalability, and hybrid workflows are paving the way for broader adoption, and by 2025 quantum computing is poised to complement classical systems rather than replace them.” The new Hyperion-Alice & Bob study essentially confirms that projection: HPC centers are being called upon to invest now in hybrid CPU/GPU/quantum workflows, with early fault-tolerant systems expected to accelerate critical science and engineering workloads in the near term.
Our earlier report emphasized that quantum computing’s transformative potential spans industries and workloads—from AI model training and molecular modeling to real-time optimization and cryptography. Cloud-based Quantum-as-a-Service (QaaS) models were already flagged as an emerging way to provide access to costly quantum hardware without requiring full ownership. Fast-forward to September, and major HPC centers, government labs, and hyperscale cloud providers are all exploring similar hybrid integration strategies, illustrating that the prediction was not just theoretical: the industry is actively positioning itself for the quantum-augmented data center era.
In short, the convergence of HPC and early fault-tolerant quantum computing is validating DCF’s 2025 foresight. The technology is no longer a curiosity: it is poised to become a key accelerator within next-generation data centers, particularly for organizations that can leverage hybrid workflows to tackle workloads previously considered infeasible.