The Impact of Liquid Cooling: Lessons from Nautilus Data Technologies and Start Campus
We conclude our article series on the current state of cooling in data centers, how organizations are adapting their cooling strategies, and why liquid cooling is no longer a “nice to have” but a necessity. This week, we’ll see liquid cooling in action supporting modern HPC workloads.
For that, we turn to the collaboration between Start Campus and Nautilus Data Technologies, which represents a pivotal advancement in addressing the increasingly intensive cooling demands of artificial intelligence (AI) and high-performance computing (HPC) workloads. As the industry confronts unprecedented heat densities, traditional cooling methods struggle to sustain efficiency and reliability. This case study outlines how an innovative liquid cooling solution not only addressed these challenges but also set new standards for data center infrastructure design and operational excellence.
While the table above illustrates the capability of liquid cooling to support high-density infrastructure, it is only the beginning: many operators have already moved to GPU technologies beyond Nvidia H100 clusters. Designs are already being implemented for emerging systems, including Nvidia GB200 and upcoming GB300 GPU clusters. Supporting these new design standards means looking to offerings that leverage larger data hall-scale and facility-scale CDU (coolant distribution unit) modules. With chipset heat loads (TDP) increasing by roughly 50% per generation, future designs must incorporate liquid cooling. Notably, the GB300 will be the first 100% liquid-cooled GPU rack, compared with today's hybrid paradigm. Partners like Nautilus are positioned to deliver on these demands with EcoCore.
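To make the 50%-per-generation figure concrete, here is a minimal sketch of how that compounding plays out for per-rack heat load. The 50 kW starting point and the function name are illustrative assumptions, not vendor data; only the growth rate comes from the text above.

```python
# Illustrative sketch: compound a per-rack heat load at the article's
# cited ~50% TDP increase per GPU generation. The 50 kW baseline is a
# hypothetical H100-class rack, not a published specification.

def projected_rack_load_kw(base_kw: float, generations: int, growth: float = 0.5) -> float:
    """Project per-rack heat load after a number of GPU refresh cycles."""
    return base_kw * (1 + growth) ** generations

for gen in range(4):
    print(f"After {gen} generation(s): {projected_rack_load_kw(50, gen):.1f} kW per rack")
```

Two refresh cycles more than double the rack's heat load, which is why facility-scale liquid cooling capacity has to be planned ahead of the hardware it will serve.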
Final Thoughts and Getting Started
The data center industry stands at a crossroads. The rise of AI, high-performance computing (HPC), and increasingly dense workloads has fundamentally altered how we think about cooling, efficiency, and scalability. The days of relying solely on traditional air-cooling systems are fading, not because they have failed, but because the demands placed on them have evolved beyond their capabilities.
As we’ve seen from Start Campus and Nautilus Data Technologies, liquid cooling is no longer an experimental concept; it’s an engineered solution delivering real-world results right now. With deployments exceeding 50 kW per rack, streamlined liquid-to-air integration, and scalable cooling architectures, these innovations prove that liquid cooling isn’t just a theoretical improvement but an operational necessity. The question is no longer whether data centers will adopt liquid cooling but when and how.
Call to Action: Time to Move Beyond the Air-Cooled Mindset
So, what does this mean for you? If your data center is still operating with an air-cooled mindset, it’s time to start planning for the future.
- Assess Your Infrastructure: What’s your current and projected power density? How will AI and high-performance workloads impact your cooling requirements?
- Identify the Right Liquid Cooling Approach: Whether it’s rear-door heat exchangers (RDHx), direct-to-chip, or immersion, understanding the best fit for your environment is critical.
- Think Beyond Today: Cooling isn’t just about efficiency. It’s about scalability, sustainability, and operational resilience. What works today won’t support the next decade of AI-driven workloads.
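The assessment steps above can be sketched as a simple triage. The density thresholds below are illustrative rules of thumb for the sake of example, not vendor guidance or industry standards; the three liquid approaches are the ones named in the list.

```python
# Hypothetical triage: map a rack's power density to a candidate cooling
# approach. Thresholds are illustrative assumptions only; real designs
# depend on facility water, airflow, and workload profiles.

def suggest_cooling(rack_kw: float) -> str:
    """Return a candidate cooling approach for a given rack density (kW)."""
    if rack_kw < 20:
        return "traditional air cooling"
    if rack_kw < 50:
        return "rear-door heat exchangers (RDHx)"
    if rack_kw < 100:
        return "direct-to-chip liquid cooling"
    return "immersion or facility-scale liquid cooling"

print(suggest_cooling(15))   # a legacy enterprise rack
print(suggest_cooling(60))   # a dense AI training rack
```

Running projected, not just current, densities through a check like this is what "thinking beyond today" looks like in practice: a rack that fits air cooling now may cross into liquid territory within one hardware refresh.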
The industry is moving forward. Don’t let legacy cooling hold your data center back. The time to start integrating liquid cooling is now — because tomorrow’s workloads are already here.
Download the full report, Survival of the Coolest: Why Liquid Cooling is No Longer Optional for HPC and AI-Driven Data Centers, featuring Nautilus Data Technologies, for exclusive content on understanding the challenges of air cooling for specific workloads.
About the Author
