The Open Compute Project's blockbuster 2024 OCP Global Summit (Oct. 15-17) drew more than 7,000 attendees eager to learn about the latest liquid cooling developments in the wake of the past two years' data center AI tsunami.
Fittingly enough, as one of the event's central highlights, Meta Engineering shared its soon-to-be-released Catalina rack design for high-density AI computing.
Catalina is built to support the latest NVIDIA GB200 Grace Blackwell Superchip to ensure capacity for the growing demands of modern AI infrastructure.
Meta notes that growing power demands from GPUs mean that open rack solutions must support higher power capacity. Catalina embodies this support: the platform is built on Orv3, a high-power rack (HPR) capable of delivering up to 140 kW.
As unveiled to the OCP technical community, Meta billed Catalina as its newest high-powered rack for AI workloads: a full rack-scale solution based on the NVIDIA Blackwell platform, with a design focused on modularity and flexibility.
The full platform is liquid-cooled and consists of a power shelf that supports a compute tray, a switch tray, the Orv3 HPR, the associated Wedge 400 fabric switch, a management switch, a battery backup unit, and a rack management controller.
Meta said it aims for Catalina’s modular, open design to empower others to customize the rack to meet their specific AI workloads, while leveraging both existing and emerging industry standards.
"Scaling AI at this speed requires open hardware solutions," the Meta Engineering team wrote on its blog. "Developing new architectures, network fabrics, and system designs is the most efficient and impactful when we can build it on principles of openness. By investing in open hardware, we unlock AI’s full potential and propel ongoing innovation in the field."