Cologix Data Center Pact Supercharges AI for the Columbus, Ohio Region

Partnership with Lambda and Supermicro will see the deployment of the first NVIDIA HGX B200-based AI clusters in the region.
June 11, 2025
6 min read

As demand for AI infrastructure expands beyond traditional tech hubs, new regional alliances are reshaping the digital landscape. This latest partnership continues to cement Ohio as a significant location for data center development and AI deployment, bringing the edge to middle America.

The partnership between the three industry leaders combines Cologix’s strong data center footprint, Supermicro’s high-density, sustainable hardware, and Lambda’s AI-native cloud services to create a powerful, turnkey AI/HPC platform based in the Midwest. It demonstrates that cutting-edge development and service offerings are not limited to providers in better-known data center hubs on the coasts or in Texas.

Location Meets Performance

Working with Lambda for software and Supermicro for hardware, the companies will deploy the first NVIDIA HGX B200-based AI clusters in Columbus, Ohio. Using the Lambda 1-Click Cluster model, customers will be able to deploy and use the underlying NVIDIA hardware with minimal effort.

Geographically, Columbus sits roughly midway between Chicago, New York, Atlanta, and Washington, D.C. By hosting AI clusters there, enterprises can serve a large portion of the U.S. population within 10–20 ms of latency, which is ideal for real-time AI services.
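As a rough sanity check on that claim, the back-of-envelope sketch below estimates fiber round-trip times from Columbus to those metros. The great-circle distances and the 1.5x routing-overhead factor are assumptions for illustration, not measured network paths; roughly 200 km per millisecond is a typical figure for light traveling in optical fiber.

```python
# Back-of-envelope round-trip latency estimates from Columbus.
# Distances are approximate great-circle figures (assumptions), and the
# 1.5x factor is a rough allowance for real-world fiber routing overhead.

FIBER_KM_PER_MS = 200        # light travels roughly 200 km per millisecond in fiber
ROUTING_OVERHEAD = 1.5       # indirect paths, regeneration, switching delays

approx_distance_km = {       # straight-line estimates, not actual routes
    "Chicago": 440,
    "Washington, D.C.": 530,
    "Atlanta": 710,
    "New York": 770,
}

for city, km in approx_distance_km.items():
    one_way_ms = km / FIBER_KM_PER_MS
    rtt_ms = 2 * one_way_ms * ROUTING_OVERHEAD
    print(f"Columbus -> {city}: ~{rtt_ms:.1f} ms round trip")
```

Even with the routing overhead baked in, the estimates land at or below the 10–20 ms window, consistent with Columbus’s positioning as a low-latency midpoint.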

Robert Brooks IV, VP of Revenue at Lambda, pointed out the importance of location and performance, saying:

Columbus is a thriving hub for AI innovation, from manufacturing to healthcare. With Supermicro’s trusted systems and Cologix’s reliable infrastructure, we’re giving Lambda’s customers in the Midwest the fastest path to production-ready AI—and the added flexibility to integrate with hyperscaler environments.

It Starts with a Significant Data Center Backbone

Cologix is a North America-wide, network-neutral and hyperscale edge data center provider with a substantial campus in the Columbus, Ohio area. With four current data centers (COL1–COL4) and a fifth, AI-dedicated data center under construction, the company currently offers over 500,000 sq ft of data center space with 80 MW of power in the region. The data centers are carrier-neutral, hosting more than 50 carriers, in addition to direct AWS and Google Cloud connectivity options.

COL4, the facility where the AI clusters will be deployed, is colocated with the other Cologix data centers on the Columbus campus. It is a 256,000 sq ft facility offering up to 33 MW of power across three data halls. Each critical IT load cabinet has redundant power feeds, and the facility offers chilled water with free cooling. More than 50 unique networks are available through the Cologix Meet-Me Room.

The four current data centers (COL1, COL2, COL3, and COL4) are connected via diverse, private dark fiber paths operated and maintained by Cologix. This allows campus-wide virtual infrastructure, enabling workloads to be split, scaled, or migrated between buildings with minimal latency.

This connectivity is based on dual-path, high-capacity fiber rings that ensure redundancy and non-blocking throughput, supporting high availability for GPU clusters. The architecture allows the new Lambda AI clusters deployed at COL4 to access additional compute, storage, or backup nodes hosted in COL1–COL3.

Additionally, this enables dynamic scaling of LLM training or AI inference workloads without relocating data or reconfiguring networking. Inter-site latency is less than 0.5 ms, which is suitable for distributed AI training frameworks such as DeepSpeed, Megatron, and Horovod that rely on fast multi-node communication.
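To make concrete why sub-millisecond inter-site latency matters, the hedged sketch below shows the kind of multi-node collective operation these frameworks build their training loops on, using PyTorch’s torch.distributed with the NCCL backend. The hostname, node count, and GPU count in the launch command are illustrative placeholders, not details of the Cologix deployment.

```python
# Minimal multi-node all-reduce check with torch.distributed (NCCL backend),
# the collective primitive that DeepSpeed, Megatron, and Horovod layer their
# multi-node training on. Launch with torchrun; the endpoint and counts below
# are placeholders for illustration:
#
#   torchrun --nnodes=2 --nproc-per-node=8 \
#            --rdzv-backend=c10d --rdzv-endpoint=col4-head:29500 \
#            allreduce_check.py

import os

import torch
import torch.distributed as dist


def main():
    # torchrun sets RANK, WORLD_SIZE, MASTER_ADDR/PORT, and LOCAL_RANK.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # Every rank contributes its rank number; all_reduce sums the tensor
    # across all GPUs on all nodes, the step most sensitive to latency.
    x = torch.full((1024,), float(dist.get_rank()), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        expected = sum(range(dist.get_world_size()))
        print(f"all_reduce result {x[0].item():.0f} (expected {expected})")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

In a real training job, collectives like this run on every gradient step, so even small per-operation latency differences compound quickly over the course of a run.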

Site Development Continues

Even with the significant deployment of the Cologix/Supermicro/Lambda hardware and service model in COL4, Cologix continues site development with the 60,000 sq ft COL5 data center.

This single-story, purpose-built facility will commission with 25 MW of capacity, with plans for an additional facility that will add 95 MW in the future. Supporting racks up to 52U, the data center is designed for power loads of 5 kW to 100 kW per rack, with a choice of air or liquid cooling.

Chris Heinrich, Chief Revenue Officer of Cologix, is bullish not just on the latest technology, but also on his company’s Midwest location, saying:

Columbus is one of the fastest-growing digital corridors in the country and this launch brings coastal-level AI infrastructure into the region. Our collaboration with Lambda and Supermicro gives regional enterprises a powerful edge, combining low-latency access, dense interconnection and ready-to-deploy clusters, giving teams the ability to move faster and scale smarter.

What is a 1-Click Cluster?

Lambda’s signature offering, the 1-Click Cluster, is a turnkey, preconfigured GPU cluster designed to make it fast and easy for enterprises, researchers, and developers to spin up AI infrastructure for training and inference workloads, supporting development with minimal setup and no infrastructure to manage.

The cluster is a fully managed, production-grade AI compute environment provisioned instantly via Lambda’s platform. Each cluster is GPU-accelerated using NVIDIA’s latest hardware (e.g., H100, B100/HGX B200) and comes with preinstalled deep learning frameworks, software stacks, and orchestration tools.
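As a rough illustration of what programmatic provisioning can look like, the hedged sketch below requests GPU capacity through Lambda’s public cloud API from Python. The endpoint and request fields follow Lambda’s documented instance-launch API, but the region name, instance type, and SSH key shown are placeholder assumptions, and 1-Click Cluster provisioning may use a different workflow; Lambda’s current documentation is the authoritative reference.

```python
# Hedged sketch: requesting Lambda GPU capacity via the public cloud API.
# The endpoint and field names follow Lambda's documented instance-launch
# API; the region, instance type, and SSH key values are placeholders.

import os

import requests

API_KEY = os.environ["LAMBDA_API_KEY"]          # API key issued in the Lambda dashboard
BASE_URL = "https://cloud.lambdalabs.com/api/v1"

payload = {
    "region_name": "us-midwest-1",              # placeholder region identifier
    "instance_type_name": "gpu_8x_b200_sxm",    # placeholder instance type name
    "ssh_key_names": ["my-ssh-key"],            # an SSH key already registered in the account
}

response = requests.post(
    f"{BASE_URL}/instance-operations/launch",
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())                          # the API returns the launched instance IDs
```

The web interface and CLI mentioned in the next section presumably wrap the same kind of API call, which is what makes the one-click experience possible.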

Lambda GFC In Play

The cluster orchestration design makes it ready to use in seconds through a web interface, CLI, or API. Significant customization is possible via Lambda’s GPU Flex Commitment (GFC) model.

GFC is Lambda’s compute consumption model. It grants a customer organization access to the company’s entire cloud portfolio and allows, among other things, the flexibility to move to the latest generation of NVIDIA AI hardware as Lambda makes it available within its services.

The model allows customers to pre-train their models at scale, accessing 16 to 1,500 Blackwell GPUs with a single click. The real-time inference service can serve up to 10,000 tokens/sec, based on customer needs.

Supermicro Continues Front and Center with Next-Gen AI Hardware

Despite a few hiccups in 2024, Supermicro continues to be a significant developer and provider of high-end AI cluster hardware and a regular presence on stage at NVIDIA’s annual conferences. Its rack-scale systems (up to 96 GPUs in a 52U rack) with advanced cooling maximize performance while minimizing space and power footprint.

Supermicro’s role in the AI hardware landscape goes far beyond simple participation; the company is an essential ecosystem partner for NVIDIA, often among the first to market with new platforms based on NVIDIA's latest GPUs and networking technologies. This close alignment has positioned Supermicro as a go-to systems integrator for enterprises and cloud providers seeking to deploy AI at scale.

The collaboration is particularly visible in NVIDIA’s HGX platform rollouts, where Supermicro offers a variety of turnkey solutions that integrate NVIDIA H100 and A100 GPUs with NVLink, NVSwitch, and NVIDIA Quantum or Spectrum networking. These designs not only accelerate training and inference workloads, but also reflect a shared focus on thermal and power efficiency, critical concerns as AI cluster density intensifies.

Supermicro has also been instrumental in bringing NVIDIA’s MGX modular reference architecture to market. MGX enables customers to customize systems across a range of accelerators, CPUs, and form factors, and Supermicro’s early and aggressive adoption of the platform underlines its strategic alignment with NVIDIA’s data center vision. In many cases, Supermicro systems serve as the physical backbone for generative AI services, from foundational model development to low-latency edge inference.

As AI deployments continue to evolve, Supermicro is helping shape the practical rollout of NVIDIA’s GPU-driven infrastructure at every level, from hyperscale environments to regional colocation facilities.

Working closely with NVIDIA, Lambda, and Cologix, the company clearly understands its role, with Charles Liang, President and CEO, saying:

Supermicro’s collaboration with Lambda and Cologix delivers real-world impact. Our NVIDIA HGX B200-based hardware enables the highest performance AI workloads in a space- and energy-efficient footprint. Together, we’re bringing those benefits to businesses in the Midwest and beyond.

 


About the Author

David Chernicoff

David Chernicoff is an experienced technologist and editorial content creator with the ability to see the connections between technology and business, to get the most from both, and to explain the needs of business to IT and of IT to business.