Equinix Manages AI Supercomputing with NVIDIA as Cisco Fills AI Infrastructure Gap

Feb. 14, 2024
New data center services aim to address the specialized skill shortages most businesses face when implementing AI technology.

Jon Lin, Equinix EVP and General Manager of Data Center Services, and Matt Hull, NVIDIA VP of Global AI Solutions, answer questions here about the companies' recently announced collaboration.

The new capabilities announced in the Equinix and NVIDIA partnership promise to significantly simplify the deployment and management of AI infrastructure for customers, with the goal of making AI more accessible and manageable.

Focusing on challenges such as data security, operational costs, and the technical complexities of back-end upgrades, the partnership aims to provide a seamless, turnkey solution for businesses looking to harness the power of generative AI.

According to Charles Meyers, president and CEO of Equinix, "Our new service provides customers a fast and cost-effective way to adopt advanced AI infrastructure that's operated and managed by experts globally."

At the same time, Cisco and NVIDIA have announced an integrated AI infrastructure offering that will also let customers simplify deployment and management.

Of this partnership, Jensen Huang, founder and CEO of NVIDIA, said, “Working closely with Cisco, we’re making it easier than ever for enterprises to obtain the infrastructure they need to benefit from AI, the most powerful technology force of our lifetime.”

When it comes to deploying NVIDIA technology, it might sound as if Equinix and Cisco are announcing offerings that compete directly with each other, but that's not the case.

NVIDIA Facts of the Case

In working with NVIDIA, Equinix is drawing on its operational experience to let customers take the NVIDIA hardware they are investing in, specifically DGX AI supercomputing infrastructure, and have it operated and maintained in an Equinix data center by specially trained teams of Equinix staff with the support of NVIDIA technologists.

The combination of data center experts and specially trained NVIDIA DGX specialists gives Equinix customers the skills they need to maximize the value of their AI investment.

Cisco, in contrast, is taking a somewhat different approach to working with NVIDIA. How does Cisco see its options?

Rather than jump on the bandwagon with companies finding ways to deploy the top-of-the-line DGX supercomputers or the H100 GPUs that are NVIDIA's core offerings, Cisco is betting that businesses will want to add AI capabilities to their networks without spending the money required to be absolutely cutting-edge.

So the Cisco approach involves the release of the Cisco and NVIDIA Integrated Data Center Solutions. As with many networking architectures released in the past, the two companies now offer a jointly validated reference architecture.

The Cisco Validated Architectures for NVIDIA are designed to simplify the deployment and management of AI clusters at scale, with use cases ranging from virtualized to containerized environments.

The M7 generation of Cisco UCS rack and blade servers now incorporates NVIDIA Tensor Core GPUs, which work for both training and inference and can also be used in advanced high-performance computing deployments.
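For readers who want a concrete picture of what "training and inference" on the same accelerator looks like in practice, the short PyTorch sketch below runs one training step and one inference pass on whatever GPU is available, falling back to CPU if none is found. It is generic, illustrative code with placeholder data and a placeholder model, not part of any Cisco or NVIDIA reference design.

```python
# Minimal, generic PyTorch sketch: one accelerator handling a training step
# and an inference pass. Illustrative only -- not Cisco/NVIDIA reference code.
import torch
import torch.nn as nn

# Use a GPU (e.g., an NVIDIA Tensor Core part) if present; otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# --- Training step on synthetic placeholder data ---
model.train()
x = torch.randn(32, 128, device=device)         # fake batch of features
y = torch.randint(0, 10, (32,), device=device)  # fake labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()

# --- Inference pass on the same device ---
model.eval()
with torch.no_grad():
    preds = model(torch.randn(8, 128, device=device)).argmax(dim=1)

print(f"device={device}, train loss={loss.item():.4f}, predictions={preds.tolist()}")
```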

Notably, unlike the DGX AI supercomputer, which uses InfiniBand interconnects, the Cisco deployments use Ethernet for interconnection, a technology very familiar to IT networking professionals.
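To make the interconnect point concrete, the hedged sketch below shows how a distributed PyTorch job can be steered onto a plain Ethernet fabric using NCCL's standard environment variables. The interface name "eth0" and the launch parameters are assumptions that will vary per environment, and nothing here is drawn from the Cisco validated designs.

```python
# Generic sketch of a multi-node PyTorch/NCCL job running over an Ethernet
# fabric instead of InfiniBand. Not Cisco/NVIDIA reference code.
# Launch with: torchrun --nnodes=<N> --nproc_per_node=<GPUs> this_script.py
import os
import torch
import torch.distributed as dist

# Standard NCCL environment variables: disable the InfiniBand transport and
# pin socket traffic to a specific Ethernet interface.
os.environ.setdefault("NCCL_IB_DISABLE", "1")
os.environ.setdefault("NCCL_SOCKET_IFNAME", "eth0")  # assumption: adjust to your NIC

def main() -> None:
    # torchrun supplies RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT, LOCAL_RANK.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Simple all-reduce across all ranks to confirm connectivity over the fabric.
    t = torch.ones(1, device=f"cuda:{local_rank}")
    dist.all_reduce(t)
    if dist.get_rank() == 0:
        print(f"all_reduce across {dist.get_world_size()} ranks -> {t.item()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```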

By offering customers a streamlined solution, both Cisco and NVIDIA should see their customers benefit, as adding AI- and ML-driven solutions becomes more practical and easier to deploy.

With scalable, automated AI cluster management, AI-driven management tools, and AI capabilities added to the Cisco Observability Platform to make better use of real-time telemetry, businesses will have better insight and clearer options for improving customer and employee digital experiences.
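As a rough illustration of the real-time GPU telemetry such platforms consume, the sketch below polls utilization, memory, and temperature through NVIDIA's NVML Python bindings. It is a generic monitoring loop, not the Cisco Observability Platform's actual API.

```python
# Generic GPU telemetry sketch using NVIDIA's NVML bindings
# (pip install nvidia-ml-py, imported as pynvml). Illustrates the kind of
# real-time signals an observability platform can consume; it is not the
# Cisco Observability Platform API.
import time
import pynvml

pynvml.nvmlInit()
try:
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
               for i in range(pynvml.nvmlDeviceGetCount())]
    for _ in range(3):  # a few sample intervals
        for i, h in enumerate(handles):
            util = pynvml.nvmlDeviceGetUtilizationRates(h)  # % GPU / memory activity
            mem = pynvml.nvmlDeviceGetMemoryInfo(h)         # bytes used / total
            temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
            print(f"gpu{i}: util={util.gpu}% mem={mem.used / mem.total:.0%} temp={temp}C")
        time.sleep(5)
finally:
    pynvml.nvmlShutdown()
```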

It's All About the Skills

While the technology tends to take center stage as the bright and shiny part of both announcements, the biggest advantage customers will get from either company is help closing the skills gap that stands between them and deploying AI to its greatest benefit.

The learning curve and the cost of acquiring the knowledge needed to get the most from an AI infrastructure are not insignificant, and with both of these offerings the vendors are taking that workload off their customers' hands.

For the vendors, the knowledge gained from their customers' operations will allow them to continually fine-tune their own services, benefiting all current and future users of their offerings.

 


About the Author

David Chernicoff

David Chernicoff is an experienced technologist and editorial content creator who sees the connections between technology and business, works out how to get the most from both, and explains the needs of business to IT and of IT to business.
