NVIDIA to Acquire Mellanox in $6.9 Billion HPC Deal

March 11, 2019
In a deal underscoring the growing importance of data center networking, technical computing heavyweight NVIDIA has agreed to pay $6.9 billion to acquire networking specialist Mellanox.

The transaction has strategic implications for the data center and high performance computing (HPC) sectors, as chipmaker Intel was also rumored to be among the bidders for Mellanox, a leader in interconnect technology that ties together computing resources. Mellanox pioneered the InfiniBand interconnect technology, which along with its high-speed Ethernet products is now used in over half of the world’s fastest supercomputers and in many leading hyperscale datacenters.

NVIDIA said the deal will position the company to optimize data-intensive workloads across the entire computing, networking and storage stack, delivering higher-performance, lower-cost solutions for customers.

“The data center has become the most important computer in the world,” said Jensen Huang, founder and CEO of NVIDIA. “The emergence of AI and data science, as well as billions of simultaneous computer users, is fueling skyrocketing demand on the world’s datacenters. Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant datacenter-scale compute engine. The computer no longer starts and ends at the server.”

NVIDIA’s graphics processing unit (GPU) technology has been one of the biggest beneficiaries of the rise of specialized computing, gaining traction with workloads in supercomputing, artificial intelligence (AI) and connected cars. NVIDIA has been investing heavily in AI innovation, which it sees as a pervasive technology trend that will bring its GPU technology into every area of the economy and society.

Focus on Interconnects

Interconnects are network components that allow compute nodes to communicate with each other. Ethernet and InfiniBand have been the leading interconnect technologies in high-performance computing.
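As a rough illustration (not drawn from the article or either company's announcement), the short C sketch below uses MPI, the standard message-passing API in HPC, to send a value from one rank to another. When the ranks run on different nodes, that message traverses the cluster's InfiniBand or Ethernet interconnect.

/* Minimal MPI point-to-point sketch (illustrative only). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    double payload = 3.14159;
    if (rank == 0) {
        /* Rank 0 sends a value to rank 1; on a multi-node job this
         * message crosses the cluster interconnect. */
        MPI_Send(&payload, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&payload, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %f from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun -np 2 across two nodes, the example exercises exactly the node-to-node path on which interconnect vendors compete.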

NVIDIA founder and CEO Jensen Huang. (Photo: NVIDIA Corp.)

In 2014 NVIDIA introduced NVLink, an interconnect optimized to connect GPUs to CPUs, or to connect the nodes of an all-GPU system. NVIDIA also has a long history of collaboration with Mellanox. The two companies have worked together on many HPC projects, including the world’s two fastest supercomputers, Sierra and Summit, operated by the U.S. Department of Energy. Many of the world’s top cloud service providers also use both NVIDIA GPUs and Mellanox interconnects.
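To make the NVLink point concrete, here is a hedged C sketch using the CUDA runtime API to enable peer-to-peer access between two GPUs; on NVLink-connected systems that peer traffic travels over NVLink rather than PCIe. The device indices 0 and 1 are illustrative assumptions, not a configuration from the article.

/* Sketch: enabling GPU-to-GPU peer access with the CUDA runtime API. */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    int can_access = 0;
    /* Check whether device 0 can directly address memory on device 1. */
    cudaDeviceCanAccessPeer(&can_access, 0, 1);

    if (can_access) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  /* flags must be 0 */
        printf("Peer access enabled between GPU 0 and GPU 1.\n");
    } else {
        printf("GPUs 0 and 1 cannot access each other directly.\n");
    }
    return 0;
}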

“We share the same vision for accelerated computing as NVIDIA,” said Eyal Waldman, founder and CEO of Mellanox. “Combining our two companies comes as a natural extension of our longstanding partnership and is a great fit given our common performance-driven cultures. This combination will foster the creation of powerful technology and fantastic opportunities for our people.”

The deal will be closely watched by Wall Street, which has been keenly focused on NVIDIA’s progress in the data center sector, where Intel CPUs have long been the dominant compute platform. In recent years, NVIDIA’s stock performance has been buffeted by sales of its GPUs to cryptocurrency specialists, whose buying patterns have fluctuated wildly along with the price of bitcoin and other major cryptocurrencies.

NVIDIA plans to acquire common shares of Mellanox for $125 per share in cash, representing a total enterprise value of approximately $6.9 billion, and to fund the acquisition through cash on its balance sheet. Once complete, the combination is expected to be immediately accretive to NVIDIA’s non-GAAP gross margin, non-GAAP earnings per share and free cash flow. The transaction has been approved by both companies’ boards of directors and is expected to close by the end of calendar year 2019, subject to regulatory approvals.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
