NVIDIA to Acquire Mellanox in $6.9 Billion HPC Deal

March 11, 2019
In a deal underscoring the growing importance of data center networking, technical computing heavyweight NVIDIA has agreed to pay $6.9 billion to acquire networking specialist Mellanox.

The transaction has strategic implications for the data center and high performance computing (HPC) sectors, as chipmaker Intel was also rumored to be among the bidders for Mellanox, a leader in interconnect technology that ties together computing resources. Mellanox pioneered the InfiniBand interconnect technology, which along with its high-speed Ethernet products is now used in over half of the world’s fastest supercomputers and in many leading hyperscale datacenters.

NVIDIA said the deal will position the company to optimize data-intensive computing workloads across the entire computing, networking and storage stack, delivering higher performance and lower-cost solutions for customers.

“The data center has become the most important computer in the world,” said Jensen Huang, founder and CEO of NVIDIA. “The emergence of AI and data science, as well as billions of simultaneous computer users, is fueling skyrocketing demand on the world’s datacenters. Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant datacenter-scale compute engine. The computer no longer starts and ends at the server.”

NVIDIA’s graphics processing unit (GPU) technology has been one of the biggest beneficiaries of the rise of specialized computing, gaining traction with workloads in supercomputing, artificial intelligence (AI) and connected cars. NVIDIA has been investing heavily in AI innovation, which it sees as a pervasive technology trend that will bring its GPU technology into every area of the economy and society.

Focus on Interconnects

Interconnects are network components that allow compute nodes to communicate with each other. Ethernet and InfiniBand have been the leading interconnect technologies in high-performance computing.

NVIDIA founder and CEO Jensen Huang. (Photo: NVIDIA Corp.)

In 2014 NVIDIA introduced NVLink, a high-speed interconnect designed to link GPUs to CPUs and to connect GPUs directly to one another. NVIDIA also has a long history of collaboration with Mellanox. The two companies have worked together on many HPC projects, including the world’s two fastest supercomputers, Sierra and Summit, operated by the U.S. Department of Energy. Many of the world’s top cloud service providers also use both NVIDIA GPUs and Mellanox interconnects.

“We share the same vision for accelerated computing as NVIDIA,” said Eyal Waldman, founder and CEO of Mellanox. “Combining our two companies comes as a natural extension of our longstanding partnership and is a great fit given our common performance-driven cultures. This combination will foster the creation of powerful technology and fantastic opportunities for our people.”

The deal will be closely watched by Wall Street, which has been keenly focused on NVIDIA’s progress in the data center sector, where Intel CPUs have long been the dominant compute platform. In recent years, NVIDIA’s stock performance has been buffeted by sales of its GPUs to cryptocurrency miners, whose buying patterns have fluctuated wildly along with the price of bitcoin and other major cryptocurrencies.

NVIDIA plans to acquire common shares of Mellanox for $125 per share in cash, representing a total enterprise value of approximately $6.9 billion, and to fund the acquisition through cash on its balance sheet. Once complete, the combination is expected to be immediately accretive to NVIDIA’s non-GAAP gross margin, non-GAAP earnings per share and free cash flow. The transaction has been approved by both companies’ boards of directors and is expected to close by the end of calendar year 2019, subject to regulatory approvals.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
