Project Brainwave: Intel FPGAs Accelerate Microsoft’s AI Hardware

Aug. 23, 2017
This week Microsoft unveiled Project Brainwave, a deep learning acceleration platform that represents the latest fruit of its collaboration with Intel on FPGA-based computing.

The artificial intelligence arms race continues, as the largest tech companies explore new ways to accelerate AI workloads for their cloud platforms. The appetite for more computing horsepower is following several tracks, with major investment in graphics processing units (GPUs) as well as custom ASIC chips.

Microsoft has been a leader in using FPGAs (Field Programmable Gate Arrays) to accelerate its cloud and AI workloads. This week Microsoft unveiled Project Brainwave, a deep learning acceleration platform based on its collaboration with Intel on FPGA computing.

Microsoft says Project Brainwave represents a “major leap forward” in cloud-based deep learning performance, and intends to bring the technology to its Microsoft Azure cloud computing platform.

“We designed the system for real-time AI, which means the system processes requests as fast as it receives them, with ultra-low latency,” writes Doug Burger, a Microsoft Distinguished Engineer, in a blog post. “Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users.”

Real-Time Deep Learning

Tuesday’s announcement by Microsoft at the Hot Chips 2017 conference fleshed out the details of an approach that the company described in broad terms at its Build conference in April. Microsoft says its new approach, which it calls Hardware Microservices, will allow deep neural networks (DNNs) to run in the cloud with no software in the loop, resulting in large advances in speed and efficiency.

FPGAs are semiconductors that can be reprogrammed to perform specialized computing tasks, allowing users to tailor compute power to specific workloads or applications. FPGAs can serve as coprocessors that accelerate CPU workloads, an approach already used in supercomputing and HPC. Intel gained its FPGA technology through its $16.7 billion acquisition of Altera, completed in late 2015.

“We exploit the flexibility of Intel FPGAs to incorporate new innovations rapidly, while offering performance comparable to, or greater than, many ASIC-based deep learning processing units,” said Burger.

Microsoft is using Intel Stratix 10 FPGAs as the hardware accelerator in its Brainwave platform. Microsoft describes its approach as using a “soft” DNN processing unit (or DPU), synthesized onto commercially available FPGAs. Microsoft says this approach provides flexibility and the ability to rapidly implement changes as AI technology advances.

Microsoft’s Project Brainwave hardware, which leverages Intel Stratix 10 FPGAs. (Photo: Microsoft)

“By attaching high-performance FPGAs directly to our datacenter network, we can serve DNNs as hardware microservices, where a DNN can be mapped to a pool of remote FPGAs and called by a server with no software in the loop,” Burger explained. “This system architecture both reduces latency, since the CPU does not need to process incoming requests, and allows very high throughput, with the FPGA processing requests as fast as the network can stream them.”
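To make the contrast concrete, here is a minimal sketch of the serving pattern Burger describes. This is purely illustrative and not Microsoft's actual stack; the names `FpgaPool`, `infer`, and `serve_dnn` are hypothetical, invented for this example.

```python
# Illustrative sketch of the "hardware microservice" idea: a DNN mapped to a
# pool of network-attached FPGAs, with requests streamed straight to the pool
# rather than queued and batched by a host CPU. All names are hypothetical.

class FpgaPool:
    """Models a pool of remote FPGAs that one DNN has been mapped onto."""

    def __init__(self, num_fpgas: int):
        self.num_fpgas = num_fpgas

    def infer(self, request: str) -> str:
        # In the Brainwave design, the request streams to an FPGA over the
        # datacenter network with no server software in the loop; here we
        # simply return a stand-in result.
        return f"result-for-{request}"


def serve_dnn(pool: FpgaPool, requests: list[str]) -> list[str]:
    # Requests are handled one at a time as they arrive ("real-time AI"):
    # there is no batching step, so single-request latency stays low.
    return [pool.infer(r) for r in requests]


pool = FpgaPool(num_fpgas=4)
print(serve_dnn(pool, ["query-1", "query-2"]))
```

The point of the pattern is that batching, the usual trick for keeping accelerators busy, trades latency for throughput; streaming each request directly to a pooled FPGA avoids that trade-off.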

Project Brainwave, leveraging the Intel Stratix 10 technology, demonstrated over 39 teraflops of achieved performance on a single request, according to Microsoft and Intel. Brainwave is currently being used in Microsoft’s Bing search engine, but the company hopes to deploy it on its Azure cloud service.

“In the near future, we’ll detail when our Azure customers will be able to run their most complex deep learning models at record-setting performance,” said Burger. “With the Project Brainwave system incorporated at scale and available to our customers, Microsoft Azure will have industry-leading capabilities for real-time AI.”

Chinese Tech Firms Adopt AMD EPYC Servers

AMD also had news at the Hot Chips event, announcing that Chinese tech titans Tencent and JD.com plan to deploy EPYC-based servers in their cloud and e-commerce operations. The wins are a signal of progress for AMD, which recently re-entered the data center market in earnest.

Tencent Cloud said that it plans to introduce AMD EPYC-based 2P cloud servers with up to 64 processor cores before the end of 2017. JD.com also committed to future adoption of EPYC servers, but did not set a timeline.

“To continue as a leading provider of high-performance and high-value cloud services, Tencent needs to adopt the most advanced infrastructure and the chip industry’s latest achievements,” said Sage Zou, senior director of Tencent Cloud. “Tencent Cloud is continuously seeking more cores, more I/O interfaces, more secure hardware features and improved total cost of ownership for server hardware products.”

“By partnering with these market leaders, AMD is bringing choice and competition to one of the fastest growing technology markets in the world,” said Forrest Norrod, senior vice president and general manager, Enterprise, Embedded and Semi-Custom products, AMD.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
