Intel Partners with Baidu on Neural Network Training Chip
A pillar of Intel’s emerging AI product portfolio, its upcoming Nervana Neural Network Processor for training (NNP-T), will be developed jointly with Baidu, the Beijing-based AI and internet company often called “the Google of China,” which reported 2018 revenues of more than RMB 100 billion (roughly $15 billion).
At the Baidu Create conference in Beijing today, Intel’s Naveen Rao, corporate VP and GM of the AI Products Group, announced the collaboration, which spans the hardware and software design of the custom AI accelerator “with one purpose – training deep learning models at lightning speed.”
The announcement comes at a time when the AI chip wars (Nvidia vs. Google vs. Intel vs. AMD vs. ARM vs. AWS) are heating up in a fight for supremacy in the AI systems market, which industry watcher IDC forecast earlier this year will grow to nearly $35.8 billion by the end of 2019 and double by 2022. To date, Nvidia GPUs have held a dominant position in data- and processing-intensive ML/DL training, but the company has come under increasing competitive pressure over the past 18 months from rival companies and alternative architectures.
Intel launched its Nervana NNP AI chip for inference at the Consumer Electronics Show last January, the culmination of a development effort begun in 2017. At CES, Rao said the NNP-I, built on a 10-nanometer process and optimized for image recognition, includes Intel Ice Lake cores for general operations and neural network acceleration.
Intel did not provide many details on NNP-T other than to say it’s “a new class of efficient deep learning system hardware designed to accelerate distributed training at scale.”
The broadening Intel-Baidu partnership – which comes at a time of increasing U.S.-China trade tensions, much of it focused on technology – encompasses other Intel technologies, such as Optane DC Persistent Memory, which Intel announced in April and said delivers up to 36TB of system-level memory capacity, a 3X improvement over the previous generation of Intel Xeon Scalable platforms.
On the data security front, Intel and Baidu are working on MesaTEE, a “memory-safe function-as-a-service (FaaS)” computing framework based on Intel Software Guard Extensions technology.
Intel and Baidu also announced progress in optimizing Baidu’s PaddlePaddle deep learning framework for Xeon Scalable processors, an effort begun in 2016. The two companies also said they are exploring integration of PaddlePaddle with Intel nGraph, a framework-neutral deep neural network (DNN) model compiler that Intel open-sourced in March. “With nGraph, data scientists can write once, without worrying about how to adapt their DNN models to train and run efficiently on different hardware platforms,” Intel said.
Also today at the Baidu conference, Intel VP Gadi Singer announced the companies’ joint effort to power Baidu’s Xeye, a new AI retail camera, with Intel Movidius Myriad 2 vision processing units (VPUs), “highlighting Baidu’s plans to offer workload acceleration as a service using Intel FPGAs,” Intel said. The combination of Intel VPUs with Baidu’s ML algorithms enables the camera “to analyze objects and gestures, while also detecting people to provide personalized shopping experiences in retail settings,” according to Intel.