AI Ecosystem Coalesces Around Hardware Accelerator
Demand for hardware accelerators used in machine learning, deep learning and HPC applications is driving efforts to establish specifications that account for the growing bandwidth and interconnect flexibility required by emerging AI workloads.
With that in mind, the Open Compute Project announced this week that Chinese AI developer Baidu (NASDAQ: BIDU), along with Facebook (NASDAQ: FB) and Microsoft (NASDAQ: MSFT), has contributed design specifications for an Open Accelerator Module (OAM). The spec would help define an “open-hardware compute accelerator form factor” and its required interconnects, the group said.
The trio joins a growing list of companies forging an AI accelerator specification as the technology rapidly evolves. The Open Compute Project noted that the pace of innovation and the growing number of AI accelerator vendors have exposed technical challenges and design complexities associated with proprietary AI hardware frameworks.
Those complications have translated into delays of up to 12 months for integrating hardware accelerators into new systems. “This delay prevents quick adoption of new competitive AI accelerators,” the standards group noted.
Hence, Baidu, Facebook and Microsoft are seeking to break the logjam by authoring a new OAM spec in collaboration with other stakeholders such as Alibaba, Google and Tencent. AI chip makers that include AMD, Intel, Nvidia and Xilinx are also contributing to the effort designed to create a new form factor that optimizes bandwidth and interconnects.
Huawei, IBM, Lenovo and other OEMs are also pitching in to advance the hardware accelerator spec.
To meet growing market demand for AI accelerators, initial efforts have focused on available industry-standard form factors such as the PCI Express add-in card. Those early solutions are seen as unable to meet the demands of emerging AI workloads since they lack the necessary bandwidth and interconnect flexibility.
“The OAM design specification defines the mezzanine form factor and common specifications for a compute accelerator module,” the group said. “In contrast with a PCIe add-in card form factor, the mezzanine module form factor of OAM facilitates scalability across accelerators by simplifying the system solution when interconnecting high-speed communication links among modules.”
“The OAM specification, along with the baseboard and enclosure infrastructure, will speed up the adoption of new AI accelerators and will establish a healthy and competitive ecosystem,” added Bill Carter, CTO of the Open Compute Project Foundation.