Baidu and Inspur team on open accelerator hardware

Inspur's concept for an open accelerator module was used in a collaboration with Baidu on the X-MAN 4.0, which packs eight accelerators into a liquid-cooled box. (Inspur)

Inspur recently announced two open computing systems for AI, including a liquid-cooled rack-scale module for deep neural network applications, called the X-MAN 4.0, developed with Baidu.

Inspur also announced a 21-inch full-rack reference system designed to manage inter-module communications, called the OAI UBB system.

The two products are primarily designed to help internet companies reduce the complexity and time involved in integrating AI accelerators.

OAI refers to Open Accelerator Infrastructure, a standard developed by a working group inside the Open Compute Project (OCP). UBB refers to Universal Baseboard, another specification developed by Inspur and OCP partners.

The X-MAN 4.0 is also OAI compliant, making it the world’s first rack-scale product using that spec, Inspur said in a release. The OAI spec has been led by Baidu, Facebook and Microsoft in the OCP community. It is largely designed to tame the growing design complexity of AI accelerator systems; integrating an AI accelerator can currently take internet companies up to a year, Inspur said.

Ultimately, OAI shortens the time to market for installing an accelerator system, and the two new products can help customers speed up innovation. Baidu, based in Beijing, is one of the world’s biggest AI and internet companies with a top-ranked search engine. Inspur, based in Jinan, China, has a reputation in cloud computing and servers and more recently in AI. It is third in global server shipments behind Hewlett Packard Enterprise and Dell, according to IDC and other analysts.

Baidu and Inspur said the new fourth-generation X-MAN can outperform traditional GPU servers at lower cost. It houses eight AI accelerators based on Intel Nervana’s Spring Crest chip in a single box and can be scaled up to 32 accelerators in a rack. The accelerator resources can be specified in software, meaning different vendors can be chosen to support AI applications on different workloads.

The OAI UBB system will support disparate network architectures, including two OCP interconnect topologies: Hybrid Cube and Fully Connected.
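The practical difference between the two topologies is the trade-off between link count and hop distance. As a rough, hedged illustration (not the OAI spec's actual wiring diagrams), assume "Hybrid Cube" approximates a 3-D cube among eight accelerator modules and "Fully Connected" gives every pair a direct link; the link counts can then be sketched as:

```python
from itertools import combinations

def hypercube_links(n_bits=3):
    """Links of a 3-D cube over 8 nodes: IDs that differ in exactly one bit.
    This is an assumed approximation of the Hybrid Cube topology."""
    nodes = range(2 ** n_bits)
    return [(a, b) for a, b in combinations(nodes, 2)
            if bin(a ^ b).count("1") == 1]

def fully_connected_links(n=8):
    """Fully Connected: every accelerator pair gets a direct link."""
    return list(combinations(range(n), 2))

print(len(hypercube_links()))        # 12 links for the cube
print(len(fully_connected_links()))  # 28 links, all-to-all
```

Fully Connected minimizes hops (one per pair) at the cost of more links per module, while the cube-style topology keeps per-module link counts low and routes some traffic through intermediate accelerators.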

RELATED: CEVA announces AI core and support for auto, robot vision