Nvidia pushes AI hardware for workgroups, even in the home

Nvidia announced AI supercomputing innovations on Monday, including a graphics processing unit with double the memory of its predecessor and a petascale workgroup server.

The workgroup server, Nvidia DGX Station A100, offers machine learning and data science capabilities for corporate offices, research facilities and even home offices, the company said. It delivers 2.5 petaflops of AI performance from four A100 Tensor Core GPUs with up to 320 GB of combined GPU memory.

The DGX system can be plugged in anywhere, said Charlie Boyle, general manager of DGX at Nvidia.

Nvidia DGX Station workgroup server

DGX Station also supports Nvidia’s Multi-Instance GPU technology, which can split the system into as many as 28 separate GPU instances so multiple users can run jobs in parallel.
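To give a sense of how several users might share one of these systems, here is a minimal, hypothetical Python/PyTorch sketch of a job confined to a single MIG instance. It is not Nvidia's own tooling: it assumes an administrator has already partitioned the GPUs into MIG instances, the UUID shown is a placeholder, and PyTorch with CUDA support is assumed to be installed.

```python
# Hypothetical sketch: pinning one job to a single MIG slice of an A100 so
# several users can share the hardware in parallel. Assumes MIG instances
# have already been created by an administrator (e.g. with nvidia-smi).
import os

# Each job is pointed at its own MIG instance by UUID; the value below is a
# placeholder to be replaced with a real UUID reported by `nvidia-smi -L`.
os.environ.setdefault("CUDA_VISIBLE_DEVICES",
                      "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx")

import torch  # imported after CUDA_VISIBLE_DEVICES is set

device = torch.device("cuda:0")  # the MIG slice appears as an ordinary CUDA device
props = torch.cuda.get_device_properties(device)
print(f"Running on {props.name} with {props.total_memory / 1e9:.1f} GB of memory")

# A small workload that stays within the slice's compute and memory budget.
x = torch.randn(4096, 4096, device=device)
print((x @ x).norm().item())
```

Because each MIG instance has its own dedicated compute and memory, a job pinned this way cannot starve the other 27 instances on the box.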

Nvidia named a series of DGX Station users, including BMW Group Production, Lockheed Martin and NTT Docomo. Pacific Northwest National Laboratory is using the servers to conduct federally funded research in support of national security, while DFKI in Germany is building models such as computer vision systems for emergency responders.

Lockheed is developing AI models that use sensor data and service logs to predict maintenance needs, while NTT Docomo is developing AI-driven services such as image recognition.

The new GPU, the Nvidia A100 80GB, has twice the memory of its predecessor, which was introduced six months ago, and delivers more than 2 terabytes per second of memory bandwidth. It can be used in the DGX Station system as well as in DGX A100 systems.
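For a rough sense of scale, the back-of-the-envelope sketch below uses only the figures quoted above to estimate how long a single full pass over the card's 80 GB of memory would take at roughly 2 TB/s. It is an illustration of the published numbers, not a measured benchmark.

```python
# Back-of-the-envelope arithmetic from the quoted specs; illustration only.
memory_gb = 80          # capacity of the A100 80GB GPU
bandwidth_gb_s = 2000   # roughly 2 terabytes per second of memory bandwidth

seconds_per_full_pass = memory_gb / bandwidth_gb_s
print(f"One full pass over {memory_gb} GB takes about "
      f"{seconds_per_full_pass * 1000:.0f} ms")   # ~40 ms
```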

Dell, Hewlett Packard Enterprise, Lenovo and other server makers are expected to use the A100 80GB GPU in the first half of 2021. Nvidia said it will be useful both for AI training, which the extra memory speeds up, and for inference with models such as automatic speech recognition.

The company also announced the seventh generation of Mellanox 400G InfiniBand for fast networking, which provides 400 gigabits per second of data throughput, double that of the previous generation.
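As a quick illustration of what that link rate means, the hypothetical calculation below converts 400 gigabits per second into bytes and estimates the ideal time to move a DGX Station A100's 320 GB of GPU memory over one such link, ignoring protocol overhead and encoding.

```python
# Idealized throughput arithmetic for a 400 Gb/s InfiniBand link; ignores
# protocol overhead and encoding, so real transfers will take longer.
link_gbit_s = 400
link_gbyte_s = link_gbit_s / 8          # 50 GB/s of raw data throughput
payload_gb = 320                        # e.g. a DGX Station A100's total GPU memory

print(f"{link_gbit_s} Gb/s is {link_gbyte_s:.0f} GB/s; moving {payload_gb} GB "
      f"takes at least {payload_gb / link_gbyte_s:.1f} s")   # ~6.4 s
```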

Nvidia said it will offer more insights online at 3 p.m. PT Monday during SC20.
