Nvidia launches new AI embedded module, runs the table on MLPerf

Nvidia on Wednesday announced Jetson Xavier NX, a small embedded computer module for artificial intelligence workloads inside edge devices such as small commercial robots, airborne drones with multiple cameras, high-resolution cameras on assembly lines, portable medical devices and industrial IoT equipment.

The NX is smaller than a credit card (70x45 mm) yet delivers up to 21 trillion operations per second (TOPS) at 15 watts and can run multiple neural networks in parallel for AI workloads. It runs the same CUDA-X AI software architecture as the three other Jetson family products already on the market.
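The headline capability is serving several networks concurrently on one GPU. As a rough illustration of that idea only, and not Nvidia's Jetson-specific TensorRT or DeepStream tooling, the sketch below dispatches two networks on separate CUDA streams in PyTorch; the model choices and input shapes are arbitrary placeholders.

```python
# Minimal sketch: two neural networks dispatched concurrently on one GPU
# via separate CUDA streams. Illustrative only -- not Nvidia's Jetson
# deployment stack; model choices and batch sizes are placeholders.
import torch
import torchvision.models as models

device = torch.device("cuda")

# Two independent networks, e.g. a classifier and a second vision model.
classifier = models.resnet18().to(device).eval()
second_net = models.mobilenet_v2().to(device).eval()

# One dummy input batch per network, standing in for camera frames.
frames_a = torch.randn(4, 3, 224, 224, device=device)
frames_b = torch.randn(4, 3, 224, 224, device=device)

stream_a = torch.cuda.Stream()
stream_b = torch.cuda.Stream()

# Make each side stream wait for the default stream, where the weights
# and inputs were initialized, before launching work on it.
stream_a.wait_stream(torch.cuda.current_stream())
stream_b.wait_stream(torch.cuda.current_stream())

with torch.no_grad():
    # Work queued on different streams may overlap on the GPU.
    with torch.cuda.stream(stream_a):
        out_a = classifier(frames_a)
    with torch.cuda.stream(stream_b):
        out_b = second_net(frames_b)

# Wait for both streams to finish before reading results on the host.
torch.cuda.synchronize()
print(out_a.shape, out_b.shape)
```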

It is priced at $399 and will ship in March through Nvidia’s distribution channels.

Jetson Xavier NX “fits a needed market space” between the Jetson TX2 series and Jetson AGX Xavier products announced by Nvidia earlier, said Patrick Moorhead, principal analyst at Moor Insights & Strategy, in an email. “It’s ideal for devices with the need for many sensors, even a battery-powered, highest-performance drone. The biggest benefit is a massive amount of machine learning inference performance with a mature toolchain.”

Intel and Qualcomm compete in the same space, Moorhead noted.

A new set of MLPerf benchmark results released Wednesday showed Nvidia delivering superior performance on AI inference workloads for data centers and at the edge. The benchmarks measured performance across various form factors on image classification, object detection and translation tasks.

Nvidia was the only AI platform company to submit results across all five MLPerf Inference 0.5 benchmarks. It also set eight records in July's MLPerf 0.6 benchmarks for AI training.


“Nvidia is the only company that has the production silicon, software, programmability and talent to publish benchmarks across the spectrum of MLPerf and win in almost every category,” said Karl Freund, an analyst at Moor Insights & Strategy, via email.

“They ran the table,” he added. “The only competition was from Habana, which was expected since they are one of the few contenders with working silicon. Nvidia’s programmability uniquely positions them well for future MLPerf releases.”

The Xavier NX is built around a low-power version of the Xavier SoC that was used in the MLPerf Inference 0.5 benchmarks.

Nvidia’s Jetson family now comprises four embedded AI computers. At the low end is the Jetson Nano, priced at $129 for a 45x70 mm device that runs on 5 to 10 watts and delivers 0.5 trillion floating-point operations per second (TFLOPS). Next is the Jetson TX2 Series, starting at $249 for a 50x87 mm device that runs on 7.5 to 15 watts and delivers 1.3 TFLOPS.

Then comes the new Jetson Xavier NX announced Wednesday, priced at $399, which runs on 10 to 15 watts on a 45x70 mm board and delivers up to 21 trillion operations per second (TOPS) and 6 TFLOPS. The top-end Jetson AGX Xavier Series starts at $599 for a 100x87 mm module that runs on 10 to 30 watts and delivers up to 32 TOPS and 11 TFLOPS.
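Taken together, those figures allow a quick back-of-the-envelope comparison of throughput per watt and per dollar across the family. The snippet below simply restates the numbers quoted above, using each module's maximum power mode; actual efficiency will vary with workload and the configured power profile.

```python
# Back-of-the-envelope comparison of the Jetson family using the figures
# quoted above: peak TFLOPS, maximum power mode, and starting price.
# Purely illustrative; real efficiency depends on workload and power mode.
jetson_family = {
    # name: (starting price USD, peak TFLOPS, max watts)
    "Jetson Nano":       (129, 0.5, 10),
    "Jetson TX2":        (249, 1.3, 15),
    "Jetson Xavier NX":  (399, 6.0, 15),
    "Jetson AGX Xavier": (599, 11.0, 30),
}

for name, (price, tflops, watts) in jetson_family.items():
    print(f"{name:18s} {tflops:5.1f} TFLOPS  "
          f"{tflops / watts:4.2f} TFLOPS/W  "
          f"${price / tflops:5.0f} per TFLOPS")
```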