Machine learning training puts Google and Nvidia on top

Chips used by Nvidia and Google for machine learning are getting much faster.
(Ryzhi/Getty Images)

Artificial intelligence (AI) has advanced to the point where leading research universities and dozens of technology companies, including Google and Nvidia, are taking part in standardized benchmark comparisons of their chips.

Results of the latest round of benchmarks, released this week, showed that both Nvidia and Google have cut the compute time needed to train the deep neural networks behind some common AI applications from days down to hours.

“The new results are truly impressive,” Karl Freund, senior analyst for machine learning at Moor Insights & Strategy, wrote in a commentary posted on EE Times. Of the six benchmarks, Nvidia and Google each racked up three top spots. Nvidia reduced its run time by up to 80% using the V100 Tensor Core accelerator in its DGX-2h building block.

The performance benchmarks, called MLPerf Training v0.6, were the second round of such benchmarks sponsored by MLPerf, a group of more than 40 companies and universities researching AI. The second round measured the wall-clock time it takes to train each of six machine learning models to a target quality level, covering tasks that include image classification, object detection, translation and playing MiniGo.
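MLPerf's headline metric is “time to train”: the wall-clock time a system needs to take a reference model from scratch to a fixed quality target. A minimal sketch of that measurement loop is below, in Python; the simulated training and evaluation steps and the 0.759 accuracy target are illustrative placeholders, not MLPerf's actual reference harness.

```python
import random
import time

# Assumed quality target for illustration only; each MLPerf task defines its own.
TARGET_ACCURACY = 0.759

def train_one_epoch(state):
    """Stand-in for a real training epoch: nudges a simulated accuracy upward."""
    state["accuracy"] += random.uniform(0.05, 0.15)
    time.sleep(0.01)  # simulated compute

def evaluate(state):
    """Stand-in for held-out evaluation; returns the current simulated accuracy."""
    return state["accuracy"]

def time_to_train(max_epochs=100):
    """Measure the core MLPerf metric: wall-clock time until the target is first hit."""
    state = {"accuracy": 0.0}
    start = time.perf_counter()
    for epoch in range(1, max_epochs + 1):
        train_one_epoch(state)
        if evaluate(state) >= TARGET_ACCURACY:
            elapsed = time.perf_counter() - start
            print(f"target reached after {epoch} epochs in {elapsed:.3f}s")
            return elapsed
    raise RuntimeError("quality target never reached; the run would not score")

if __name__ == "__main__":
    time_to_train()
```

Because the clock stops the first time the target is reached, submissions win by converging in fewer, faster epochs, which is exactly what the pod-scale systems described below are built to do.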

In a release, MLPerf said the submissions “showed substantial technological progress over v0.5.” Overall, the v0.6 round received 63 entries from five groups.

While Nvidia and Google dominated this round, Intel is expected to make a showing in later benchmarks with its Nervana NNP-T, due late in 2019. Google offers AI supercomputing as a service to AI developers through its Tensor Processing Units (TPUs), first introduced in 2016.

Nvidia touted its achievement in a press release: “Our AI platform now slashes through models that once took a whole workday to train in less than two minutes.”

The company said its DGX SuperPOD, along with Mellanox InfiniBand and Nvidia AI software, was able to train an image recognition model in just 80 seconds, “less time than it takes to get a cup of coffee.” In spring 2017, the same workload took eight hours on the DGX-1 system; at 28,800 seconds versus 80, that is roughly a 360-fold speedup.

While such results are impressive, Freund said he worries about the cost of AI hardware. The DGX-2h SuperPOD Nvidia used for the training runs carries a “mind-boggling” price tag of about $38 million, he said.

