Machine learning training puts Google and Nvidia on top

Chips used by Nvidia and Google for machine learning are getting much faster. (Ryzhi/Getty Images)

Artificial intelligence (AI) has advanced to the point where leading research universities and dozens of technology companies, including Google and Nvidia, are taking part in head-to-head comparisons of their chips.

Results of the latest round of benchmarks, released this week, showed that both Nvidia and Google can cut the compute time needed to train deep neural networks for some common AI applications from days to hours.

“The new results are truly impressive,” Karl Freund, senior analyst for machine learning at Moor Insights & Strategy, wrote in a commentary posted on EE Times. Of the six benchmarks, Nvidia and Google each racked up three top spots. Nvidia reduced its run time by up to 80% using the V100 Tensor Core accelerator in the DGX-2h building block.


The performance benchmarks, called MLPerf Training v0.6, were the second round of such benchmarks sponsored by MLPerf, a group of more than 40 companies and universities researching AI. The second round measured the time it takes to train each of six machine learning models for tasks including image classification, object detection, translation and playing MiniGo.
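MLPerf's headline metric is "time to train": the wall-clock time a system needs to train a model to a fixed quality target. The toy sketch below illustrates that measurement pattern with a logistic-regression model on synthetic data; it is purely illustrative and is not one of the actual MLPerf benchmark workloads, and the 95% accuracy target here is an arbitrary stand-in for MLPerf's per-task quality thresholds.

```python
# Illustrative sketch of MLPerf's "time to train" metric: run training
# until the model reaches a fixed quality target, and report the elapsed
# wall-clock time. Toy logistic regression on synthetic data.
import time
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, linearly separable data (stand-in for a real dataset).
X = rng.normal(size=(1000, 10))
true_w = rng.normal(size=10)
y = (X @ true_w > 0).astype(float)

w = np.zeros(10)
lr = 0.1
target_accuracy = 0.95  # quality threshold, analogous to MLPerf's targets

start = time.perf_counter()
for epoch in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # sigmoid predictions
    w -= lr * X.T @ (p - y) / len(y)      # gradient-descent step
    accuracy = ((p > 0.5) == (y > 0.5)).mean()
    if accuracy >= target_accuracy:
        break                             # quality target reached
time_to_train = time.perf_counter() - start
print(f"reached {accuracy:.2%} in {epoch + 1} epochs, {time_to_train:.4f}s")
```

Because the clock stops only when the quality target is met, vendors can improve their score through faster hardware, better software stacks, or larger-scale parallelism, which is what the Nvidia and Google submissions demonstrate.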

In a release, MLPerf said the submissions “showed substantial technological progress over v0.5.” Overall, the v0.6 round received 63 entries from five groups.

While Nvidia and Google dominated, Intel is expected to make a showing in later rounds with its Nervana NNP-T chip, due late in 2019. Google offers AI supercomputing as a cloud service to developers using its Tensor Processing Units, first introduced in 2016.

Nvidia touted its achievement in a press release: “Our AI platform now slashes through models that once took a whole workday to train in less than two minutes.”

The company said its DGX SuperPOD, along with Mellanox InfiniBand and Nvidia AI software, was able to train an image recognition model in just 80 seconds, “less time than it takes to get a cup of coffee.” In spring 2017, the same workload took eight hours on the DGX-1 system.

While such results are impressive, Freund said he worries about the cost of AI hardware. The DGX2h SuperPod used by Nvidia in the training routine has a “mind-boggling” price tag of about $38 million, Freund said.

