Machine learning training puts Google and Nvidia on top

Chips used by Nvidia and Google for machine learning are getting much faster.
(Ryzhi/Getty Images)

Artificial intelligence (AI) has advanced to the point where leading research universities and dozens of technology companies, including Google and Nvidia, are taking part in head-to-head comparisons of their chips.

Results of the latest benchmark round, released this week, showed that both Nvidia and Google can cut the compute time needed to train deep neural networks for some common AI applications from days to hours.

“The new results are truly impressive,” Karl Freund, senior analyst for machine learning at Moor Insights & Strategy, wrote in a commentary posted on EE Times. Of the six benchmarks, Nvidia and Google each took three top spots. Nvidia reduced its run time by up to 80% using the V100 TensorCore accelerator in its DGX-2H building block.


The performance benchmarks, called MLPerf Training v0.6, were the second round of such benchmarks sponsored by MLPerf, a group of more than 40 companies and universities researching AI. The second round measured the time it takes to train each of six machine learning models for tasks including image classification, object detection, translation and playing MiniGo.

In a release, MLPerf said the submissions “showed substantial technological progress over v0.5.” Overall, the v0.6 round received 63 entries from five groups.

While Nvidia and Google dominated this round, Intel is expected to make a showing in later benchmarks with its Nervana NNP-T chip late in 2019. Google offers AI supercomputing as a service to AI developers using its Tensor Processing Units, first introduced in 2016.

Nvidia touted its achievement in a press release: “Our AI platform now slashes through models that once took a whole workday to train in less than two minutes.”

The company said its DGX SuperPOD, along with Mellanox InfiniBand and Nvidia AI software, was able to train an image recognition model in just 80 seconds, “less time than it takes to get a cup of coffee.” In spring 2017, the same workload took eight hours on the DGX-1 system.

While such results are impressive, Freund said he worries about the cost of AI hardware. The DGX-2H SuperPOD used by Nvidia in the training runs carries a “mind-boggling” price tag of about $38 million, Freund said.

