Nvidia on Wednesday again touted the AI inference performance of its chips in data center and edge computing.
The GPU maker won all the tests in the latest round of MLPerf benchmarks. It noted that eight A100 GPUs in a single DGX A100 system can deliver the same compute performance as nearly 1,000 dual-socket CPU servers on some AI applications.
Nvidia GPUs are used in systems from a wide range of server providers, including Cisco, Dell EMC, Fujitsu, and Lenovo. In a blog post, Paresh Kharya, senior director of product management and marketing at Nvidia, noted that MLPerf benchmarks are relied upon by Arm, Facebook, Google, Intel, Lenovo and Microsoft.
In the blog post, Nvidia said AI breakthroughs are having a profound impact on natural language processing, medical imaging and recommendation systems. The company’s GPUs are being used in automotive, robotics, retail, manufacturing and financial services by companies such as American Express, BMW, Capital One, Domino’s, Ford, Kroger, and Toyota.
Separately on Wednesday, Synopsys announced a collaboration with IBM Research’s AI Hardware Center to advance AI compute performance by 1,000 times over the coming decade, roughly equivalent to doubling AI compute performance every year.
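The 1,000-times-in-a-decade target can be translated into an implied compound annual growth rate with a one-line calculation (a back-of-the-envelope sketch for illustration, not a figure published by either company):

```python
# Back-of-the-envelope check: what annual growth factor compounds to
# 1,000x over 10 years? (Illustrative arithmetic only.)
target_improvement = 1000   # stated 10-year goal
years = 10

annual_rate = target_improvement ** (1 / years)
print(f"Implied annual growth factor: {annual_rate:.3f}x")
```

The result is just under 2.0x per year; by comparison, a strict annual doubling would yield 1,024x after ten years.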
Nvidia’s promotion of its GPUs for AI inference and other industry efforts to improve AI compute performance stand in stark contrast to a recent finding that just 11% of companies say they have seen a significant financial return on investment from AI deployments. The finding was based on a survey of 3,000 managers globally and interviews conducted by Boston Consulting Group in partnership with MIT Sloan Management Review.
“Compute performance is important, but not as important as how you’ve trained your AI program and how well you’ve defined the AI parameters,” said Jack Gold, an independent analyst at J. Gold Associates, in an email to Fierce Electronics.
When it comes to AI, Gold said, “what matters is how good your algorithms are, and more importantly, how good your learning data is…AI benchmarks are troublesome in that there are many and they may not apply that closely to what your AI process is actually doing. I take all benchmarks with a large grain of salt.”