Nvidia has 80% share of AI processors, Omdia says, except…

Analyst firm Omdia recently ranked Nvidia at the top of the AI processor market with an 80% share in 2020, well ahead of its competitors.

The tabulation puts Nvidia's AI processor revenue at $3.2 billion in 2020, up from $1.8 billion in 2019. Omdia ranked Xilinx second with its FPGA products.

Google finished third with its Tensor Processing Unit, while Intel finished fourth with its Habana AI ASSPs and its FPGAs for AI cloud and data center servers. AMD ranked fifth with AI ASSPs for cloud and data center.

The report is notable for leaving Intel Xeon CPUs out of the Omdia tabulation even though Xeons are used extensively for AI acceleration in cloud and data center operations, as Omdia acknowledges. Xeon does not meet Omdia's definition of an AI processor, which covers "only those chips that integrate distinct subsystems dedicated to AI processing," the firm said.

As far back as 2019, Intel began a heavy push for Xeon to handle a variety of AI workloads alongside other compute tasks, and it has seen steady progress. In April, Intel launched its third-generation Ice Lake Xeon Scalable processor with what the company called built-in AI.

At the time, Intel said it had already shipped 200,000 of the 10nm processors in the first quarter and claimed 1.5 times higher performance across 20 AI workloads compared with the AMD EPYC 7763, and 1.3 times higher performance than the Nvidia A100 GPU. Cisco announced in April that it was using the third-gen Xeon in three new UCS servers.

RELATED: Intel launches Xeon Ice Lake processor with built-in AI

Jack Gold, an analyst at J. Gold Associates, said it is hard to accurately gauge Intel's revenue from Xeon chips used for AI jobs because of how Intel reports earnings. But he estimated that at least half of the Data Center Group's most recent second-quarter revenue of $5.6 billion came from Xeon, or more than $2.8 billion. (Xeon is also used in 5G implementations, which are not part of the Data Center Group.) Based on Gold's estimate, Xeon revenue for data centers could be as much as three to four times Nvidia's 2020 AI processor revenue, although not all of those Xeon CPUs would be used for AI work.

Omdia's definition of AI processors includes only GPU-derived AI application-specific standard products (ASSPs), proprietary-core AI ASSPs, AI application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs).

“Despite the onslaught of new competitors and new types of chips, Nvidia’s GPU-based devices have remained the default choice for cloud hyperscalers and on-premises data centers, partly because of their familiarity to users,” said Omdia principal analyst Jonathan Cassell. GPU-based chips were the first type of AI processor widely used for AI acceleration.

Cassell told Fierce Electronics that CPUs are the most widely used chips for AI acceleration in the cloud and data center, even though Omdia doesn't classify them as AI processors. If they were counted as AI processors, Intel would have been the leading AI supplier in 2020, beating Nvidia, he added.

CPUs are mainly used for AI inference work in cloud and data center environments, as opposed to AI training. In an earlier report from 2020, Omdia calculated that CPUs accounted for the majority of the market in 2019 and were projected to lead the market through 2025.

The market for AI processors is growing rapidly, with global revenue for cloud and data center AI processors reaching $4 billion in 2020. Omdia expects that revenue, under its definition, to grow roughly ninefold to $37 billion by 2026.