Before Nvidia reported fiscal second quarter 2024 earnings this week, there was no shortage of observer commentary suggesting the company was due for a down quarter. Instead, Nvidia delivered record revenue and massive earnings growth driven by still-booming demand for AI chips and software.
So, naturally, the biggest question now is: How long can this continue? Having blown past expectations and quieted short-term concerns, Nvidia itself issued an outlook for the fiscal third quarter of 2024 that included a projection for $16 billion in revenue.
Lucas Keh, semiconductors analyst at global research firm Third Bridge, said there is reason to believe the run can continue. “Our experts see the AI boom continuing with Nvidia taking another leg up,” he stated in an email to Fierce Electronics. “Nvidia today accounts for more than 70% of A.I. chip sales, and holds an even bigger position in training generative A.I. models with 90%.”
And the data center migration to accelerated computing solutions to support AI model training and generative AI application development is still in its infancy. Clearly, Nvidia sees no slowdown ahead, but here are five challenges the company may confront as it tries to keep the pedal to the metal:
1- Hardware competition: At the core of Nvidia’s strength is a massive data center migration from CPU-centric computing to GPU-driven accelerated computing. But the market for AI chips is getting more crowded and complicated: AMD and Intel are focusing more on this area, as are Google, Amazon, and a growing throng of companies dedicated to AI acceleration.
Jack Gold, president and principal analyst at J. Gold Associates, said of Nvidia, “As they continue to excel in that space, they are also starting to see a lot more competition than they ever have.”
Some of that competition is coming from a revived Intel and a determined AMD. “AMD has stated they will focus on AI going forward,” Gold added. “Can they do to Nvidia what they did to Intel? That is, offer really compelling solutions at price points below the competition. It won’t be overnight, but Nvidia needs to be looking over their shoulder at them (paranoia is a requirement in the tech marketplace). Also, the hyperscalers were doing a lot of buying of Nvidia chips to build up capabilities. But as AWS, MSFT, GCP [Google Cloud Platform], etc., continue to design and deploy their own AI accelerator chips, will that put a dent in the Nvidia numbers? And there are a lot of AI chip startups – none of which yet have much impact, but they could in the next couple of years.”
Leonard Lee, executive analyst at neXt Curve, agreed that competition will be an issue for Nvidia, at least eventually. He told Fierce Electronics via email, “One of the big benefits of GenAI is that it is driving intense innovation in AI computing. For example, Cerebras' wafer-scale AI processors and their Condor Galaxy 1 AI supercomputer has the potential to rewrite the script on GenAI tech. In the meantime, Nvidia will enjoy market share and dominance even with AMD coming to market with MI300.”
2- Software competition: One of Nvidia’s not-so-secret weapons amid the AI boom has been its software and programming language products, proprietary capabilities that have helped the company build an “AI moat” around itself, to borrow a term commonly used to describe Nvidia’s AI dominance.
But, as in the hardware space, more companies will continue to build their own AI software, and unified approaches are emerging from the likes of Modular and others.
While Nvidia has been dismissive of Intel, Gold noted, “There is a lot going on at Intel that will capture increased market share over the next 1-2 years, as they offer credible AI capabilities to their customers, and [software] assets to back it up.”
3- China questions: Inventory dynamics in China, a huge market for Nvidia, remain uncertain, but the bigger issue could be the potential for the U.S. government to restrict sales of GPUs and other products into China.
Colette Kress, executive vice president and CFO of Nvidia, on this week’s earnings call acknowledged such a policy as a possible long-term threat for Nvidia and others, stating, “We believe the current regulation is achieving the intended results. Given the strength of demand for our products worldwide, we do not anticipate that additional export restrictions on our data center GPUs, if adopted, would have an immediate material impact to our financial results. However, over the long term, restrictions prohibiting the sale of our data center GPUs to China, if implemented, will result in a permanent loss of an opportunity for the U.S.”
Lee was not so certain this will be an imminent problem for Nvidia. “Regarding U.S. restrictions on China, I don’t see [them] having huge near-term implications on Nvidia’s GenAI business given what appears to be a healthy domestic pipeline of demand.”
4- The evolution from AI training to AI inference: Much of Nvidia’s AI success has been built on the ability of its products to support training of very large AI models, a necessary early phase as organizations of all types experiment with AI and start to adapt the technology to address their own particular needs. The market in the coming years will transition from AI training to AI inference, that is, well-trained AI taking action and making decisions to support business and IT operations. Gold said, "The AI market is diversifying, with the majority of the market today being on large AI model training systems in cloud/data centers, but in the next few years the majority of the AI market opportunity will shift to inference on edge-based systems. Not clear Nvidia has a good answer to that space."
5- Uncertainty around the still-emerging generative AI economy: The AI market is still in its early phases, and generative AI is an even more recent phenomenon. While many companies, especially hyperscalers, have been rushing to invest in AI training and development infrastructure, it remains to be seen how quickly this market will continue to mature.
This is particularly the case with monetization of the technology, Lee said. “Regarding Nvidia’s outlook, much of the massive revenue bump has been an outcome of what I characterize as speculative POC (proof of concept) buying primarily by hyperscalers, most notably Microsoft and Google, and by what appear to be a fast-growing number of VC-backed GenAI service provider ventures,” he said, adding, “At the moment, there is a disturbing absence of substantial and sustainable monetization cases to justify the massive spend in GenAI infrastructure and platforms. The question of how Gen AI applications and services will be monetized has largely gone unanswered across the board.”
Lee said that neXt Curve’s own research revealed “a paucity of enterprise GenAI initiatives that have come out of POC. Most POC efforts that we have learned of in end user interviews have highlighted and socialized the limitations and issues with GenAI technology and applications. This is not surprising given the relative beta state of the technology.”
He added, “All this being said, my sense is that the aperture of end market value realized in the near term for GenAI is very narrow and I have not seen much to suggest that that aperture will widen significantly in the mid term and sufficiently in the long term. Nvidia and its customers will likely find themselves under a great deal of pressure to make ROI happen in the ‘GenAI economy’. The clock is ticking, and rather quickly.”