Update: ChatGPT runs 10K Nvidia training GPUs with potential for thousands more

*This article has been updated.

Artificial intelligence is already creating business opportunities for GPU chipmakers, chiefly Nvidia, though AMD, Intel and Google are also releasing new hardware and software to support deep learning workloads.

The deep learning field will inevitably get even bigger and more profitable for such players, analysts say, largely due to chatbots and the influence they will have in the enterprise in coming years. Nvidia is viewed as especially well positioned, which could help it overcome recent slowdowns in the gaming market.

The most popular deep learning workload of late is OpenAI’s ChatGPT, still in beta, which was trained on Nvidia GPUs. According to UBS analyst Timothy Arcuri, ChatGPT used 10,000 Nvidia GPUs to train the model.

“But the system is now experiencing outages following an explosion in usage and numerous users concurrently inferencing the model, suggesting that this is clearly not enough capacity,” Arcuri wrote in a Jan. 16 note to investors.

The inferencing function is no less compute-intensive than LLM (large language model) training, which implies that “OpenAI will need to scale quite rapidly,” Arcuri said. Perhaps OpenAI’s release of GPT-4 in mid-2023 will bring better performance, Arcuri added.

UBS analysts summarized what GPT-4 is supposed to do, noting it won’t be much bigger than GPT-3, which has 175 billion parameters, but will use more compute and storage “with the intention of outperforming GPT-3 in terms of performance and functionality.” The next version will remain a text model rather than a multimodal one, and its training set will be updated to supplement the current data, which ends in 2021.

Like other analysts, UBS noted that ChatGPT needs to improve its accuracy and get better at discovering its own misconceptions. “Though, if GPT-4 is built on the same infrastructure it is likely to still have a number of instances of inaccurate and imaginary results,” UBS said.

Despite such concerns, Arcuri, like others, sees ChatGPT as a “game changer for AI adoption in the enterprise,” which benefits GPU maker Nvidia especially. “ChatGPT represents the first major foray into generative AI, setting the stage for AI to gain widespread adoption across countless end-user applications, including within the enterprise, and offering a glimpse into the opportunity ahead for Nvidia.”

*In a release-notes blog post from Feb. 9, OpenAI announced it was beginning to roll out the ability for Plus users to choose between Default and Turbo versions of ChatGPT, with Turbo optimized for speed. “Depending on feedback, we may roll out this [choice] feature (or just Turbo) to all users soon,” the release note said. On the surface, the rollout does not appear to do anything to reduce the slowdowns caused by concurrent users, but analysts are watching to see how long ChatGPT’s early popularity will continue.

Harsh Chauhan at The Motley Fool described the coming financial opportunity for Nvidia as centered on its data center business. Nvidia controls more than 90% of the data center GPU market, with AMD holding most of the rest.

Nvidia’s data center revenue was up 30% in the fiscal third quarter. “Demand for data center GPUs is reportedly growing at an annual pace of 23.5%, a pace that this market is expected to sustain until the end of the decade,” Chauhan noted. In addition, Nvidia is entering the Arm-based server processor market, which is expected to triple in size in the next decade. As such, Nvidia’s data center business could offset its weakness in the gaming space.
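To put that projection in context, here is a quick back-of-the-envelope check (our arithmetic, not Chauhan’s) of what a sustained 23.5% annual growth rate compounds to by the end of the decade:

```python
# Back-of-the-envelope: what sustained 23.5% annual growth implies.
# Assumes the pace holds for the eight years from 2022 through 2030.
growth_rate = 1.235            # 23.5% per year
years = 8                      # 2022 -> 2030
multiple = growth_rate ** years
print(f"Market grows to {multiple:.1f}x its current size")  # ~5.4x
```

In other words, a market compounding at that pace more than quintuples over eight years.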

 

[Image: Dall-e depiction of ChatGPT as an elegant outdoor sculpture (Dall-e)]

How ChatGPT describes itself

Fierce Electronics asked ChatGPT to describe its compute infrastructure and immediately received the following responses to four different questions:

“ChatGPT is a machine learning model that runs on a distributed compute infrastructure, which typically consists of a cluster of powerful servers with multiple GPUs (graphics processing units) for parallel processing.

“The specific hardware and vendors used to train and run ChatGPT may vary depending on the implementation, but it is commonly trained on NVIDIA GPUs, as they are widely used for deep learning due to their high performance and CUDA support. Additionally, some cloud computing platforms such as AWS, GCP, and Azure provide the infrastructure to train and run large deep learning models like ChatGPT.”
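ChatGPT’s mention of “CUDA support” refers to Nvidia’s parallel computing platform, which frameworks such as PyTorch and TensorFlow build on. As a rough sketch only (this is not OpenAI’s code, and the model below is a tiny stand-in for a real LLM), a PyTorch job on one multi-GPU server might detect and use its GPUs like this:

```python
# Illustrative sketch only -- not OpenAI's training code.
# Shows how a typical PyTorch job detects CUDA GPUs and spreads
# a model across every GPU visible on one server.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"CUDA available: {torch.cuda.is_available()}, "
      f"GPU count: {torch.cuda.device_count()}")

# A toy stand-in model; a real LLM is a transformer with billions of weights.
model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))

# DataParallel replicates the model across this server's GPUs; cluster-scale
# jobs like the one Arcuri describes use DistributedDataParallel across
# thousands of GPUs on many servers instead.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to(device)

# One toy training step to show the flow: forward pass, loss, backward, update.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(32, 512, device=device)  # fake input batch
loss = model(x).pow(2).mean()            # fake objective
loss.backward()
optimizer.step()
```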

Asked what compute supports inferencing, ChatGPT said the following:

“Inference is the process of using a trained machine learning model to make predictions on new data. In the case of ChatGPT, inference is used to generate human-like text based on the input provided.

“The compute infrastructure used for inference with ChatGPT can vary depending on the implementation. It could run on a single powerful GPU or on multiple GPUs for faster processing. It could also be run on servers with CPU-only. In some cases, it might also be run on edge devices such as mobile phones or embedded systems.

"The specific hardware and vendors used for inference may depend on the requirements of the application, such as the desired inference speed and power consumption. As with training, some cloud computing platforms such as AWS, GCP, and Azure provide the infrastructure to run large deep learning models like ChatGPT for inference.

“It's worth mentioning that OpenAI has released the model in different sizes and architectures (small, medium and large) to adapt to different needs and constraints of the end user.”
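ChatGPT’s own weights are not publicly available, so any hands-on illustration has to substitute an open model. The sketch below uses GPT-2, a small open predecessor available through the Hugging Face transformers library, to show what the inference step the chatbot describes looks like, whether on a single GPU or in the CPU-only case:

```python
# Illustrative sketch only -- ChatGPT's weights are not public, so GPT-2,
# a small open model, stands in to show GPU-or-CPU inference.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"  # one GPU, or CPU-only
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

prompt = "Deep learning hardware in the enterprise will"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Inference: generate a continuation from the trained model; no gradients needed.
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```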

Asked whether ChatGPT will use GPUs from vendors other than Nvidia, the chatbot responded as follows:

“It is possible that ChatGPT or other deep learning models could be trained or run on GPUs from other vendors in the future. However, currently, NVIDIA GPUs are widely used in the deep learning community due to their high performance and CUDA support. CUDA is a parallel computing platform and programming model developed by NVIDIA that allows for efficient computation on NVIDIA GPUs. Many deep learning libraries and frameworks, such as TensorFlow and PyTorch, have built-in support for CUDA and are optimized for NVIDIA GPUs.

“Additionally, other vendors such as AMD, Intel and Google have recently released new hardware and software to support deep learning workloads, and it is likely that more and more models will be trained and run on those platforms as well.

“It's also worth mentioning that OpenAI, the organization that developed ChatGPT, is focused on developing new AI techniques and models, so they are always experimenting with new hardware and software to improve their models performance.”
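The portability the chatbot alludes to comes largely from the framework layer: code written against PyTorch or TensorFlow targets an abstract device, so moving to non-Nvidia hardware is mostly a backend swap (PyTorch built for AMD’s ROCm, for instance, exposes AMD GPUs through the same torch.cuda interface). A minimal sketch of that device-agnostic pattern:

```python
# Illustrative sketch: frameworks abstract the accelerator, so the same model
# code can run on different vendors' hardware. All backend checks below are
# real PyTorch APIs.
import torch

if torch.cuda.is_available():             # Nvidia GPUs (or AMD GPUs via ROCm)
    device = torch.device("cuda")
elif torch.backends.mps.is_available():   # Apple-silicon GPUs via Metal
    device = torch.device("mps")
else:                                     # plain CPU fallback
    device = torch.device("cpu")

# The compute itself is written once, independent of the backend chosen above.
x = torch.randn(4, 4, device=device)
print(device, (x @ x.T).sum().item())
```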

In a demonstration of ChatGPT’s ability to be politically astute, the chatbot would not say, when asked, how many Nvidia GPUs have been used for its training, giving the following response:

“It is possible to train ChatGPT on a large cluster of GPUs, such as 10,000 Nvidia GPUs. However, the exact number of GPUs used for training would depend on the specific implementation and resources available. Additionally, the number of GPUs used in training may also depend on the size of the model being trained and the amount of training data available.”

RELATED: ChatGPT was scary good in these simple FE tests

RELATED: ChatGPT has gone viral. Here are four tech companies that stand to benefit—or not

[Image: Asked to create ChatGPT as an android form in space, this is what Dall-e created (Dall-e)]