Nvidia is not the only semiconductor giant accelerating its pace of new AI chip introductions. At Computex this week, AMD CEO and President Lisa Su announced that AMD is also putting the pedal to the metal, lining up new chip unveilings on a yearly basis rather than the traditional schedule of new products roughly every two years.
Speaking to a large crowd during her Computex keynote, Su said that AI demands an “accelerated roadmap” of new chips and upgrades. “It's just so clear that the demand for AI is just accelerating so much going forward,” Su said. “We're really just at the beginning of a decade-long mega-cycle for AI and to address this incredible demand we have an exciting new roadmap.”
Su continued, “We launched MI300X last year with leadership inference performance, memory size, and compute capabilities, and we have now expanded our roadmap so it's on an annual cadence. That means a new product family every year. Later this year, we plan to launch MI325X with faster performance and more memory [288GB of HBM3E memory and 6 terabytes per second of memory bandwidth], followed by our MI350 series in 2025.”
The MI350 series will be built on a new CDNA 4 architecture, which is expected to bring a 35x increase in AI inference performance over the current MI300X and its CDNA 3 architecture. Both the MI325X and the MI350 series will use the same industry-standard Universal Baseboard server design as the MI300 series.
“What that means is that our customers can very quickly adopt this new technology,” Su said, adding that the 2025 offerings will be followed a year later by another new architecture, CDNA Next, which will be used in the planned MI400 series.
The blistering yearly pace of new AI chip introductions may be the new normal in the semiconductor industry if Su and Nvidia chief Jensen Huang are to be believed.
While AMD highlighted the accelerated pace of its data center AI products, the company also unveiled a wide range of new client computing products at Computex, starting with the AMD Ryzen AI 300 Series processors. Su said the series will bring a 50-TOPS neural processing unit to laptop PCs as AI processing migrates from the data center to end-user devices. According to Su and Pavan Davuluri, corporate vice president of Windows + Devices at Microsoft, the Ryzen AI 300 Series also delivers greater performance for Microsoft Copilot+ PCs across a broad range of applications compared with the latest x86 and Arm CPUs.
Davuluri, who joined Su on-stage at Computex, said, “I truly believe we're at an inflection point here where AI is making computing radically more intelligent and personal, and we've collaborated with AMD since day one to make that possible.”
He added, “With the Copilot+ PCs, we want to make it possible to deliver these next gen AI experiences by using on-device capabilities to process AI and to do that in concert with the cloud. On-device AI really means faster response times, better privacy, lower costs. It means running models that have billions of parameters from PC hardware. Compared to traditional PCs even from just a few years ago, we're talking about 20 times the performance and up to 100 times the efficiency for AI workloads.”
AMD also announced its new Ryzen 9000 Series desktop processors, powered by the “Zen 5” architecture, which delivers a roughly 16% generational increase in instructions per clock (IPC) and higher performance across a broad range of gaming and productivity applications. Su claimed the flagship Ryzen 9 9950X is the world's fastest consumer desktop processor.
The company also unveiled ROCm 6.1 for AMD Radeon GPUs, intended to make AI development and deployment on Radeon desktop GPUs more compatible, accessible, and scalable, a move some industry observers had been expecting as AMD seeks to sharpen its GPU pitch. AMD also announced the AMD Radeon PRO W7900 Dual Slot workstation graphics card, which offers scalable AI performance and is optimized for high-performance platforms supporting multiple GPUs.
Su also previewed the upcoming fifth-generation AMD EPYC family of processors, which will be available in the second half of 2024 and will support up to 192 cores and 384 threads on the “Zen 5” architecture. She also described how a range of customers, including Canon, Subaru, Illumina, and Hitachi Energy, are leveraging AMD's AI products.