Like other big chipmakers, AMD wants to seize the huge opportunity afforded by artificial intelligence, which requires chips that accelerate both training and inference, primarily in enterprise and cloud-provider data centers but also in the massive systems used for research.
AMD has semiconductors ready to take on such work, including the MI300 coming later in 2023. But some financial analysts and investors don’t yet see how or when these devices, impressive as they are, will translate into future profits.
“We believe that AI is a huge driver of compute growth,” CEO Lisa Su told a skeptical Goldman Sachs analyst during the 4Q earnings call on Tuesday. “And given our portfolio, it should be a driver of our growth as well.”
She enumerated the coming GPUs for training and a recent Ryzen AI launch for PCs, adding, “You can expect additional AI acceleration coming in our server portfolio as well. So you’re going to see AI broadly across our road maps.”
Data center GPU ambitions are a “large opportunity,” Su added. “As we go into the second half of the year and the launch of MI300, sort of the first user of MI300 will be the supercomputer El Capitan, but we’re working with some large cloud vendors as well to qualify MI300 in AI workloads. And we should expect that to be more of a meaningful contributor in 2024.”
She concluded her answer by adding: “So lots of focus on just a huge opportunity, lots of investments in software as well as to bring the ecosystem with us.”
Toshiya Hari, the Goldman Sachs analyst, noted that AMD is gaining market share, but in relatively mature markets such as the data center. AMD’s 4Q results showed data center revenue increased 42% year over year to $1.7 billion, led by increased use of EPYC processors by cloud providers. In cloud, sales to North American hyperscalers more than doubled year over year as those customers moved more of their internal workloads and external instances to EPYC, Su said.
EPYC processors now power more than 600 publicly available instances globally following launches of new AMD-based instances from AWS, Microsoft and others in 4Q. Su also said a “number” of new wins came in the quarter with financial services, automotive, technology, energy and aerospace companies. EPYC adoption grew in high performance computing as well, with the number of AMD-powered supercomputers on the latest Top500 list up 38% from a year earlier.
AMD also powers more than 100 of the world’s fastest supercomputers, and the company says fourth-gen EPYC is up to 80% more energy efficient. In enterprises, AMD has 140 fourth-gen EPYC platforms in development with HPE, Dell, Lenovo, Super Micro and others. Xilinx, acquired a year ago, contributed record sales, led by strong demand from financial services customers for Alveo X3 series boards.
Sales of Pensando DPUs ramped “significantly” from the third quarter, Su added. DPUs are expected to become a standard part of cloud and enterprise data centers. Meanwhile, she said data center GPU sales were down significantly from a year ago, with shipments supporting multiple Instinct MI250 supercomputer contracts.
After the MI250, AMD is looking to MI300 accelerators for large-model AI applications in cloud data centers. The next-gen chip will power the two-plus exaflop El Capitan exascale system at Lawrence Livermore National Laboratory. AMD calls it the industry’s first data center chip to combine a Zen 4 CPU, GPU and memory in a single integrated package, delivering eight times the performance and five times better efficiency for AI compared with the MI250.
AMD plans to launch the MI300 in the second half of 2023 and will sample the chip to lead customers later this quarter.
Based on comments Su made at CES 2023 in early January, the MI300 will include 24 Zen 4 CPU cores, while the GPU will use an undisclosed number of CDNA 3 compute units, all combined with 128GB of HBM3 memory. The MI300 comprises nine of TSMC’s 5nm chiplets stacked atop four of TSMC’s 6nm chiplets, for a total of 146 billion transistors.
“I’m very excited to show you MI300 for the very, very first time,” Su said during her CES keynote, while holding it high. “This is a big one, guys.”
Industry experts believe the MI300 will compete with Nvidia’s Grace Hopper Superchip, which pairs an Arm v9-based Grace CPU with a Hopper GPU. Intel plans to follow with its Falcon Shores XPU sometime in 2024.
“Over the next several years, one of our largest growth opportunities is in AI, which is in the early stages of transforming virtually every industry, service and product,” Su said on the earnings call. “We expect AI adoption will accelerate significantly over the coming years and are incredibly excited about leveraging our broad portfolio of CPUs, GPUs and adaptive accelerators in combination with our software expertise to deliver differentiated solutions that can address the full spectrum of AI needs in training and inference across cloud, edge and client.”
RELATED: AMD seesaws with PC revenues down, data center chips up