AMD fixes on $300B total addressable market, with data center efforts leading the charge

AMD is chasing a total addressable market of $300 billion, almost four times what it viewed as its addressable market two years ago, and the company increasingly sees data center products as key to making the most of that opportunity, according to comments by AMD CEO Lisa Su at the firm’s recent analyst day.

“Part of our strategy has been to really strengthen our mix of revenue,” she said. “When you looked at AMD a few years ago, we were a very consumer-centric company. Most of our revenue was in PCs and gaming, which was good revenue, but we wanted to really change the mix. Data center was the place to lean in. We are… mid 20s [percent] in terms of being data center embedded, but we know that we can grow this much more… Our view is that we can be over 50% data center embedded with our roadmap and the product mix we can have.”

Much of that product mix is arriving via acquisitions, such as the recently completed Xilinx acquisition and its purchase of DPU specialist Pensando.

Su said AMD sees significant synergy to come with Xilinx “in AI, and that's a doubling-down increase of our investments in AI. [The synergy also will come from] bringing the product portfolios together with data center communications, automotive and embedded, and [synergy also comes from] really bringing sort of leadership solutions together as we think about what customers want going forward over the next couple of years.”

Talking more about why it is important to increase AMD’s investment in AI, she added, “It's really an explosion of AI. Everybody wants more capability, whether you're talking about training or inference, whether you're talking about data center or cloud or edge or endpoints. The models are getting more complicated, and you want better accuracy, you want more capability, which drives more compute.”

Regarding Pensando, she said, “Pensando is exactly right where we want to be in terms of broadening our data center solutions capability. Our goal is to be the most strategic supplier to the largest data centers in the world. We have a great portfolio already with EPYC and Instinct and with Xilinx assets, but Pensando has real leadership technology on the DPU front, and what we see is that their solutions will now work hand in hand with the rest of our computing technologies, such that we are accelerating key aspects of networking, security, and storage, and we're also offloading from the CPU.”

Su and other AMD executives announced ramped-up product roadmaps on a number of fronts, but to stick with the data center theme, the company has plans for new EPYC processors and more over the next year:

  • 4th Gen AMD EPYC processors powered by “Zen 4” and “Zen 4c” cores.

    • “Genoa” powered by “Zen 4”: On track to launch in Q4 2022 as the highest-performance general-purpose server processor available, with the top-of-stack product delivering greater than 75% faster enterprise Java performance compared to top-of-stack 3rd Gen EPYC processors.

    • “Bergamo” powered by “Zen 4c”: Expected to be the highest-performance server processor for cloud-native computing, offering more than double the container density of 3rd Gen EPYC processors; launch is planned for the first half of 2023.

    • “Genoa-X” powered by “Zen 4”: An optimized version of 4th Gen EPYC processors with AMD 3D V-Cache technology to enable leadership performance in relational database and technical computing workloads.

    • “Siena” powered by “Zen 4”: The first AMD EPYC processor optimized for intelligent edge and communications deployments that require higher compute densities in a cost- and power-optimized platform.

AMD also is planning to roll out new Instinct MI300 accelerated processing units (APUs), which AMD claims will be the world’s first data center APUs and which are expected to deliver a greater than 8x increase in AI training performance compared to the AMD Instinct MI250X accelerator. MI300 accelerators combine AMD CDNA 3 GPU, “Zen 4” CPU, cache memory, and HBM chiplets that are designed to provide higher memory bandwidth and better application latency for AI training and HPC workloads.

See this link for more information on other announcements the company made.