Computex: Intel set to bring Xeon 6 into data center AI battle

At Computex, Intel CEO Pat Gelsinger said the company is increasing its efforts to help customers modernize their data centers for the AI era, and that it will leverage its installed base of data center chips to make a difference.

Gelsinger called the AI era the “most consequential” of his own career, and of the careers of many in his Computex audience.

“AI everywhere will push the boundaries of what's possible in every human experience–what we do in the data center and the cloud, what we do on the edge, what we do on the PC, and everywhere in between,” he said. “And with open standards, security, sustainability, central to all of it… AI is and has been central to the data center, and what Intel is doing has been the key to enabling the cloud data center for decades. We have a stunning 130 million Xeon-powered data centers around the world, and this installed base is a huge advantage and a huge priority for us collectively. Our customers want infrastructure that's based on scalable and flexible platforms to integrate with those existing systems, and run with those decades of software, those exabytes of data that they have in place… Of course they also need more compute performance, greater density, greater energy efficiency, greater server capacity.”

Intel’s key to addressing everything on that spectrum of need will be the new and much-anticipated 144-core Xeon 6 family of CPUs. The family is led by the chip previously code-named Sierra Forest, now launched as the Xeon 6 E-core (the “E” is for “Efficient”), which is available today; the Xeon 6 P-core (the “P” is for, yes, “Performance”) arrives next quarter. The Xeon 6 family is being unveiled barely six months after Intel offered up its fifth-generation Xeon family, but as other executives at Computex have noted, AI demand is driving semiconductor companies to move faster than ever in bringing new products to market.

How efficient is the Xeon 6 E-core? In a briefing before Computex, Justin Hotard, executive vice president and general manager of Intel's Data Center and AI Group, said Intel is touting density and power-efficiency improvements that enable 3-to-1 rack-level consolidation, with a rack-level performance gain of up to 4.2x and a performance-per-watt gain of up to 2.6x compared with second-generation Intel Xeon processors on media-transcode workloads. By using less power and rack space, Xeon 6 processors free up compute capacity and infrastructure for innovative new AI projects, he said.
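
To see what those figures imply taken together, here is a back-of-envelope sketch. The fleet size is hypothetical, and reading the 3-to-1 consolidation and the 4.2x rack-level gain as related figures is our interpretation of the claims, not Intel's own math:

```python
# Back-of-envelope reading of Intel's Xeon 6 E-core consolidation claims.
# NOTE: racks_before is a made-up fleet size; the 3:1, 4.2x, and 2.6x
# figures are Intel's claims as quoted above.
racks_before = 300            # hypothetical fleet of 2nd-gen Xeon racks
consolidation = 3             # claimed 3-to-1 rack consolidation
perf_gain_per_rack = 4.2      # claimed rack-level performance gain
perf_per_watt_gain = 2.6      # claimed performance-per-watt gain

racks_after = racks_before // consolidation          # 100 racks
# If each remaining rack really delivers 4.2x the work of an old one,
# throughput still rises despite the one-third footprint: 4.2 / 3 = 1.4x.
total_throughput_gain = perf_gain_per_rack / consolidation

print(racks_after, round(total_throughput_gain, 2))  # 100 1.4
```

In other words, under this reading a customer could shrink the fleet to a third of its racks and still come out roughly 40% ahead on total throughput.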

Hotard also described the Xeon 6 P-core as “the most powerful Xeon” Intel has ever made, claiming it runs AI inference workloads 3.7 times faster than competing processors.

As Gelsinger indicated, Hotard also said Intel is intent on helping enterprises make sense of generative AI and unlock the value of their data.

“AI is driving a tremendous inflection point to the data center,” he said. “In fact, the reason it's driving such a massive inflection point in data center is because 80% of the world's data actually resides in enterprises. And what we see is that the vast majority, well over 80% of enterprises, are planning to deploy Gen AI in the next few years, and we anticipate they will spend over $140 billion to do so. However, the reality is, today, most enterprises aren't yet ready for Gen AI. [Their existing] data is not formatted to be deployed in Gen AI use cases. The data resides in multiple locations. There's concerns about security and trust, and most enterprises don't have the expertise and need to balance efficiency and the sustainability commitments that they've made before they deploy AI.”

In addition to the Xeon 6 family, Intel also touted its Gaudi 3 AI accelerator chips, for which the company shared performance projections: “an 8,192-accelerator cluster… projected to offer up to 40% faster time-to-train versus the equivalent size Nvidia H100 GPU cluster and up to 15% faster training throughput for a 64-accelerator cluster versus Nvidia H100 on the Llama2-70B model. In addition, Intel Gaudi 3 is projected to offer an average of up to 2x faster inferencing versus Nvidia H100, running popular LLMs such as Llama-70B and Mistral-7B.”

Gaudi 3 production partners include Dell, Hewlett Packard Enterprise, Lenovo, and Supermicro, but Intel also announced it is expanding the partner list to include Asus, Foxconn, Gigabyte, Inventec, Quanta, and Wistron.

The company also is looking to chip away at Nvidia’s AI chip dominance with more attractive pricing on its Gaudi AI accelerators. A kit with eight Intel Gaudi 2 accelerators and a universal baseboard (UBB) is being offered to system providers at $65,000, which Intel estimates is one-third the cost of comparable competitive platforms. A kit with eight Intel Gaudi 3 accelerators and a UBB will list at $125,000, an estimated two-thirds the cost of comparable competitive platforms.
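
Those “one-third” and “two-thirds” fractions imply a price for the unnamed comparable platforms. The arithmetic below is our inference from the article's numbers, not published competitor pricing:

```python
# Kit prices stated in the article.
gaudi2_kit = 65_000    # 8x Gaudi 2 accelerators + universal baseboard (UBB)
gaudi3_kit = 125_000   # 8x Gaudi 3 accelerators + UBB

# "one-third the cost" implies the comparable platform runs ~3x the price.
implied_rival_vs_gaudi2 = gaudi2_kit * 3        # $195,000 (inferred)
# "two-thirds the cost" implies ~1.5x the price.
implied_rival_vs_gaudi3 = gaudi3_kit * 3 // 2   # $187,500 (inferred)
```

Both inferences land in the same rough range, which suggests Intel is benchmarking both kits against similarly priced eight-GPU competitive systems.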

Meanwhile, on the AI PC front, Intel updated its shipping metrics, saying it has shipped 7 million Meteor Lake processors to power AI PCs. Its next-generation Lunar Lake processor will be released next quarter, and its Arrow Lake processors, designed for “scaling AI from mobile to desktop,” will follow soon after.

Lunar Lake architectural details include:

  • A fourth-generation Intel neural processing unit (NPU) with up to 48 tera-operations per second (TOPS) of AI performance. This powerful NPU delivers up to 4x AI compute over the previous generation, enabling corresponding improvements in generative AI.

  • An all-new GPU design, code-named Battlemage, combines two new innovations: Xe2 GPU cores for graphics and Xe Matrix Extension (XMX) arrays for AI. The Xe2 GPU cores improve gaming and graphics performance by 1.5x over the previous generation, while the new XMX arrays enable a second AI accelerator with up to 67 TOPS of performance for extraordinary throughput in AI content creation.

  • Advanced low-power island, a novel compute cluster and Intel innovation that handles background and productivity tasks with extreme efficiency, enabling amazing laptop battery life.
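
Summing the two AI engines listed above gives a rough platform total. Note this is our addition, it ignores any CPU contribution, and Intel's own platform-level TOPS figure may be counted differently:

```python
# Combining the Lunar Lake AI engine figures from the list above.
npu_tops = 48      # 4th-gen NPU, per Intel
gpu_xmx_tops = 67  # Battlemage Xe2 XMX arrays, per Intel

combined_tops = npu_tops + gpu_xmx_tops  # 115 TOPS across NPU + GPU
```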

Intel said Lunar Lake is already set to power more than 80 different AI PC designs from 20 original equipment manufacturers (OEMs).