Intel catches the AI bug at Hot Chips Conference with Nervana chips

Intel's AI chip for training
Intel's new chips for AI applications include the Nervana neural network processor for training, designated NNP-T. (Intel)

The trend toward artificial intelligence has not been lost on Intel Corp., and at the Hot Chips Conference in Palo Alto, California, this week, the semiconductor giant made sure everyone knew it intends to make strong inroads into AI.

At Hot Chips, Intel revealed details of its upcoming high-performance artificial intelligence (AI) accelerators, designated the Intel Nervana neural network processors: the NNP-T for training and the NNP-I for inference. Demand is being driven by the trend toward high-speed computing, robotics, and other compute-intensive applications.

“To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources,” said Naveen Rao, vice president and general manager of Intel’s Artificial Intelligence Products Group, in a statement. “Data centers and the cloud need to have access to performant and scalable general-purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed—from hardware to software to applications.”



According to Intel, Nervana NNP-T is designed from the ground up to train deep learning models at scale. It is built to prioritize two key real-world considerations: training a network as fast as possible and doing it within a given power budget. Intel said the chip will strike a balance among computing, communication and memory, building in features and requirements needed to solve for large models, without the overhead needed to support legacy technology. Intel added that Nervana NNP-T can be tailored to accelerate a wide variety of workloads, encompassing both existing and future needs.

Intel’s inference neural network processor, the Intel Nervana NNP-I, is reportedly designed to accelerate deep learning deployment at scale, introducing specialized leading-edge deep learning acceleration. It leverages Intel’s 10nm process technology with Ice Lake cores to offer high performance per watt across all major data center workloads. Intel added that Nervana NNP-I offers high programmability without compromising performance or power efficiency.
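The split between the NNP-T and NNP-I reflects the two distinct phases of a deep learning workload: training, which repeatedly runs forward and backward passes to update a model's weights, and inference, which runs only forward passes with frozen weights. The following NumPy sketch (not Intel code; the model and data are illustrative) shows why the two phases stress hardware differently:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-feature points labeled by which side of a line they fall on.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)   # model weights
b = 0.0           # model bias
lr = 0.5          # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training phase (the workload a training accelerator like NNP-T targets):
# repeated forward passes plus gradient computation and weight updates,
# which is compute-, memory-, and communication-heavy.
for _ in range(500):
    p = sigmoid(X @ w + b)            # forward pass
    grad_w = X.T @ (p - y) / len(y)   # gradient of the loss w.r.t. weights
    grad_b = float(np.mean(p - y))    # gradient w.r.t. bias
    w -= lr * grad_w
    b -= lr * grad_b

# Inference phase (the workload an inference accelerator like NNP-I targets):
# a single forward pass over new data with frozen weights -- no gradients,
# no updates, so performance per watt dominates the design.
X_new = rng.normal(size=(5, 2))
preds = (sigmoid(X_new @ w + b) > 0.5).astype(int)
print(preds)
```

Training dominates the cost of building a model but happens once per model version; inference runs continuously in production, which is why the two phases are increasingly served by separate silicon.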


Aside from AI, Intel discussed details of its Lakefield hybrid chip packaging technology. Lakefield combines 3D stacking and IA hybrid computing architecture for a new class of mobile devices, leveraging Intel’s latest 10nm process and Foveros advanced packaging technology. According to Intel, Lakefield reduces standby power, core area and package height over previous generations of technology.

Intel also demonstrated TeraPHY, an in-package optical I/O chiplet for high-bandwidth, low-power communication: Intel and Ayar Labs demonstrated integrated monolithic in-package optics (MIPO) with a high-performance system-on-chip (SoC). The Ayar Labs TeraPHY optical I/O chiplet is co-packaged with the Intel Stratix 10 FPGA using Intel Embedded Multi-die Interconnect Bridge (EMIB) technology, offering high-bandwidth, low-power data communication from the chip package with deterministic latency for distances up to 2 km. Intel said this collaboration will enable new approaches to architecting computing systems for the next phase of Moore’s Law by removing the traditional performance, power and cost bottlenecks in moving data.
