Intel pulled back the curtain on Monday with 15 technical papers describing chip research aimed at shifting computing toward a focus on data as it moves across the core, edge and endpoints of systems.
Intel described the transition as a move from computing centered on hardware and programs to computing centered on data and information. Such a change requires greater energy efficiency and more powerful processing closer to the devices where data is generated, such as image sensors, according to Vivek De, an Intel fellow and director of Circuit Technology Research for Intel Labs.
The research points to more efficient computation techniques for applications such as robotics, augmented reality, machine vision and video analytics. At such endpoints and other locations along the data path, there are often limits on bandwidth, memory and power that need to be overcome, De said.
Some of the research could eventually be applied directly to the production of new chips, but Intel didn’t share a timeline. “Our research influences what capabilities we choose to incorporate into future products over time,” a spokeswoman said. The presentation of the papers was covered in an Intel blog and announced at the 2020 Symposia on VLSI Technology and Circuits.
In one of the 15 technical papers, 11 Intel researchers demonstrated an all-digital binary neural network (BNN) accelerator chip built in a 10-nm FinFET (fin field-effect transistor) CMOS (complementary metal-oxide-semiconductor) process. Traditionally, BNN accelerators in power-constrained edge devices have been analog rather than digital, but analog BNNs are less accurate at making predictions and less tolerant of process variations and noise than digital accelerators.
In the paper, Intel said its digital design delivers energy efficiency approaching that of analog in-memory computing while also scaling better to advanced process nodes. It reported an energy efficiency of 617 trillion operations per second (TOPS) per watt by using Compute Near Memory (CNM), inner product compute and Near-Threshold Voltage operation.
“The digital BNN design approaches the energy efficiency of analog in-memory techniques while also ensuring deterministic, scalable and precise operation,” the authors wrote. CNM designs increase energy efficiency by interleaving memory sub-arrays and Multiply-Accumulate units, the authors said.
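To make the inner-product idea concrete: in a BNN, weights and activations take only the values -1 and +1, so a multiply-accumulate reduces to a bitwise XNOR followed by a population count. The sketch below is a hypothetical illustration of that arithmetic trick, not Intel's accelerator design:

```python
# Illustrative model of the binary inner product at the heart of a BNN.
# Elements of each {-1, +1} vector are packed into an integer, LSB first,
# with bit 1 encoding +1 and bit 0 encoding -1. A multiply-accumulate
# then becomes XNOR (1 where signs agree) plus a popcount.

def binary_inner_product(a_bits: int, w_bits: int, n: int) -> int:
    """Inner product of two n-element {-1, +1} vectors packed as bit masks."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # 1 where the signs agree
    matches = bin(xnor).count("1")              # popcount of agreements
    return 2 * matches - n                      # agreements minus disagreements

# Two 4-element vectors that agree in two positions and disagree in two:
print(binary_inner_product(0b1011, 0b1101, 4))  # prints 0
# Identical vectors give the maximum dot product, +n:
print(binary_inner_product(0b1111, 0b1111, 4))  # prints 4
```

Because the whole operation is bitwise logic and a counter, it maps naturally onto small digital circuits placed next to memory sub-arrays, which is the premise of the CNM layout the authors describe.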
Other papers presented included one on doubling local memory bandwidth for artificial intelligence, machine learning and deep learning applications, and another on reducing the power needed for deep learning-based video stream analysis.
In the latter of those two papers, De described for FierceElectronics how a chip for event-driven visual data processing could be paired with new algorithms to process visual inputs only when motion occurs.
For example, a surveillance camera and its underlying technology could focus on two people walking in a large parking lot while ignoring the static scene around them. The goal is to improve image accuracy while alleviating the high compute and memory requirements of visual analytics at the edge.
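As a rough illustration of the event-driven idea (a hypothetical sketch, not the algorithm from the paper), a frame-differencing step can flag only the image tiles where motion occurred, so the expensive analytics run on a fraction of the pixels:

```python
# Hypothetical sketch of event-driven visual processing: compare
# successive frames and report only the tiles whose pixels changed,
# so downstream analysis can skip the static parts of the scene.

def changed_tiles(prev, curr, tile=4, threshold=10):
    """Return (row, col) indices of tiles with significant pixel change.

    prev and curr are equal-sized 2-D lists of grayscale values.
    """
    h, w = len(curr), len(curr[0])
    active = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            if any(abs(curr[y][x] - prev[y][x]) > threshold
                   for y in range(r, min(r + tile, h))
                   for x in range(c, min(c + tile, w))):
                active.append((r // tile, c // tile))
    return active

# A static 8x8 scene, then motion appears in the bottom-right tile:
prev = [[0] * 8 for _ in range(8)]
curr = [row[:] for row in prev]
curr[6][6] = 200
print(changed_tiles(prev, prev))  # prints [] -- nothing to process
print(changed_tiles(prev, curr))  # prints [(1, 1)] -- one active tile
```

In a static scene nothing is flagged, so compute and memory traffic drop to near zero; when two people walk through the frame, only the tiles they cross are handed to the heavier deep learning pipeline.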