The battle for enabling AI computing is intense. Every chip vendor is creating AI accelerators, each claiming its products are better than anything else available.
Yet the AI market is highly diverse, and each product has its own place and advantages – from Nvidia’s massively parallel systems targeted at machine learning, to Intel’s chips aimed more at data center inference workloads, down to smartphones and other mobile devices, where Qualcomm has embedded significant AI acceleration for tasks like image processing and security.
The truth is, AI is not one market – rather it’s a series of different sub-markets that cannot be served by only one type of chip.
Intel is exploring a different approach as an addition to its current AI acceleration capabilities, not as a replacement for its current devices. While not the only company exploring this approach, Intel is building neuromorphic chips that emulate the brain. Instead of the traditional von Neumann architecture – processing elements tied to shared memory systems – that has powered computers, including today’s massively parallel designs, since the beginning, Intel is experimenting with an approach that combines many individual compute elements (processing capability with dedicated memory cells) and connects them through high-speed mesh networks. It’s now on its second-generation neuromorphic chip, the faster and more densely configured Loihi 2.
By emulating biological brain functions with densely packed neuron and synapse equivalents, Intel expects to be able to build chips for certain kinds of AI functions that consume very little power compared to larger, traditionally designed parallel-processing AI chips – milliwatts vs. tens to hundreds of watts for traditional chips. While not a replacement for all AI learning systems, the approach holds promise in specialized workloads like routing, scheduling and audio/video processing, where biological brains are far more capable than current AI systems.
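To make the neuron-and-synapse model concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the kind of basic spiking model neuromorphic chips such as Loihi implement in silicon. This is an illustrative simplification, not Intel’s actual circuit design; the parameter values are arbitrary. The key point is that computation is event-driven – a neuron communicates only when its potential crosses a threshold – which is why sparse activity translates into milliwatt-scale power.

```python
# Illustrative sketch (not Intel's actual design): one leaky
# integrate-and-fire (LIF) neuron. Inputs accumulate on a "membrane
# potential" that leaks over time; the neuron emits a spike event only
# when the potential crosses a threshold, then resets.

def simulate_lif(input_current, threshold=1.0, decay=0.9):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the list of time steps at which the neuron spiked.
    """
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = potential * decay + current  # leak, then integrate input
        if potential >= threshold:               # fire when threshold crossed
            spikes.append(t)
            potential = 0.0                      # reset after the spike
    return spikes

# A burst of weak inputs followed by strong ones yields only sparse
# spike events, not a continuous stream of values.
print(simulate_lif([0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # prints [4]
```

In hardware, each such neuron is a tiny compute element with its own local state, and spikes are the only traffic on the mesh network – when nothing crosses threshold, essentially nothing moves or burns power.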
Intel does not currently plan to sell the Loihi chips. Rather, it plans to build a limited number of systems and “rent” them to groups and institutions that can experiment with and advance the neuromorphic ecosystem by creating specialized programming and similar enhancements that can ultimately be shared and made part of the open source ecosystem. At this time, Loihi and neuromorphic computing are clearly in the experimental stages of development, with mass deployments at least five years away.
Perhaps even more important than the hardware is the software Intel is making available for enabling neuromorphic capability: Lava and Magma.
Lava is an open source environment meant to enable modeling and development of neuromorphic code. Its intent is ultimately to enable programs that run on the Loihi 2 chips, but it also produces a version of the code that can run on a standard processor, allowing evaluation on generally available computers. While the initial version of the platform is still a work in progress and will see continuing source code enhancements, it ultimately should be similar in approach to what Intel has done with its oneAPI code: create an environment that enables code to be produced, and then allows it to run on the most compatible and/or optimized platform (e.g., CPU, GPU, FPGA, AI accelerator, etc.). Further, since Lava is being open-sourced, community enhancements should make it more relevant to users over time.
Magma is the code that runs on the Loihi chip environment and will remain proprietary Intel property – much like the IA/x86 instruction set. The translation from Lava to Magma will be a key component of the higher-level capabilities inherent in the Lava environment, much as higher-level languages like C and Python rely on translation down to machine instructions today.
What all of this means is that in five to 10 years, you can expect to see a range of neuromorphic subsystems embedded in SoCs and optimized for specific AI and deep learning workloads. It’s unlikely that neuromorphic chips will be made into standalone servers; rather, they will become add-on accelerators for specific purposes in dedicated SoCs, much as GPUs are companions to a CPU.
Bottom line: AI is ultimately not going to be served by a single type of processing architecture. By adding neuromorphic computing, Intel gains another tool among the growing number of specialized functions that eventually will greatly accelerate AI processing. Intel is making a bet on a technology that still needs to be proven, but if successful it adds a key component to a growing array of AI-specific processing capabilities that will enable much broader implementation of AI-powered systems longer term. This will ultimately result in a wider range of systems deployed by enterprises and end users alike. Neuromorphic computing is definitely a technology that bears watching over the next several years.
Jack Gold is founder and principal analyst at J. Gold Associates, LLC., an information technology analyst firm based in Northborough, Massachusetts. He has more than 25 years of experience as an analyst and covers the many aspects of business and consumer computing and emerging technologies. He works with many companies. Follow Jack on Twitter and on LinkedIn.