CEVA announced a second-generation AI processor architecture called NeuPro-S for neural network inferencing work at the edge.
In conjunction with that architecture, CEVA introduced CDNN-Invite API, compiler technology delivered as firmware that lets NeuPro-S cores co-process with custom-made neural network engines. It runs atop CDNN, the company’s neural network compiler.
Together, the CEVA products are designed for vision-based devices that need edge AI processing, including autonomous vehicles, smartphones, surveillance and consumer cameras, and emerging applications such as XR headsets, robots and industrial systems.
NeuPro-S is designed to process neural networks for detection and classification of objects within videos and images in edge devices.
CEVA’s announcements are likely intended to make its licensable IP core products attractive to SoC designers who might otherwise create their own AI chip designs, much as Tesla has done with its Full Self-Driving computer chip, according to Junko Yoshida in EE Times.
CEVA, based in Mountain View, California, made the announcements at AutoSens in Brussels on Sept. 17.
The new NeuPro-S offers 50% higher performance, requires 40% less memory bandwidth and consumes 30% less power than the first-generation AI processor, CEVA claimed. The new products are designed to lower the cost and complexity of innovating in neural network technology.
NeuPro-S is available now and has already been licensed to creators of automotive and consumer camera applications, while CDNN-Invite API will be ready for general licensing by the end of 2019. The NeuPro-S family of processors comprises the NPS1000, NPS2000 and NPS4000, which execute 1,000, 2,000 and 4,000 8-bit MACs per cycle, respectively. The NPS4000 delivers 12.5 tera operations per second (TOPS) and can be scaled to reach up to 100 TOPS, which CEVA claimed is the highest convolutional neural network performance per core available.
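For readers wanting to sanity-check those figures, the quoted TOPS number follows from the MAC count and the clock rate. A minimal sketch, assuming the usual convention that one MAC counts as two operations (a multiply plus an accumulate); the clock frequency shown is inferred from the article's numbers, not stated by CEVA:

```python
def peak_tops(macs_per_cycle: int, clock_ghz: float) -> float:
    """Peak throughput in tera-operations per second.

    Assumes 1 MAC = 2 ops (multiply + accumulate), the common
    convention behind vendor TOPS claims.
    """
    ops_per_cycle = macs_per_cycle * 2
    return ops_per_cycle * clock_ghz * 1e9 / 1e12

# Clock implied by the NPS4000's quoted 12.5 TOPS at 4,000 MACs/cycle
# (an inference from the article's numbers, not a stated spec):
implied_clock_ghz = 12.5e12 / (4000 * 2) / 1e9   # 1.5625 GHz

print(peak_tops(4000, implied_clock_ghz))  # → 12.5
```

By the same arithmetic, reaching the claimed 100 TOPS ceiling would take roughly eight NPS4000 cores at that clock.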