LAS VEGAS, NV --- CEVA, Inc. introduces CDNN2 (CEVA Deep Neural Network), its second generation neural network software framework for machine learning. CDNN2 enables localized, deep learning-based video analytics on camera devices in real time. This significantly reduces data bandwidth and storage compared to running such analytics in the cloud, while lowering latency and increasing privacy. Coupled with the CEVA-XM4 intelligent vision processor, CDNN2 offers significant time-to-market and power advantages for implementing machine learning in embedded systems for smartphones, advanced driver assistance systems (ADAS), surveillance equipment, drones, robots and other camera-enabled smart devices.
CDNN2 builds on the successful foundations of CEVA’s first-generation neural network software framework (CDNN), which is already in design with multiple customers and partners. It adds support for TensorFlow, Google’s software library for machine learning, and offers improved capabilities and performance for the latest and most sophisticated network topologies and layers. CDNN2 also supports fully convolutional networks, allowing any given network to work with any input resolution.
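To illustrate why fully convolutional networks are resolution-independent: a convolution kernel slides over the input, so the same weights apply to an image of any size, and only the output dimensions change. This is a generic NumPy sketch of that property, not CEVA code:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution via explicit loops (illustrative, not optimized)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.random.rand(3, 3)
# The same 3x3 kernel applies unchanged at any input resolution;
# only the output size scales with the input size.
small = conv2d(np.random.rand(8, 8), kernel)    # output shape (6, 6)
large = conv2d(np.random.rand(16, 16), kernel)  # output shape (14, 14)
```

A fully connected layer, by contrast, has a weight matrix whose size is fixed by the input size, which is why networks containing such layers are tied to one input resolution.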
Using a set of enhanced APIs, CDNN2 improves overall system performance, including direct offload from the CPU to the CEVA-XM4 for various neural network-related tasks. These enhancements, combined with the “push-button” capability that automatically converts pre-trained networks to run seamlessly on the CEVA-XM4, underpin the significant time-to-market and power advantages that CDNN2 offers for developing embedded vision systems. The result is that CDNN2 generates an even faster network model for the CEVA-XM4 imaging and vision DSP, consuming significantly less power and memory bandwidth than CPU- and GPU-based systems.
CDNN2 is intended for object recognition, advanced driver assistance systems (ADAS), artificial intelligence (AI), video analytics, augmented reality (AR), virtual reality (VR) and similar computer vision applications. The CDNN2 software library is supplied as source code, extending the CEVA-XM4’s existing Application Developer Kit (ADK) and computer vision library, CEVA-CV. It is flexible and modular, capable of supporting either complete CNN implementations or specific layers for a wide breadth of networks, including AlexNet, GoogLeNet, ResidualNet (ResNet), SegNet, VGG (VGG-19, VGG-16, VGG_S) and Network-in-Network (NIN), among others. CDNN2 supports the most advanced neural network layers, including convolution, deconvolution, pooling, fully connected, softmax, concatenation and upsample, as well as various inception models. All network topologies are supported, including multiple-input-multiple-output, multiple layers per level and fully convolutional networks, in addition to linear networks (such as AlexNet).
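Two of the layer types named above, softmax and pooling, are simple enough to sketch in a few lines of NumPy. The implementations below are generic textbook versions for illustration only and say nothing about how CDNN2 realizes them on the CEVA-XM4:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis (outputs sum to 1)."""
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def max_pool2x2(x):
    """2x2 max pooling with stride 2 (assumes even height and width)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

probs = softmax(np.array([1.0, 2.0, 3.0]))   # probabilities summing to 1
pooled = max_pool2x2(np.arange(16.0).reshape(4, 4))  # shape (2, 2)
```

Softmax typically terminates a classification network (e.g. AlexNet or GoogLeNet), while pooling layers reduce spatial resolution between convolutional stages.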
A key component of the CDNN2 framework is the offline CEVA Network Generator, which converts a pre-trained neural network to an equivalent embedded-friendly network using fixed-point math at the push of a button. CDNN2 deliverables include a hardware-based development kit that allows developers not only to run their network in simulation, but also to run it in real time on the CEVA development board.
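The core idea behind such a float-to-fixed-point conversion is to scale weights by a power of two, round to integers, and clip to the target word width. The sketch below shows that general technique with assumed parameters (16-bit words, 12 fractional bits); it is not CEVA's actual conversion algorithm:

```python
import numpy as np

def quantize_fixed_point(weights, frac_bits=12, total_bits=16):
    """Round float weights to signed fixed-point integers.

    frac_bits and total_bits are illustrative choices, not CDNN2 settings.
    """
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))        # e.g. -32768 for 16-bit
    hi = (1 << (total_bits - 1)) - 1     # e.g.  32767 for 16-bit
    return np.clip(np.round(weights * scale), lo, hi).astype(np.int32)

def dequantize(q, frac_bits=12):
    """Recover approximate float values from fixed-point integers."""
    return q.astype(np.float64) / (1 << frac_bits)

w = np.array([0.5, -0.25, 1.1])
q = quantize_fixed_point(w)
w_approx = dequantize(q)  # close to w, within the rounding step 2**-13
```

Fixed-point arithmetic is what makes inference practical on an embedded DSP: integer multiply-accumulate units are far cheaper in silicon area and power than floating-point ones, at the cost of a small, bounded rounding error.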