It’s Getting Crowded at the Edge – Simplifying Data Capture with Neuromorphic Analog Signal Processing (NASP)

As investment in sensors, cameras and other devices that make up the Internet of Things (IoT) accelerates, the roadblock moves to the edge of the network. Whether the sensors are used for fitness, security, medical or industrial purposes, more devices mean more data, and not all of it is useful. An innovative new concept known as Neuromorphic Analog Signal Processing (NASP) provides fast and easy conversion of trained neural networks onto Tiny AI silicon chips with ultra-low power consumption, low latency and small size. These Neuromorphic Front End (NFE) chips based on NASP rapidly extract only the useful information from a mountain of raw sensor and telemetry data, substantially reducing the volume of data sent to applications while also reducing latency and requiring significantly less power than other solutions available today.

An Analog Answer

A neuromorphic chip imitates the brain with elements that act as “neurons” (nodes that process information) and “axons” (weighted connections between the nodes). Operational amplifiers implement the neurons, and a mask-programmable resistor layer implements the axons. This analog structure performs truly parallel analog data processing without memory accesses or other data traffic. The analog neuromorphic structure is trained to take raw sensor data and extract specific information, which is passed on to external applications and systems for further processing.
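The behavior described above can be sketched numerically. In an analog neural structure, each axon weight corresponds to a conductance, and the summing op-amp effectively adds the currents flowing into the neuron (Kirchhoff's current law). The sketch below is a purely illustrative digital model of one such neuron; the values and the `tanh` nonlinearity are assumptions, not details of any real NASP design (real circuits realize negative weights with differential signal paths rather than negative resistors).

```python
import numpy as np

def neuron_output(voltages, conductances, activation=np.tanh):
    """Numeric model of one analog neuron: each input voltage drives a
    current through its weight resistor (I = G * V), the op-amp sums the
    currents, and a nonlinearity shapes the result. Values are illustrative."""
    summed_current = np.dot(conductances, voltages)  # current summation
    return activation(summed_current)

# Three sensor inputs (volts) and three axon weights (signed conductances)
v_in = np.array([0.5, -0.2, 0.8])
g = np.array([1.0, 0.5, -0.75])
print(neuron_output(v_in, g))
```

The key point the model captures is that the multiply-accumulate happens "for free" in the physics of the resistor network, which is why the analog structure needs no memory traffic.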

Neurons are physically implemented as analog circuit elements according to a mathematical model of a single neuron. Unlike costly efforts to accommodate a neural network on a general-purpose digital chip, an application-specific neural network is modeled and verified using standard analog structures that can be processed by most chip foundries. The analog structure is the key to extreme energy efficiency, low latency and chip-level processing. The result is an application-specific analog inference engine on a chip, tailored for the task of extracting useful information from raw, noisy sensor data.

Tiny ML Implementation

The neural core is modeled and generated using the NASP Compiler, which can convert almost any trained neural network, from any framework, into an NFE analog inference structure. The resulting neural core extracts the right data from noisy, raw sensor input. Because this “capturing” is a fixed task, it can be accomplished quickly and efficiently by a fixed NFE neural core. Analyzing and/or classifying the extracted data requires additional flexibility and is done outside the NFE chip.
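One step any such compilation must perform is mapping trained floating-point weights onto the discrete values a mask-programmable resistor layer can actually realize. The NASP Compiler's internal pipeline is not public, so the sketch below only illustrates this general weight-to-conductance quantization idea; the function name and the 16-level resolution are assumptions for the example.

```python
import numpy as np

def quantize_weights(weights, levels=16):
    """Illustrative compiler step: snap trained floating-point weights onto
    a small set of evenly spaced discrete levels, as a fixed resistor layer
    can only realize a finite set of conductance values. Hypothetical; not
    the actual NASP Compiler algorithm."""
    half = levels // 2
    step = np.abs(weights).max() / half          # size of one resistor step
    q = np.clip(np.round(weights / step), -half, half)
    return q * step

w = np.array([0.13, -0.87, 0.42, 1.0])           # trained weights (example)
print(quantize_weights(w))
```

Verifying that the network still performs acceptably after this kind of quantization is exactly the sort of modeling and verification the article attributes to the compilation flow.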

The NASP chips are true Tiny ML implementations that reduce latency and power consumption and enable inference computations directly on devices such as wearables and IoT sensors, increasing their functionality while addressing user privacy concerns, since the data stays on the device.

NASP provides always-on processing of raw data at the sensor. Extracting only the relevant pieces from raw sensor data immediately reduces the volume of data that must be processed. The specific, relevant data is then passed to another processing node (an MCU, for example) or to applications.
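The data-reduction effect of extracting only the relevant pieces can be made concrete with a toy example. In the sketch below, a stream of mostly-noise samples is scanned and only short windows around interesting samples are kept for the downstream MCU. The real extraction in an NFE chip is a trained neural network, not a simple threshold; the threshold detector here is an assumed stand-in to illustrate the volume reduction only.

```python
import numpy as np

def extract_events(signal, threshold, window=8):
    """Keep only short windows around samples that cross a threshold,
    discarding everything else before it is sent downstream. A stand-in
    for the trained extraction network in an NFE chip."""
    events = []
    i = 0
    while i < len(signal):
        if abs(signal[i]) > threshold:
            events.append(signal[i:i + window])
            i += window
        else:
            i += 1
    return events

rng = np.random.default_rng(0)
raw = rng.normal(0, 0.1, 1000)   # 1000 samples of sensor noise
raw[500] = 2.0                   # one event of interest
kept = extract_events(raw, threshold=1.0)
print(len(raw), sum(len(e) for e in kept))  # volume before vs. after
```

Only a handful of samples out of a thousand survive the extraction step, which is the same principle behind the large data-volume reductions claimed for NFE chips.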

While NASP addresses an emerging market, it is significant in that it substantially offloads AI processing of sensor data and greatly reduces the volume of data that must be processed at the edge. Reducing the volume of sensor data by up to 1,000 times, along with overall power consumption, spares the network and IoT applications from having to transport and process the tsunami of data accumulating at the edge.

The editorial staff had no role in this post's creation.