The proliferation of battery-powered IoT devices and the advent of edge processing are on a collision course with available battery power. Fierce Electronics sat down with Tom Doyle, Founder and CEO of Aspinity, to talk about his company’s analyze-first analog architecture and how it addresses power challenges.
Fierce Electronics: What is the driver for more edge processing? What problems does edge processing solve and what problems does it introduce?
Doyle: The move to edge processing is being driven by requirements for always-on devices to respond quickly, to preserve user privacy, and to support the development of portable intelligent devices that don’t always need to be connected to the internet. While these problems can be solved by integrating more processing into the device so that less data need to travel between the device and cloud, incorporating more functionality into the device uses more power. So, if we just incorporate the same always-on processing methodology that is used in wall-powered devices, we will end up with devices with very short battery life. This problem is only compounded when we’re talking about tiny devices like hearables, which have very small batteries in the first place.
FE: What’s the relationship between edge processing and machine learning?
Doyle: Edge devices often use sound, motion, or touch as their main mode of interfacing with users. The ability of these devices to interpret what they sense is generally enabled by machine learning, which used to be handled only in the cloud. Integrating this capability into small, portable edge devices has become possible with the introduction of TinyML: the implementation, on very-low-power semiconductor processors, of machine learning intelligence that previously required the heavy processing capabilities of the cloud. Without TinyML processors, we could never do this level of sophisticated processing on the device, and we would have continued to rely on an internet connection and the cloud.
Fierce Electronics: One example of an application of edge processing is in always-listening and voice-first devices. Why are today’s systems struggling with battery life?
Doyle: Always-listening devices are continuously analyzing environmental sound for data, and they come in several sub-categories:
1) Voice-first devices, such as voice-enabled TV remote controls and wireless earbuds, are always processing and analyzing that sound data for wake words and commands.
2) Other acoustic event-detection devices are always listening for specific acoustic events, such as window glass breaks, fire and smoke or other types of alarms, gunshots, or water drips, so they can alert you to any sign of trouble and enable you to address the problem immediately—even if you're away.
A traditional always-on architecture uses a digitize-first approach where all incoming sound data are treated the same way—they’re digitized immediately and then analyzed by the digital processor. Since digital processors tend to be the highest-power components in the always-listening system, the most efficient way to save system power is to make sure that when the system’s on, it’s actually processing data that could contain what it’s looking for. For example, the digital processor in a voice-enabled TV remote spends 100% of the time processing all sound data in case it hears a wake word. But if there’s no voice present, there isn’t going to be a wake word. We estimate that voice is present between 10% and 20% of the day, so for the other 80 to 90% of the time, the voice-enabled TV remote is digitizing and analyzing microphone data that are completely irrelevant to wake words and will ultimately be thrown away.
In the case of a glass-break detection device, that system is always on, even if a glass break happens only once every ten years. Again, we’re talking about irrelevant data, resulting in excessive power consumption. So there has to be a better way.
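The duty-cycle argument above can be put into rough numbers. The sketch below is a back-of-envelope comparison only: the power figures and the 15% voice duty cycle are illustrative assumptions (the interview gives a 10-20% range), not Aspinity specifications.

```python
# Back-of-envelope comparison of average power for a digitize-first
# vs. an analyze-first always-listening system. All figures below are
# illustrative assumptions, not measured or vendor-published numbers.

ACTIVE_MW = 10.0   # assumed power of the ADC + digital processor while running
GATE_MW = 0.025    # assumed power of an always-on analog gatekeeper
VOICE_DUTY = 0.15  # voice assumed present ~15% of the day (10-20% estimate)

def avg_power_digitize_first() -> float:
    # The digital chain runs 100% of the time, relevant data or not.
    return ACTIVE_MW

def avg_power_analyze_first(duty: float = VOICE_DUTY) -> float:
    # The gatekeeper is always on; the digital chain wakes only
    # for the fraction of time the event of interest is present.
    return GATE_MW + duty * ACTIVE_MW

print(f"digitize-first: {avg_power_digitize_first():.3f} mW average")
print(f"analyze-first:  {avg_power_analyze_first():.3f} mW average")
print(f"battery-life gain: ~{avg_power_digitize_first() / avg_power_analyze_first():.1f}x")
```

Under these assumed numbers the analyze-first system averages about 1.5 mW instead of 10 mW; for a rare event like a glass break (duty cycle near zero), the average collapses toward the gatekeeper power alone.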
FE: You advocate the idea of analyzing first using analogML. What does that mean?
Doyle: Analog machine learning (which we call analogML) and the analyze-first architecture are intimately connected. Our analogML core implements feature extraction, a neural network, and data compression and is built on our patented RAMP technology platform, which allows us to replicate sophisticated digital tasks, such as inferencing, in ultra-low-power analog. The analogML core can be programmed to detect many specific acoustic events, so it acts like an intelligent analog gatekeeper at the front end of the always-listening system, keeping the system asleep unless a specific event is detected.
An analyze-first architecture addresses the root of the power challenges that we discussed earlier, which is the wasteful high-power processing of unimportant data. So, using analogML is not just replacing one machine learning processor with another. Instead, analogML enables a completely new architecture that spends just a little bit of power up front to minimize the amount of data that are ultimately digitized and analyzed by downstream processors. By determining which data are important while they are still analog and keeping the digital system asleep for most of the time, we can significantly extend the battery life of the device—particularly if the important data come along less than 1% of the time, as typically happens for a window break.
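The gatekeeping control flow described above can be sketched in a few lines. This is a black-box model of the architecture only: the function names (`analog_core_detects`, `wake_digital_system`) are hypothetical stand-ins for illustration, not Aspinity APIs, and the analog core's internal inference is not modeled.

```python
# Control-flow sketch of an analyze-first pipeline. The always-on analog
# gatekeeper is modeled as a cheap black-box classifier; the expensive
# digitize-and-analyze path runs only when the gatekeeper fires.
# All names here are hypothetical stand-ins, not a real vendor API.

def analog_core_detects(frame) -> bool:
    # Stand-in for the analogML gatekeeper: always-on, microwatt-scale
    # inference directly on the raw analog signal.
    return frame.get("contains_event", False)

def wake_digital_system(frame) -> str:
    # Stand-in for the high-power path: digitize, then run full analysis.
    return "digitized and analyzed"

def run(frames) -> int:
    woken = 0
    for frame in frames:
        if analog_core_detects(frame):   # analog domain, always on
            wake_digital_system(frame)   # digital domain, woken on demand
            woken += 1
        # Otherwise the digital system stays asleep and the irrelevant
        # data are never digitized at all.
    return woken

frames = [{"contains_event": False}] * 99 + [{"contains_event": True}]
print(run(frames))  # prints 1: the digital system wakes for 1 of 100 frames
```

The design point is that the decision about what matters is moved ahead of the analog-to-digital converter, so the 99 irrelevant frames never consume digital-domain power.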
FE: How challenging is it to determine what data is important earlier in the signal chain? Is that the job of a domain expert? A data scientist? The embedded developer? Everybody?
FE: Can you provide a specific, real-world example of how an analogML chip eliminates the digital processing of extraneous data?
Doyle: Sure, here’s an example of a demo that I did using one of our evaluation platforms. This demo uses our analogML core, programmed to recognize glass breaks, to gate the digital system. You’ll see as you listen that the analogML core is very discriminating: it doesn’t trigger on loud noises or noises of similar frequencies, but only when it detects an actual glass break.
This is just one demo of acoustic event detection. There are others as well for detecting various alarm sounds and, of course, voice as we’ve talked about. We’re able to use standard neural network training tools to develop machine learning models for many acoustic events that can then be programmed into our analogML core—so the possibilities are really endless.
FE: How does an analogML chip differ from legacy digital architectures?
Doyle: The analogML core is unique among machine learning processors in that it accomplishes ALL of its processing, from sensor interfacing to feature extraction to inferencing to data compression, within the analog domain.
There are some lower-power machine learning processors today that leverage analog for some portion of these functions, but they still ultimately require digitizing all data before any processing can start. These solutions don’t create more efficient overall systems, which is what’s needed if you want to truly extend battery life. That’s where Aspinity’s solution is completely different: analogML actually eliminates the digitization of the data that aren’t important so that the entire system can be made lower-power, and the overall amount of data that even requires the high-power, high-resolution analysis is minimized. And that’s how we can effect a revolutionary change in battery life.
FE: What are the systems considerations for integrating analog ML?
Doyle: Our analogML core is built to drop easily into an always-on system at the very beginning of the signal chain, right after the sensor. Once the hardware is integrated, analogML models can be loaded into the core depending on the application needs, even after deployment. With its flexible sensor interface, the analogML core connects to the analog output of many different kinds of sensors, such as microphones or accelerometers, and can detect events directly from the raw analog data. If the application requires the analogML core to collect and compress preroll, as in a voice wake-up device, we work with our customers to make sure that their preferred MCU is properly configured to decompress the preroll for the wake word engine. We demonstrate this with our recently announced voice activity detection evaluation kit, which uses an ST microcontroller with a wake word engine.
Fierce Electronics: Isn’t analog scary for most embedded developers?
Doyle: It can be for many up-and-coming developers because they’re trained on digital much more often than on analog. Analog has that reputation because designing circuits to work with continuous analog signals, rather than the discrete zeroes and ones of digital, is more complex. However, we’re not asking people to be analog design experts to use analogML – that’s our expertise. Instead, our analogML development environment has been designed for engineers without analog expertise, making it easy for them to build algorithms for the analogML core through the standard programming or training interfaces that they’re already accustomed to using.
FE: What tools and resources are available for engineers to get up to speed with analog ML and more easily work with the technology?
Doyle: We have two evaluation kits that speed up the process of designing with analogML. In December, we released a Voice-First Evaluation Kit (EVK2) that includes audio test files for quick evaluation and a high-performance MEMS microphone from Infineon for live testing along with an integrated high-performance MCU from STMicroelectronics running a wake word engine. The EVK2 gives systems designers the opportunity for hands-on experience with analog voice activity detection and the ability to measure for themselves the power savings that can be realized in their own designs—as well as the accuracy that can be achieved using the Aspinity preroll solution for keyword detection.
We also have a second evaluation kit (EVK) that is more focused on acoustic event detection that does not require preroll. The Aspinity algorithm library for that EVK currently supports voice detection, glass break, and T3 smoke/T4 carbon monoxide alarms.
And as I mentioned before, the analogML development environment has been designed so that application-specific algorithms can be developed using standard training interfaces and then loaded onto the analogML core.
FE: Can you give us an update on your company?
Doyle: At CES 2020, we demonstrated the first end-to-end voice activity detection kit with STMicroelectronics, and in May 2020 we announced a partnership with Infineon. Then this past December, we launched the EVK2, the Voice-First Evaluation Kit, which pairs ST’s STM32H743ZI with an Infineon microphone, and we expect more news to follow with Infineon fairly soon. We also expect to announce some additional collaborations in the near term. Our first product will be based on our analogML core, and we expect that it will be announced later this year.
FE: How does your chip differ from the Analog Inference Accelerator (AnIA) test chip from imec and Global Foundries and Mythic’s M1108?
Doyle: Both of these technologies fall into the category that we previously discussed where the chips perform select power-intensive computations with lower-power analog circuitry but are otherwise performing in a digital paradigm where all sensor information is still digitized first. So these, like many other approaches, are focused on improving the efficiency of the neural network, not the efficiency of the entire system that relies on it.
FE: What advice do you have for engineers on how to get started using analogML?
Doyle: Aspinity makes it straightforward for customers to use analogML and benefit from an analyze-first architecture in their systems. We have already developed an algorithm library that supports voice activity detection with and without preroll support, glass break detection, and smoke and carbon monoxide alarm tone detection, and we continue to add to this library regularly. Customers can drop an analogML core running one of these algorithms directly into their existing system.
In addition, Aspinity can build specific analogML models for customers for custom applications, and in the future, we will release an SDK that customers can use to design and compile their own analogML models using industry-standard tools (e.g., PyTorch), no analog expertise required.
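To make the "industry-standard tools, no analog expertise" claim concrete, here is a minimal sketch of the kind of tiny event-detection model one might define in PyTorch. Everything about it is an assumption for illustration: the feature count, the layer widths, and the class name are invented, and Aspinity's actual SDK and model format are not public.

```python
# Illustrative only: a tiny binary event-detection classifier defined with
# standard PyTorch building blocks, of the sort that could be trained
# offline and then compiled down to a low-power core. Sizes and names are
# hypothetical; this is not Aspinity's SDK or model format.
import torch
import torch.nn as nn

class TinyEventDetector(nn.Module):
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 4),  # kept small to suit a constrained target
            nn.ReLU(),
            nn.Linear(4, 1),
            nn.Sigmoid(),              # probability that the event is present
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyEventDetector()
features = torch.randn(1, 8)   # stand-in for extracted acoustic features
score = float(model(features)) # score in (0, 1); threshold to fire the gate
print(score)
```

The engineer's workflow stays entirely in the familiar training framework; mapping the trained weights onto analog circuitry would be the compiler's job, not the user's.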
Editor's Note: Learn more about Aspinity's analogML technology during a panel discussion at the Low Power Technologies Summit, a digital event series taking place February 16-17, with a focus on power management strategies for IoT, energy-hungry devices, and energy harvesting. You’ll get insights and tips from engineers in the trenches who know their stuff. For more information and to register for your free pass, click here.