From wireless, displays, and software to MCUs, many elements of a design can wreck a power budget. And given the almost insatiable demand for battery-powered devices that can do more with less power, design engineers are often required to make tough choices, like trading off features for acceptable battery life.
But fortunately, there are things that engineers can do in both hardware and software, say three embedded engineers with expertise in low-power design. Fierce Electronics spoke with Walt Maclay, President, Voler Systems; Colin Walls, Embedded Software Technologist, Mentor, a Siemens Company; and Tom Doyle, President, Aspinity.
FE: Let’s start off right away with a simple question—Is there one thing engineers can do that will give them the biggest bang for their buck? Because if there is, this could be a very short interview with you all! Or, is it more the case of a lot of little things that add up to an energy efficient design?
Maclay: There are lots of little choices in most cases. You need to identify what the biggest power drains are. Then put your effort into reducing those or finding alternatives that drain less power.
Walls: The analysis of use cases should show what the hot spots are and, hence, where to apply effort.
Doyle: This is the issue with the current paradigm, where the focus is on adding up component power rather than on system power and architecture. Simply shaving small percentages of power from each component does not solve the system-power problem, because those components are still on all the time, even though they are processing irrelevant data.
FE: So, what are some of the most common causes of power consumption issues in battery powered devices?
Maclay: Power is used by sensors, wireless transmission, processors, and displays. The software must be written correctly to get these devices to their lowest power.
Walls: Improper management of resources. At different times in the operation of a device (i.e., different use cases), different resources (CPU power, peripheral availability, etc.) are needed.
Doyle: The biggest cause of short battery life is the processing of enormous amounts of data. Many battery-powered devices are constantly collecting and processing data from their environment. Voice-enabled devices, for example, are always listening for a wake word. Instead of sending all of the incoming data to the cloud, we’re doing more edge processing, or local processing, in order to preserve user privacy, to reduce latency, and in some cases, to eliminate the need to be connected to the internet all of the time. But continuously processing all of that incoming data consumes power, and the result is a reduction in battery life, which becomes very frustrating for users.
FE: Power management seems like such a big hairball of choices and trade-offs, it seems a wonder that engineers manage at all to put products on the market that hit the mark in terms of the right features, functions, and acceptable battery life. What is your recommendation on how engineers get their heads around it all?
Doyle: New features and functionality are constantly being added, so that’s why we need to think holistically about making each system as efficient as possible in its entirety, regardless of how many new features there are. The biggest obstacle to battery life is having to process a high volume of data, especially when so much of it is irrelevant to the task at hand. So, there are huge benefits to be gained by architecting the system to intelligently minimize the amount of data as early as possible in the signal chain. Aspinity’s analog machine learning chip—which we call analogML—is the only chip that classifies sensor data while it is still analog, so we’re able to determine which data is important at the moment the data enters the system.
This type of architecture, which we call “analyze-first,” uses just a little power up front in the analog domain to detect important data. It then keeps the majority of the downstream chips, including the analog-to-digital converter, asleep unless something relevant is happening. So, while it will always be important to select low-power components, the lowest power will be achieved by using a more efficient architecture in which most of those components are kept off unless they’re actually needed.
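The gating idea behind "analyze-first" can be sketched in ordinary code, independent of any particular chip. Everything below (the detector, threshold, and workload) is a hypothetical stand-in, not Aspinity's actual interface:

```python
# Sketch of the "analyze-first" idea: a cheap always-on detector gates
# an expensive downstream stage so it runs only on relevant data.
# Names, threshold, and workloads are hypothetical stand-ins.

def cheap_detector(frame, threshold=0.5):
    """Low-cost relevance check (stands in for analog-domain detection)."""
    return max(abs(x) for x in frame) > threshold

def expensive_pipeline(frame):
    """Stands in for the ADC + digital processing that stays asleep otherwise."""
    return sum(x * x for x in frame)  # placeholder workload

def process_stream(frames):
    results, wakeups = [], 0
    for frame in frames:
        if cheap_detector(frame):  # only wake the costly stage when needed
            wakeups += 1
            results.append(expensive_pipeline(frame))
    return results, wakeups

frames = [[0.1, 0.2], [0.9, 0.1], [0.0, 0.0], [0.7, 0.8]]
_, wakeups = process_stream(frames)
# Only 2 of these 4 frames wake the expensive stage.
```

The power saving comes from the asymmetry: the detector's cost per frame is tiny, so total consumption tracks how often relevant data actually appears rather than how often data arrives.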
FE: So, how does the whole process of maintaining a power budget work? Should you start with a power budget (and if so how does one even define what that is!) and work to meet it, or do you optimize your design around other key market-driven features and functions, pick your components, then see where you land with power consumption, then iterate?
Maclay: You need to start with product requirements. They define how long the battery lasts, how big it can be, how fast data must be processed, what type of display is used, and what types of sensors are used. The key decisions about power are made during the system design while completing the requirements. Tough choices have to be made. Sometimes major performance changes are needed, if the power budget cannot be met with the performance desired. To do the system design you need to have the experience to calculate the power consumption of each part of the product.
Walls: You start with use cases and maybe an idea of how large a battery might be acceptable. You can’t have a massive battery hanging off someone’s wrist with a wearable! It is then somewhat a matter of paring the former to align with the latter.
Doyle: Our experience is that most new projects start with a specification for the battery lifetime requirement. The engineering design team then translates that requirement into a power budget, i.e., the amount of battery power that’s available between device charges. And then the fun starts with trying to find components that do everything needed AND meet that power budget. With early analog event detection, design engineers can operate within a new paradigm that gives more flexibility for design choices. Because now, not every chip has to be on 100% of the time, so engineers can choose a component that may be slightly higher-performance or that adds a new feature, without linearly adding to the power consumption of the system.
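The translation Doyle describes, from a battery-life requirement to a power budget and then a component check, amounts to back-of-the-envelope arithmetic. A minimal sketch, with entirely hypothetical component names and currents:

```python
# Sketch: turn a battery-life requirement into an average-current budget,
# then check a bill of hypothetical components against it.

BATTERY_MAH = 200.0        # e.g. a small wearable cell (hypothetical)
REQUIRED_HOURS = 7 * 24    # one week between charges

# Average current the whole system may draw, in mA
budget_ma = BATTERY_MAH / REQUIRED_HOURS

# Hypothetical components: (name, active mA, sleep mA, duty cycle 0..1)
components = [
    ("MCU",    4.0,  0.002, 0.05),
    ("radio", 12.0,  0.001, 0.01),
    ("sensor", 0.5,  0.001, 0.10),
]

# Duty-cycle-weighted average current per component, summed
total_ma = sum(active * duty + sleep * (1 - duty)
               for _, active, sleep, duty in components)

print(f"budget: {budget_ma:.3f} mA, estimate: {total_ma:.3f} mA, "
      f"{'OK' if total_ma <= budget_ma else 'over budget'}")
```

The duty-cycle terms are where the "not every chip has to be on 100% of the time" flexibility shows up: lowering a component's duty cycle can buy room for a higher-performance part elsewhere.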
FE: How iterative is the power management process?
Maclay: There may be iteration in the system design process. Once it is complete there should not be iteration in power design during the design of the product, unless mistakes were made. There may be some parts of the power that are difficult to estimate. This should be addressed during the system design or at the beginning of the product design by testing or calculation.
Walls: It is iterative insofar as the design will start out in simulation, which iteratively gets better. Moving to real hardware is another iteration.
Doyle: System design within a power budget has always required a series of trade-offs between functionality and battery life, so it has always led to an iterative process to determine which trade-offs are acceptable for the end user. But as we said previously, there is significantly more flexibility if you start with an architecture that’s already power- and data-efficient.
FE: How tricky is the testing?
Maclay: Some devices are tricky to test. For example, if you want to know the battery life, you can measure the current flow and calculate the battery life. In some small devices it is hard to get probes where you can measure the current, so you have to wait for the battery to discharge. If it takes days to discharge, you may need to set up a camera or other device to catch it when it stops functioning.
Walls: Testing is a matter of verifying power consumption against use cases. This can be done by tying hardware simulation or a real ammeter to the software debugger/trace tool.
FE: What about software versus hardware—does choosing the lowest power hardware possible get an engineer a large part of the way to meeting a power budget?
Maclay: A lot of software is about implementing what the hardware can do. If you plan to operate in a power-down mode, the software must put the device in that mode at the right time. The other part of software is estimating the power consumption of the data processing done by the software. When you have multiprocessors and devices that automatically power up and down various parts, it may be necessary to do a test to determine the power consumption. Often you can use rules of thumb based on experience, particularly for less critical subsystems.
Walls: Hardware sets the best possible outcome. Choosing the wrong hardware will ruin the design from a power perspective. Choosing the right hardware gives the software guys a chance to meet their goals.
Doyle: Data and power consumption go hand in hand, so the biggest bang for your buck as far as meeting a power budget is to focus the power consumption of the system on the data that is important—so only using the chips that are needed at any given time. Using the lowest-power always-on components is not really the answer. You should use the correct components, no matter the power level, but turn them off when not needed. That’s because you want to have the processing capability when relevant data is present, but not otherwise. And that’s why it’s so important to have the analogML chip let the system know when relevant data is present.
FE: Does the advent of edge processing just exacerbate the whole challenge of power management for design engineers?
Maclay: Edge processing can be a boon or a bane. An edge processor may be located where it can be plugged into power, so power management is a much smaller issue. In many cases processing must be done on a battery-operated device to reduce the amount of data being transmitted wirelessly. There is a trade-off: up to a point, compressing the data improves power consumption, but highly compressing the data can take more power than is saved in transmission.
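That compression trade-off can be put in toy-model form. The constants and the cost model below are hypothetical, not from the interview; real numbers come from radio datasheets and CPU profiling:

```python
# Sketch of the compress-vs-transmit trade-off: past some point, the CPU
# energy spent compressing exceeds the radio energy saved.
# All constants and the cost model are hypothetical.

TX_UJ_PER_BYTE = 2.0    # radio energy per transmitted byte (microjoules)
CPU_UJ_PER_BYTE = 0.5   # scaling constant for hypothetical compression cost

def net_energy_uj(payload_bytes: int, ratio: float) -> float:
    """Total energy to compress then transmit, in microjoules.

    ratio = compressed_size / original_size; 1.0 means no compression.
    CPU cost is modeled as growing sharply as compression gets aggressive.
    """
    cpu = CPU_UJ_PER_BYTE * payload_bytes * (1.0 / ratio - 1.0)
    tx = TX_UJ_PER_BYTE * payload_bytes * ratio
    return cpu + tx

for r in (1.0, 0.5, 0.1):
    print(f"ratio {r:.1f}: {net_energy_uj(10_000, r):,.0f} uJ")
# Under this model, moderate compression beats sending raw data, while
# aggressive compression costs more energy than it saves.
```

In this particular model the minimum sits at ratio = sqrt(CPU_UJ_PER_BYTE / TX_UJ_PER_BYTE) = 0.5, which is exactly the kind of sweet spot Maclay's point implies exists somewhere between "no compression" and "maximum compression."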
Walls: Wireless networking is certainly an issue from a power perspective.
FE: What is the most exciting development you’ve seen (or is coming…) in hardware, software, even battery technology that is going to help engineers to develop more energy efficient devices?
Maclay: Despite the huge investments in improving battery technology, the changes are small and incremental. I haven’t seen anything to get excited about in batteries. In our field of IoT, the biggest news is the availability of NB-IoT and LTE-M (or CAT-M) wireless communication for over 90% of the US population. Finally, you can design devices that communicate directly to the cloud without a gateway (such as a phone), and at power levels that are similar to Bluetooth.
Walls: Apple's recent success with the M1 chips shows that getting the right silicon helps a lot. I think the most exciting thing from the software POV is the realization that power is a software issue.
Doyle: The most exciting development for edge devices has been the development of small, low-power machine learning chips that can perform complex processing tasks at the edge—which is something that was formerly performed only in the cloud. The fact that we have now extended this capability to do complex inferencing on analog data opens up a whole new way for us to design devices with extended battery life by starting at the system architecture level—rather than at the component level.
Walt Maclay, Colin Walls, and Tom Doyle will be participating in a panel discussion on low power design at the Low Power Technologies Summit, Feb 16-17. Registration is free for the virtual event.