A Case for Smart Sensors

Ian Chen

Following the massive proliferation of sensors in smartphones, many predicted that mobile devices would soon see new generations of smart sensors. Over the last few years, sensors in smartphones have gotten smaller, consume less power, and perform better, but they haven't gotten much smarter: while the performance of individual sensors has increased, their functionality has not expanded. What happened?

Let's look at how mobile apps use the sensors they currently have. Hundreds of such apps, including many that use a smartphone's inertial motion sensors, are available on Google Play. Many of these apps provide graphical visualization of sensor data and sensor fusion results. Some are games, but all of them require active user involvement.

These apps run only when the system or application processor is already active, so it is cheaper and easier to use the processor for computation. Sensors need only a larger sample buffer to reduce the number of interrupts they send to the application processor, leaving all the "smarts" to the processor. A recent study we performed shows that typical smartphone users actively use their devices for only 6% of their waking day. Devoting a few extra processor cycles to sensor algorithms that run for just a fraction of that 6% makes no appreciable dent in the device's battery life. So smartphones continue to rely on commodity sensors controlled by a powerful, power-hungry processor.
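
A back-of-envelope calculation makes the point. The Python sketch below estimates the added drain; the battery capacity and algorithm power draw are illustrative assumptions, and only the 6% duty cycle comes from our study:

    # Rough cost of running sensor algorithms on the application
    # processor during active use. All figures except the 6% duty
    # cycle are illustrative assumptions, not measurements.
    BATTERY_WH = 5.5          # assumed ~1,500 mAh battery at 3.7 V
    WAKING_HOURS = 16.0       # assumed waking day
    ACTIVE_FRACTION = 0.06    # active use, per the study cited above
    ALGO_POWER_W = 0.020      # assumed extra draw of the algorithms

    battery_j = BATTERY_WH * 3600.0                     # Wh -> J
    active_s = WAKING_HOURS * 3600.0 * ACTIVE_FRACTION  # seconds of active use
    extra_j = ALGO_POWER_W * active_s                   # added energy per day

    print(f"extra drain: {extra_j:.0f} J, "
          f"{100.0 * extra_j / battery_j:.2f}% of the battery per day")

Under these assumptions the algorithms cost about 69 J, roughly 0.35% of the battery per day, which is lost in the noise of screen and radio power.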

The motivation for using smart sensors lies in reducing the power consumed by applications, not merely in the size of their market. In fact, we are already seeing smart, differentiated sensors appear in applications that require always-on deployment. Freescale Semiconductor's smart meter tampering sensor is a good recent example: the smart accelerometer includes algorithms that recognize the motions characteristic of user tampering, protecting smart meter installations.

The 6% usage model for smartphones is finally about to change, because smartphones are getting much smarter. Using data from low-power sensors, algorithms can better interpret users' contexts: their activities, situations, and surroundings. A context-aware smartphone can know, for example, that its user is seated and skip location-service updates to save power. It can know it is in a pocket and prevent inadvertent pocket dialing. In other words, it monitors the user's activity and overall context to proactively provide services, rather than requiring the user to feed the device every bit of input it needs.
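
In code, such a policy layer might look like the following minimal Python sketch; the Context values and the actions they trigger are hypothetical illustrations, not any platform's actual API:

    # Hypothetical context-driven policy: expensive subsystems are
    # driven by detected context instead of explicit user input.
    from enum import Enum, auto

    class Context(Enum):
        SEATED = auto()
        WALKING = auto()
        IN_POCKET = auto()
        UNKNOWN = auto()

    def apply_policy(context):
        """Map a detected user context to device behavior."""
        return {
            # skip location-service updates while the user is stationary
            "poll_location": context not in (Context.SEATED, Context.IN_POCKET),
            # lock the touchscreen to prevent pocket dialing
            "touch_enabled": context is not Context.IN_POCKET,
        }

    print(apply_policy(Context.SEATED))     # location polling off
    print(apply_policy(Context.IN_POCKET))  # location off, touch locked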

This context awareness will require some sensors to be always on. Because these sensors cannot rely solely on the application processor, they must have enough intelligence of their own to act autonomously while the application processor and the rest of the smartphone are in standby mode; in other words, they will need to be truly smart sensors. Smart sensors will be differentiated by various tradeoffs. Those with substantial computation facilities may be able to process sensor data and derive user context autonomously for long periods of time. Lower-cost smart sensors may require frequent intervention from the application processor to handle most of the context-detection processing.
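
One way to picture this tradeoff is as two splits of the same context-detection pipeline. The sketch below is purely illustrative; the class names, interfaces, and toy classifier are assumptions, not any vendor's design:

    # Two hypothetical smart-sensor tiers: one classifies context on
    # its own silicon, the other wakes the application processor (AP).
    def classify_context(samples):
        # Stand-in classifier: sustained motion energy means "WALKING".
        return "WALKING" if sum(samples) / len(samples) > 1.0 else "STILL"

    class AutonomousSmartSensor:
        """Runs the classifier on its own silicon; the AP stays asleep."""
        def read_context(self, samples):
            return classify_context(samples)

    class LowCostSmartSensor:
        """Buffers raw data, then wakes the AP to run the classifier."""
        def read_context(self, samples, wake_ap):
            return wake_ap(classify_context, samples)

    ap_wakeups = []
    def wake_ap(fn, *args):
        ap_wakeups.append(fn.__name__)  # every entry here costs AP power
        return fn(*args)

    window = [0.2, 1.8, 2.1, 0.4]  # made-up motion-energy samples
    print(AutonomousSmartSensor().read_context(window))        # no AP wakeup
    print(LowCostSmartSensor().read_context(window, wake_ap))  # AP woken
    print(f"AP wakeups: {len(ap_wakeups)}")

Both partitions produce the same answer; they differ in how often the application processor must leave standby, which is where the power goes.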

A smart sensor could also choose to optimize its performance for a specific set of contexts or a segment of use cases. For example, a smart sensor targeting outdoor exercise applications may be able to keep track of its user's steps and jogging cadences by itself, but require application processor intervention to figure out whether the user is seated.
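
The on-sensor half of that split can be remarkably simple. Here is a minimal step-and-cadence sketch in Python, assuming a fixed 50 Hz accelerometer stream and a hand-picked threshold; production pedometers add filtering and adaptive thresholds:

    # Minimal step detector: count rising-edge crossings of the
    # acceleration magnitude. Sample rate and threshold are assumed.
    import math

    SAMPLE_HZ = 50.0        # assumed accelerometer output data rate
    STEP_THRESHOLD = 11.0   # m/s^2; just above gravity's ~9.8 baseline

    def count_steps(samples):
        """samples: list of (x, y, z) accelerations in m/s^2."""
        steps, above = 0, False
        for x, y, z in samples:
            mag = math.sqrt(x * x + y * y + z * z)
            if mag > STEP_THRESHOLD and not above:
                steps += 1        # rising edge = one step
                above = True
            elif mag < STEP_THRESHOLD:
                above = False
        return steps

    def cadence_spm(samples):
        """Steps per minute over the whole sample window."""
        return count_steps(samples) * 60.0 * SAMPLE_HZ / len(samples)

    # two synthetic "steps" in one second of otherwise-still data
    still = [(0.0, 0.0, 9.8)] * 20
    step = [(0.0, 0.0, 12.0)] * 5
    window = still + step + still + step
    print(count_steps(window), "steps,", round(cadence_spm(window)), "spm")

Deciding that the user is seated, by contrast, calls for longer observation windows and fusion across several sensors, which is exactly the work this class of sensor hands back to the application processor.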

As the world becomes increasingly stitched together by sensors, context-aware devices represent a new frontier in smarter personal devices. While the cell phone may remain the most prevalent "body-worn sensor," smart sensors will find applications in devices ranging from mobile headsets, to exercise and sports equipment, to health monitors. Indeed, Sensor Platforms developed its FreeMotion Library so that context-detection and sensor-fusion capabilities can be distributed among the application processor, sensor hub, and smart sensors, allowing system designers to create different partitions and platforms optimized for their target market segments.

Ahead of us lies a very exciting new landscape of smart devices and autonomous applications. In five years, we are likely to look back at the rudimentary examples we think of now as context-aware applications and find them crude and limited. Personally, I can't wait.

ABOUT THE AUTHOR
Ian Chen is Executive Vice President for Sensor Platforms Inc., San Jose, CA. He can be reached at 408-850-9350 or [email protected].