The Future of Context-Interactive Devices

Ian Chen

Context awareness has become a hot topic in mobile devices. The idea seems self-evident: design these devices to be smart enough to adapt themselves to their users. In 2009, the Gartner Group defined context-aware computing as "the concept of leveraging information about the end user to improve the quality of interaction." Indeed, context awareness represents a fundamental shift in interactivity: instead of end users programming their devices with their preferences, devices observe their users, deduce what they want, and adapt themselves accordingly.

Applications that leverage usage patterns, emails, calendar entries, GPS locations, and motion sensors each address a piece of the context puzzle. But such basic data convey little about the user's context or intent on their own; they become useful only when combined with other information, such as the user's habits, calendar appointments, and purchase patterns. Together, these inputs can augment and complement each other, and thus provide the user with useful information before she even knows she needs it.

Today's smartphones include a large number of sensors that monitor the device and its environment, for example (a brief sketch of how an application might gather these readings follows the list):

  • Location: GPS, cell-tower trilateration, and/or Wi-Fi fingerprinting
  • Proximity: proximity sensors, Near Field Communication (NFC), and Bluetooth beacons
  • Orientation and motion: accelerometers, magnetometers, and gyroscopes
  • Lighting conditions: ambient light sensors
  • Altitude: pressure sensors
  • Voices and ambient noise: multiple microphones
  • Immediate surroundings: front and back cameras
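
To make this concrete, here is a minimal Python sketch of how an application might gather such readings into a single context snapshot. The reader functions and field names are placeholders invented for illustration; a real application would call the platform's sensor APIs instead.

    import time

    # Hypothetical low-level readers; on a real device these would wrap the
    # platform's location services and sensor drivers.
    def read_gps():            return {"lat": 37.33, "lon": -121.89}   # degrees
    def read_accelerometer():  return (0.1, 0.0, 9.8)                  # m/s^2
    def read_ambient_light():  return 320.0                            # lux
    def read_pressure():       return 1012.6                           # hPa

    def context_snapshot():
        """Collect one timestamped bundle of raw sensor readings.

        Higher layers (activity classifiers, recommenders) would consume
        snapshots like this rather than talking to sensors directly.
        """
        return {
            "timestamp": time.time(),
            "location": read_gps(),
            "acceleration": read_accelerometer(),
            "light_lux": read_ambient_light(),
            "pressure_hpa": read_pressure(),
        }

    if __name__ == "__main__":
        print(context_snapshot())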

Eventually, devices will become so in tune with their users that they will act as extensions of the users themselves: an electronic version of the vade mecum of an earlier time, the handbook people carried that contained all the information they would need that day.

For example, using combined sources would allow a travel assistant application to recommend a tapas restaurant to a user, and also provide walking directions to the restaurant along a scenic river-walk, because the app knows that the user enjoys Mediterranean cuisine, likes to take long walks, and has a free hour before his usual meal time.

Of course, computer programs have committed some prominent gaffes when trying to interpret usage patterns. In his classic 2002 article in The Wall Street Journal, "Oh No! My TiVo Thinks I'm Gay," Jeff Zaslow reported on the pitfalls of simplistic context-interpretation technologies that operate on insufficient sets of information. Context interpretation gone wrong can be inconvenient, counterproductive, and sometimes downright dangerous. Consider a scenario in which a company is designing a cellphone that calls the police automatically if the user is in a car accident. Given the tens of millions of cellphones the company wishes to sell, even a one percent false-alarm rate may be high enough to inundate the police with bad reports. Therefore the context interpreter must combine as many sensory inputs as possible so that multiple algorithms can be used to validate the "car accident" context.
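
To make the arithmetic concrete, one percent of just ten million phones is 100,000 spurious calls. One hedged way to push the false-alarm rate down is to require agreement among independent signals before acting, as in the Python sketch below; the sensor fields and thresholds are illustrative assumptions, not a production crash detector.

    def crash_detected(sample):
        """Require agreement from several independent signals before
        declaring a crash. The thresholds below are illustrative guesses.

        sample is a dict with:
          accel_peak_g   - peak acceleration magnitude in g
          speed_drop_kmh - drop in GPS speed over the last two seconds
          sound_db       - peak microphone level in dB SPL
        """
        impact  = sample["accel_peak_g"]   > 4.0    # hard jolt
        braking = sample["speed_drop_kmh"] > 40.0   # sudden loss of speed
        noise   = sample["sound_db"]       > 100.0  # loud bang

        # A single tripped signal is treated as a possible false alarm;
        # only corroborated evidence triggers an emergency call.
        return sum((impact, braking, noise)) >= 2

    # Example: a pothole hit produces a jolt but no speed drop or bang.
    print(crash_detected({"accel_peak_g": 5.1, "speed_drop_kmh": 2.0, "sound_db": 70.0}))   # False
    print(crash_detected({"accel_peak_g": 6.3, "speed_drop_kmh": 55.0, "sound_db": 110.0})) # True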

To get more and higher-order sensory information in mobile devices where battery life is an issue, we need to rely more heavily on low-power sensors, such as accelerometers, magnetometers, and pressure sensors. These sensors can measure signals of higher bandwidth and lower magnitude than gross motion alone. For example, the accelerometer can detect changes in human muscle tremor and resonance. The magnetometer can recognize signatures in the ambient magnetic field. The pressure sensor can detect the effects of air conditioning.
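
As one example of what such a higher-order measurement might look like, the Python sketch below estimates how much of an accelerometer trace's energy falls in the roughly 8-12 Hz band associated with hand tremor. The sampling rate, band edges, and synthetic test signal are assumptions chosen for illustration only.

    import numpy as np

    def tremor_band_ratio(accel, fs=100.0, band=(8.0, 12.0)):
        """Fraction of (non-DC) signal energy in the hand-tremor band.

        accel: 1-D array of acceleration magnitudes, sampled at fs Hz.
        A high ratio suggests the device is held in a hand rather than
        resting on a table or riding in a bag.
        """
        accel = np.asarray(accel, dtype=float)
        accel = accel - accel.mean()               # drop gravity/DC offset
        spectrum = np.abs(np.fft.rfft(accel)) ** 2
        freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        total = spectrum[1:].sum()                 # skip the DC bin
        return float(spectrum[in_band].sum() / total) if total > 0 else 0.0

    # Synthetic example: a weak 10 Hz tremor buried in noise.
    t = np.arange(0, 2.0, 1.0 / 100.0)
    signal = 0.05 * np.sin(2 * np.pi * 10.0 * t) + 0.01 * np.random.randn(t.size)
    print(round(tremor_band_ratio(signal), 2))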

By combining these higher-order inputs, a device can infer contextual information such as the following (a simple rule-based sketch follows the list):

  • How the device is carried: in hand, in a pocket, in a purse or briefcase, or in a holster
  • The user's posture: standing, sitting, walking, or running
  • How the user is traveling: in a car, in an elevator
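
The sketch below shows, in very rough form, how a handful of such low-power features might be fused into a carry-state guess. The feature names and thresholds are invented for illustration; a shipping classifier would be trained on real data rather than hand-tuned rules.

    def classify_carry_state(features):
        """Very rough rule-based fusion of low-power sensor features.

        features is a dict with:
          motion_rms_g   - RMS acceleration variation (gross motion)
          tremor_ratio   - share of energy in the hand-tremor band
          light_lux      - ambient light level
          proximity_near - True if the proximity sensor is covered
        """
        dark    = features["light_lux"] < 5.0
        covered = features["proximity_near"]
        moving  = features["motion_rms_g"] > 0.05
        in_hand = features["tremor_ratio"] > 0.3

        if dark and covered:
            return "in a pocket, purse, or holster"
        if in_hand and not dark:
            return "in hand"
        if moving:
            return "carried while walking"
        return "at rest (table or dock)"

    print(classify_carry_state({"motion_rms_g": 0.02, "tremor_ratio": 0.5,
                                "light_lux": 300.0, "proximity_near": False}))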

Because these sensors consume little power, they can stay on in the background to monitor and report on a user's context and activities at all times. Context-aware devices will not rely only on information gleaned from the user's email and calendar; they can also verify the user's activities and context in real time.

"The most profound technologies are those that disappear," said Mark Weiser, onetime chief scientist at Xerox PARC. "They are those that weave themselves into the fabric of everyday life." Mobile devices are on their way to meeting that definition, although we are still a long way from creating a digital Jiminy Cricket. Operating system advances and standardized application programming interfaces are still needed to enable a context aware paradigm. But it is clear that how we interact with our electronic devices will be changing soon, and for the better.

ABOUT THE AUTHOR
Ian Chen is Executive Vice President for Sensor Platforms Inc., San Jose, CA. He can be reached at 408-850-9350 or [email protected].