If you were to stop random passersby on the street and ask for an example of augmented reality (AR), it’s likely you’d hear mention of Pokémon Go. That answer disappoints me. Pokémon Go is merely a location-based mobile application, and it’s four years old. Far from representing the actual potential of AR today, it undersells it.
In reality, AR is a sophisticated interactive experience in which real-world objects are enhanced through advanced computer vision or depth sensing. Great AR mixes real and virtual objects and lets them genuinely interact, so that changes in physical objects are reflected in the virtual world.
So, why the confusion? At the root of the misconception, I think, is a larger problem within the AR space. For years, companies have touted the value of AR, promising game-changing experiences and novel applications. Yet, time and time again, these companies overpromise and underdeliver. Consider AR glasses. They’re often deemed the next frontier of mainstream AR and expected to overtake mobile applications in usage, but so far, the reality has not lived up to the hype. Headlines of failed products and companies have highlighted the shortcomings of current technology, particularly for consumer applications.
It’s not easy for companies to build AR glasses that are affordable, attractive, comfortable, and multi-purpose. Today’s devices typically offer limited functionality or remain tethered to local devices, introducing a host of power and design concerns. As a result, there’s a mismatch between what is actually possible and what consumers expect, and none of the attempts so far has delivered an application that both fills a compelling need and meets those expectations.
Despite years of struggle, I do think there's hope on the horizon for AR glasses. Recent strides in edge computing have unlocked faster and more efficient applications, paving the way for smaller devices with better accuracy, privacy, and power efficiency. I believe edge computing will be critical to the development of practical AR glasses and, in turn, finally enable compelling experiences that demonstrate the true potential of augmented reality. Here’s why:
Edge computing would allow for AR glasses that are actually comfortable to wear
AR wearables have earned a reputation as boxy, binocular-like headsets that consume your entire face. The industry has struggled to develop glasses that offer user comfort and convenience while also powering game-changing experiences. So, how do you create big experiences in small, functional packaging?
The reality is that visual data is far more storage- and bandwidth-hungry than the audio and accelerometer data found in today’s most common wearables. Processing that data on the device itself, in real time, reduces the need for storage and bandwidth and improves power efficiency, enabling designers to work with smaller batteries and deliver longer runtimes between charges. For consumers, this means we’ll finally be able to say goodbye to the clunky headgear of the past and find AR glasses that are comfortable, maybe even stylish, too.
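To put rough numbers on that gap, here is a back-of-envelope sketch in Python. The resolutions and sample rates are illustrative assumptions, not measurements from any particular device:

```python
# Back-of-envelope comparison of raw sensor data rates (illustrative
# assumptions, not figures from any specific AR product).

# Uncompressed 1080p color video at 30 fps, 3 bytes per pixel
video_bytes_per_sec = 1920 * 1080 * 3 * 30   # ~187 MB/s

# 16-bit mono audio at 16 kHz, typical for wearable voice capture
audio_bytes_per_sec = 2 * 16_000             # 32 KB/s

# 3-axis accelerometer, 16-bit samples at 100 Hz
imu_bytes_per_sec = 3 * 2 * 100              # 600 B/s

print(f"video: {video_bytes_per_sec / 1e6:.0f} MB/s")
print(f"audio: {audio_bytes_per_sec / 1e3:.0f} KB/s")
print(f"IMU:   {imu_bytes_per_sec} B/s")
print(f"video is ~{video_bytes_per_sec / audio_bytes_per_sec:,.0f}x the audio rate")
```

Even with aggressive compression, a camera stream dwarfs the audio and motion data that today’s earbuds and watches handle, which is why processing vision on the device, rather than storing or transmitting it, changes the design equation.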
Bringing power on-device enables more frictionless experiences
AR developers have put a lot of energy toward creating wearable applications that are connected to a local device, like a laptop, phone or compute pack. While off-device computing has enabled more powerful AR experiences, it limits ease of use. Wired connections are cumbersome and seem anachronistic compared to the wireless earbuds we’ve all become accustomed to. Wireless connections mean higher power consumption, more heat, and reduced battery life, while still requiring multiple devices for one experience.
Over the last two years, the edge AI space has grown rapidly, with new hardware architectures and machine learning methodologies that bring the power and performance of data centers directly into devices. With edge AI, devices no longer have to send all of their data off for processing or rely on wireless network availability, enabling more consistent experiences regardless of location. AR wearables using edge AI would save power and extend battery life by processing certain information on the device while relying on a phone for higher-level processing, storage, and connection to the internet. The shift toward on-device processing will introduce more seamless AR experiences for consumers that don’t compromise on performance or features.
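As a sketch of that split, consider the following. Everything here is hypothetical, the function names and detection values are invented for illustration, and the on-device model is a stand-in, but it shows the key idea: the glasses consume raw frames locally and hand the phone only compact metadata.

```python
import json

# Hypothetical division of labor between AR glasses and a paired phone
# (a sketch of the architecture described above, not a real SDK).

FRAME_BYTES = 1920 * 1080 * 3  # raw frame the glasses would otherwise transmit

def detect_objects_on_device(frame_bytes: int) -> list[dict]:
    """Stand-in for an on-device vision model: consumes a raw frame
    locally and emits only compact detections."""
    return [
        {"label": "door", "bbox": [412, 80, 640, 560], "score": 0.93},
        {"label": "person", "bbox": [900, 200, 1100, 700], "score": 0.88},
    ]

def send_to_phone(detections: list[dict]) -> int:
    """The phone handles storage, higher-level logic, and cloud
    connectivity; returns the size of the payload actually sent."""
    return len(json.dumps(detections).encode())

detections = detect_objects_on_device(FRAME_BYTES)
sent = send_to_phone(detections)
print(f"raw frame: {FRAME_BYTES:,} bytes; metadata sent: {sent} bytes")
```

The wireless link carries a few hundred bytes of metadata per frame instead of megabytes of pixels, which is where the power and battery savings come from.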
Edge computing enables essential levels of speed and accuracy
AR is extremely complex. To realistically and accurately present an enhanced version of the user’s surroundings, the device must synthesize and understand data from multiple sensors and react in real time to the environment. This is extremely difficult to do if every step requires sending data off the device for processing. Consider AR glasses that are designed to train doctors for procedures. Even a barely perceptible delay in rendering an image can increase the risk of error. Speed and accuracy matter.
Edge computing enables devices to react to their environment immediately, without the delay of transferring data elsewhere. AR glasses that can process data from multiple sensors will be able to portray a more accurate, augmented rendering of someone’s surroundings. We may even approach superhuman abilities by processing high-resolution images beyond what human eyes can see, or incorporating sensors that use infrared or radar. Think Geordi’s visor, as seen on Star Trek: The Next Generation. In short, edge computing will introduce the speed and accuracy that would make these devices more valuable across a wide range of applications, making the promise of game-changing AR experiences a reality.
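A rough latency budget makes the point concrete. The per-stage timings below are illustrative assumptions, and the roughly 20 ms motion-to-photon target is a commonly cited comfort threshold rather than a hard standard:

```python
# Rough latency budget for updating an AR overlay (illustrative numbers).

MOTION_TO_PHOTON_BUDGET_MS = 20  # commonly cited comfort threshold

# Hypothetical per-stage costs when inference happens off-device
wireless_round_trip_ms = 25      # hop to a phone or laptop and back
remote_inference_ms = 5

# Hypothetical cost with on-device (edge) inference
edge_inference_ms = 8

offload_total = wireless_round_trip_ms + remote_inference_ms
edge_total = edge_inference_ms

for name, total in [("offload", offload_total), ("on-device", edge_total)]:
    verdict = "within" if total <= MOTION_TO_PHOTON_BUDGET_MS else "exceeds"
    print(f"{name}: {total} ms ({verdict} the {MOTION_TO_PHOTON_BUDGET_MS} ms budget)")
```

Under these assumptions, the wireless round trip alone consumes more than the entire budget, while on-device inference leaves headroom for sensing and rendering.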
Good AR that is easy to wear and use, offers decent battery life, and can do a range of useful or entertaining things is really hard to achieve. But with edge computing, we are on our way to meeting the criteria that will enable more compelling applications and support wider adoption. And with that, we’ll finally be able to demonstrate that AR offers far more than Pokémon Go.
Author David McIntyre is VP of marketing at Perceive, where he oversees all aspects of adoption and expansion of the company’s first product, Ergo, an edge inference chip.