Partner content
Robots, in general, need vision. All sorts of robots, from industrial equipment to advanced driver assistance systems (ADAS), have relied heavily on high-resolution cameras to sense the world around them. But as we demand more and more of our inventions—industrial machines that operate without humans, cellphone apps that deliver immersive virtual reality, or fully autonomous vehicles in place of ADAS—the algorithms that control these devices need the ability to sense their surroundings in the third dimension.
One solution—arguably the best for high performance and reliability—depends on light detection and ranging (lidar). Lidar scans its field of view with a laser beam, capturing the reflected light in a special sensor chip that also records how long the laser pulse took to travel from sensor to scene and back—thus measuring distance. The result is, in effect, a 3D image of the field of view.
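The ranging principle described above can be sketched in a few lines: the sensor measures the round-trip travel time of the pulse, and distance follows from the speed of light. This is a minimal illustrative sketch; the function name and the example timing value are assumptions, not part of any particular lidar product's API.

```python
# Sketch of lidar time-of-flight ranging: distance is derived from the
# round-trip travel time of a laser pulse at the speed of light.
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """Convert a measured round-trip time to a one-way distance.

    The pulse travels out to the target and back, so the one-way
    distance is half the total path the light covers.
    """
    return C * round_trip_time_s / 2.0

# A pulse that returns after 200 nanoseconds corresponds to a target
# roughly 30 meters away.
print(f"{tof_distance_m(200e-9):.2f} m")  # → 29.98 m
```

Note the nanosecond timescales involved: resolving distance to a few centimeters requires timing the return pulse to within a few hundred picoseconds, which is why lidar depends on specialized sensor chips rather than ordinary camera electronics.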
Smaller systems
Great for geographic mapping or for self-driving taxis. But small systems such as warehouse robots or delivery drones have similar needs, over shorter ranges. They must maintain an internal map of their surroundings in order to navigate and avoid collisions. And stationary industrial equipment needs to monitor a fixed danger zone for potential intrusion by a distracted pedestrian or an incautious hand. Back on the highway, ADAS can also benefit from this kind of 3D sensing.
Figure 1: ADAS in a vehicle
At the extreme end of small size, low power, and low cost, smartphones need similar mapping techniques to support augmented or virtual reality apps. A virtual reality game needs a map of its surroundings so that engaged players don't walk into furniture, or worse. An augmented reality app may actually overlay virtual objects on real-world objects (Figure 2). In each of these cases, lidar can help identify and locate objects.