Engineers at the Massachusetts Institute of Technology (MIT) have developed a system that senses tiny changes in shadows on the ground to determine whether a moving object is coming around a corner, in an effort to improve safety for autonomous vehicles.
In a paper being presented at the upcoming International Conference on Intelligent Robots and Systems (IROS), the researchers describe successful experiments with an autonomous car driving around a parking garage and an autonomous wheelchair navigating hallways. When sensing and stopping for an approaching vehicle, the car-based system beats traditional LiDAR, which can detect only objects in its direct line of sight, by more than half a second.
“For applications where robots are moving around environments with other moving objects or people, our method can give the robot an early warning that somebody is coming around the corner, so the vehicle can slow down, adapt its path, and prepare in advance to avoid a collision,” said the study’s co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “The big dream is to provide ‘X-ray vision’ of sorts to vehicles moving fast on the streets.”
Currently, the system has only been tested in indoor settings. Robotic speeds are much lower indoors, and lighting conditions are more consistent, making it easier for the system to sense and analyze shadows.
For their work, the researchers built on their system, called “ShadowCam,” that uses computer-vision techniques to detect and classify changes to shadows on the ground. MIT professors William Freeman and Antonio Torralba, who are not co-authors on the IROS paper, collaborated on the earlier versions of the system, which were presented at conferences in 2017 and 2018.
ShadowCam uses sequences of video frames from a camera targeting a specific area, such as the floor in front of a corner. It detects changes in light intensity over time, from image to image, that may indicate something moving away or coming closer. Some of those changes may be difficult or impossible to see with the naked eye, and depend on various properties of the object and the environment. The system then processes that information and classifies each image as containing either a stationary object or a dynamic, moving one. When it reaches a dynamic image, it reacts accordingly.
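In rough terms, that static-versus-dynamic decision can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the paper's classifier; the threshold value and function name are assumptions for the example:

```python
import numpy as np

def classify_patch(prev_patch: np.ndarray, curr_patch: np.ndarray,
                   threshold: float = 2.0) -> str:
    """Label a region-of-interest patch 'dynamic' if its mean absolute
    intensity change between consecutive frames exceeds a threshold.
    The threshold here is an illustrative placeholder, not a paper value."""
    diff = np.abs(curr_patch.astype(np.float32) - prev_patch.astype(np.float32))
    return "dynamic" if float(diff.mean()) > threshold else "static"
```

In practice the system watches the same floor patch across many frames, so the decision rests on a sequence of such comparisons rather than a single pair.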
To adapt ShadowCam for autonomous vehicles, the researchers developed a process that combines image registration with a visual-odometry technique called direct sparse odometry (DSO). Often used in computer vision, image registration essentially overlays multiple images to reveal variations between them. Medical image registration, for instance, overlaps scans to compare and analyze anatomical differences.
As ShadowCam takes in image sequences of a region of interest, it uses the DSO-based image-registration method to overlay all the images from the same viewpoint of the robot. Even as the robot moves, the system can zero in on the exact patch of pixels where a shadow is located, which helps it detect subtle deviations between images.
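ShadowCam derives this alignment from visual odometry, but the general idea of warping each incoming frame into a common viewpoint can be sketched with a standard feature-based homography in OpenCV. The sketch below is a generic stand-in for that registration step, not the paper's DSO pipeline, and the function name and parameter values are illustrative:

```python
import cv2
import numpy as np

def register_to_reference(reference: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Warp `frame` into the viewpoint of `reference` so the same floor patch
    lands on the same pixels. A feature-based stand-in for ShadowCam's
    DSO-derived alignment."""
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    frm_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(1000)
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    kp_frm, des_frm = orb.detectAndCompute(frm_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_frm, des_ref), key=lambda m: m.distance)[:100]
    src = np.float32([kp_frm[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Estimate the homography robustly, then map the frame onto the reference view.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```

Once every frame is registered to the same viewpoint, pixel-wise differences reflect shadow changes rather than the robot's own motion.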
Next is signal amplification, in which pixels that may contain shadows get a boost in color that improves the signal-to-noise ratio. This makes extremely weak signals from shadow changes far more detectable.
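One simple way to picture this amplification is to exaggerate each pixel's deviation from a per-pixel baseline, such as a running mean of the registered frames. The following sketch assumes that framing; the gain value and baseline choice are illustrative, not parameters reported by the researchers:

```python
import numpy as np

def amplify_shadow_signal(patch: np.ndarray, baseline: np.ndarray,
                          gain: float = 8.0) -> np.ndarray:
    """Exaggerate small color deviations from a per-pixel baseline (e.g. a
    running mean of the registered frames) so faint shadow motion stands out.
    The gain value is an illustrative assumption, not a paper parameter."""
    delta = patch.astype(np.float32) - baseline.astype(np.float32)
    boosted = baseline.astype(np.float32) + gain * delta
    return np.clip(boosted, 0, 255).astype(np.uint8)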
If the boosted signal reaches a certain threshold — based partly on how much it deviates from other nearby shadows — ShadowCam classifies the image as “dynamic.” Depending on the strength of that signal, the system may tell the robot to slow down or stop.
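A rough sketch of that decision logic, assuming the signal is judged against the variability of other nearby shadow regions as the article describes; the multipliers and the action names are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def react_to_signal(amplified_delta: np.ndarray, nearby_shadow_std: float,
                    slow_factor: float = 3.0, stop_factor: float = 6.0) -> str:
    """Map the amplified shadow signal to a vehicle action by comparing it
    against the variability of nearby shadows. Both multipliers are
    hypothetical placeholders, not values reported by the researchers."""
    strength = float(np.abs(amplified_delta).mean())
    if strength > stop_factor * nearby_shadow_std:
        return "stop"
    if strength > slow_factor * nearby_shadow_std:
        return "slow"
    return "continue"
```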
In one test, the researchers implemented ShadowCam in an autonomous car in a parking garage with the headlights turned off, mimicking nighttime driving conditions. They compared the system's car-detection times against those of LiDAR. In an example scenario, ShadowCam detected the car turning around pillars about 0.72 seconds faster than LiDAR did. Moreover, because the researchers had tuned ShadowCam specifically to the garage's lighting conditions, the system achieved a classification accuracy of around 86%.
Next, the researchers are developing the system further to work in different indoor and outdoor lighting conditions. In the future, there could also be ways to speed up the system’s shadow detection and automate the process of annotating targeted areas for shadow sensing.