One normally does not associate spiders with high technology, but these lowly arachnids have depth perception that enables them to accurately pounce on targets from several body lengths away.
Using spiders as inspiration, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a compact depth sensor that could be used on microrobots, in small wearable devices, or in lightweight virtual and augmented reality headsets. The device combines a multifunctional, flat metalens with an ultra-efficient algorithm to measure depth in a single shot.
"Evolution has produced a wide variety of optical configurations and vision systems that are tailored to different purposes," said Zhujun Shi, a Ph.D. candidate in the Department of Physics and co-first author of the paper, in an article appearing on the site of Harvard John A. Paulson School of Engineering and Applied Sciences. "Optical design and nanotechnology are finally allowing us to explore artificial depth sensors and other vision systems that are similarly diverse and effective."
The research is published in Proceedings of the National Academy of Sciences (PNAS).
The scientists developed the technology based on the way jumping spiders judge distance. Each of a spider's principal eyes has a few semi-transparent retinae arranged in layers, and these retinae capture multiple images of the same scene with different amounts of blur. In computer vision, this type of distance calculation is known as depth from defocus. Until now, however, performing those calculations has required large cameras with motorized internal components that capture differently focused images over time, limiting the speed and practical applications of the sensor.
"That matching calculation, where you take two images and perform a search for the parts that correspond, is computationally burdensome," said Todd Zickler, the William and Ami Kuan Danoff Professor of Electrical Engineering and Computer Science at SEAS and co-senior author of the study. "Humans have a nice, big brain for those computations but spiders don't."
The scientists, building on existing metalens research, designed a metalens that can simultaneously produce two images with different blur.
"Instead of using layered retina to capture multiple simultaneous images, as jumping spiders do, the metalens splits the light and forms two differently-defocused images side-by-side on a photosensor," said Shi.
An ultra-efficient algorithm, developed by Zickler's group, then interprets the two images and builds a depth map to represent object distance.
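To illustrate the depth-from-defocus principle behind that algorithm, here is a minimal sketch in Python. It is not the team's actual method: it simulates a single step edge seen through two different amounts of Gaussian blur (standing in for the metalens's two side-by-side images, with illustrative blur values) and recovers the relative blur, which is the cue a depth map is built from.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel with radius ~4 sigma."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(signal, sigma):
    """Defocus the signal by convolving with a Gaussian."""
    return np.convolve(signal, gaussian_kernel(sigma), mode="same")

# A step edge stands in for a feature in the scene.
edge = np.zeros(201)
edge[100:] = 1.0

# Two observations of the same edge with different defocus,
# analogous to the two differently defocused images the
# metalens forms on the photosensor. The sigmas are made up.
sigma_a, sigma_b = 2.0, 4.0
img_a = blur(edge, sigma_a)
img_b = blur(edge, sigma_b)

# The peak gradient of a Gaussian-blurred step scales as 1/sigma,
# so the ratio of peak gradients recovers the relative blur,
# which in a calibrated system maps to object distance.
ratio = np.gradient(img_a).max() / np.gradient(img_b).max()
print(ratio)  # ≈ sigma_b / sigma_a = 2.0
```

A real sensor repeats this kind of comparison over local patches of the two images to produce a per-pixel depth map, which is where the efficiency of the group's algorithm matters.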
"Being able to design metasurfaces and computational algorithms together is very exciting," said Qi Guo, a Ph.D. candidate in Zickler's lab and co-first author of the paper. "This is new way of creating computational sensors, and it opens the door to many possibilities."