Better Vision, Courtesy of Bees

By Melanie Martella

Machine vision is evolving and becoming more important—and not only for the industrial automation and inspection applications Barbara Goode recently highlighted in her blog entry "What the New Direction in Vision Sensing Means for You." Autonomous robotics research—building robots that can find their own way through an environment—is also turning to vision.

3D is Hard
The idea of adding machine vision to autonomous robots isn't a new one; it's been a hot research topic for a long time. Unfortunately, most vision analysis (the ability to make sense of an image for navigation and obstacle avoidance) is very computationally intensive, and heavy computation is a real problem when the results are needed for navigation. A robot that travels faster than it can work out where the obstacles are—and where it is itself—is not a successful robot.

Taking a page from nature's book, Mexican researchers Gustavo Olague and Cesar Puente have developed a stereoscopic camera system that they say analyzes images faster. It mimics the way foraging honeybees tell their compatriots where to find the really good pollen: 'virtual honeybees' swarm over the stereo camera images to identify areas of interest, clustering around features such as edges, which the system can then render in 3D. If you read this article on the New Scientist Web site you can see sample images and read a longer explanation of their approach.
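To get a feel for the bee metaphor, here's a minimal sketch—emphatically not the researchers' actual algorithm. The toy image, the hill-climbing rule, and the "recruitment" step are all invented for illustration; the point is just that simple agents attracted to edge-like pixels, plus a waggle-dance-style recruitment of unsuccessful agents, end up clustered on the interesting parts of the image instead of processing every pixel.

```python
import random

random.seed(7)

# Toy 10x10 grayscale "image": a bright square on a dark background.
W = H = 10
image = [[255 if 3 <= x <= 6 and 3 <= y <= 6 else 0 for x in range(W)]
         for y in range(H)]

def edge_strength(x, y):
    """Crude gradient magnitude: how edge-like pixel (x, y) is."""
    gx = image[y][min(x + 1, W - 1)] - image[y][max(x - 1, 0)]
    gy = image[min(y + 1, H - 1)][x] - image[max(y - 1, 0)][x]
    return abs(gx) + abs(gy)

# Scatter "virtual bees"; one is seeded inside the square so at least
# one scout is guaranteed to reach an edge.
bees = [(4, 4)] + [(random.randrange(W), random.randrange(H))
                   for _ in range(29)]

for _ in range(20):
    # Each bee climbs toward the strongest edge in its 8-neighborhood.
    moved = []
    for (x, y) in bees:
        best = (x, y)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (0 <= nx < W and 0 <= ny < H
                        and edge_strength(nx, ny) > edge_strength(*best)):
                    best = (nx, ny)
        moved.append(best)
    # "Recruitment": bees that found nothing join a successful scout,
    # loosely mimicking the waggle dance.
    scouts = [b for b in moved if edge_strength(*b) > 0]
    bees = [b if edge_strength(*b) > 0 else random.choice(scouts)
            for b in moved]

on_edges = sum(1 for b in bees if edge_strength(*b) > 0)
print(f"{on_edges}/{len(bees)} bees clustered on edge pixels")
```

After a few iterations every bee sits on an edge of the square, so any later 3D rendering step only has to look at those few dozen pixels rather than the whole frame.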

Another, similar research effort, the Fly algorithm, also works on pairs of stereoscopic images. In this approach, virtual 'flies' cluster along the edges of obstacles, allowing those obstacles to be rendered in 3D. Again, by concentrating on the really important things in the images (people, walls, other obstacles) rather than processing the entire camera images, the vision system can enable real-time obstacle avoidance.
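The core trick behind this family of methods can be sketched in a few lines. Treat each "fly" as a 3D hypothesis (here just an image column and a depth), project it into both images of a rectified stereo pair, and score it by how similar the two image patches around its projections look; flies whose projections match are probably sitting on a real surface. Everything concrete below—the 1D toy images, the focal length and baseline, the mutation and selection scheme—is my own simplification, not the published algorithm.

```python
import random

random.seed(1)

# Rectified stereo: a point at depth Z appears shifted between the left
# and right image rows by disparity d = F * B / Z.
F, B = 50.0, 1.0            # focal length and baseline (arbitrary units)
TRUE_Z = 5.0                # depth of the single toy object
D = round(F * B / TRUE_Z)   # true disparity: 10 pixels

# Toy 1-D image pair: a randomly textured object on a flat background.
left = [10.0] * 100
for i in range(40, 60):
    left[i] = random.uniform(20.0, 255.0)
right = [10.0] * 100
for i in range(40, 60):
    right[i - D] = left[i]  # same texture, shifted by the true disparity

def fitness(x, z):
    """Lower is better: SSD between small windows around the fly's
    projections in the left and right images."""
    d = round(F * B / z)
    xl, xr = x, x - d
    if not (2 <= xl <= 97 and 2 <= xr <= 97):
        return float("inf")
    return sum((left[xl + k] - right[xr + k]) ** 2
               for k in (-2, -1, 0, 1, 2))

# Flies are (image column, depth) hypotheses seeded across the scene.
flies = [(x, 0.5 * i) for x in range(43, 58, 3) for i in range(2, 41)]

# Evolve: keep the best flies (elitism), spawn depth-mutated children.
for _ in range(30):
    flies.sort(key=lambda f: fitness(*f))
    parents = flies[:40]
    children = [(x, max(1.0, z + random.gauss(0.0, 0.3)))
                for (x, z) in (random.choice(parents) for _ in range(80))]
    flies = parents + children

best_x, best_z = min(flies, key=lambda f: fitness(*f))
print(f"best fly: column {best_x}, depth {best_z:.2f} (true depth {TRUE_Z})")
```

The surviving flies concentrate at depths where the left and right patches agree—that is, on the object—which is exactly the "cluster on what matters, ignore the rest" behavior that makes these methods cheap enough for real-time use.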

For a good overview of why you'd want to copy honeybees for robot navigation, take a gander at this Web page.

Not Yet Ready for Prime Time
Right now these approaches are still in the research project stage. However, the algorithms will get better, and processors just keep getting faster. Robots with 'eyes' will happen; it's just a question of when. Could these 3D vision approaches—once the bugs are worked out (sorry, I couldn't resist)—migrate into the world of industrial automation? Only time will tell.
