Vision sensor uses event-based, not frame-based, algorithms

Prophesee SA has reportedly introduced the industry's first standard-package vision-sensing chip built on event-based vision technology, considered an advance over frame-based approaches. The chip, under development for five years, is aimed at camera developers and is intended to enable next-generation vision in industrial automation and IoT systems such as robots, inspection equipment, and monitoring and surveillance devices.

The part leverages Prophesee's neuromorphic vision technology to deliver highly efficient operation across a range of use cases, including ultra-high-speed part counting, vibration measurement and monitoring, and kinematic monitoring for predictive maintenance.
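
The vibration-monitoring use case is straightforward to illustrate. The sketch below is not Prophesee code: it assumes a generic stream of per-pixel event timestamps (synthesized here as a rate-modulated random process) and recovers the dominant vibration frequency from the oscillation of the event rate. The function name and bin width are illustrative choices, not SDK parameters.

```python
import numpy as np

def dominant_frequency(event_times_s, bin_width_s=1e-4):
    """Estimate a dominant vibration frequency from the event
    timestamps of one pixel (or a small region of interest)."""
    duration = event_times_s[-1] - event_times_s[0]
    n_bins = int(duration / bin_width_s)
    # Turn the asynchronous timestamps into a regular event-rate series.
    rate, _ = np.histogram(event_times_s, bins=n_bins)
    # A vibrating edge sweeps back and forth across the pixel, so the
    # event rate oscillates at the vibration frequency; find the peak.
    spectrum = np.abs(np.fft.rfft(rate - rate.mean()))
    freqs = np.fft.rfftfreq(n_bins, d=bin_width_s)
    return freqs[spectrum.argmax()]

# Synthetic stream: event rate modulated by a 120 Hz vibration.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 100_000))
events = t[rng.uniform(size=t.size) < 0.5 * (1 + np.sin(2 * np.pi * 120 * t))]
print(f"estimated frequency: {dominant_frequency(events):.1f} Hz")
```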

The chip, available in a 13 × 15 mm mini-PBGA package, integrates Prophesee's third-generation CMOS image sensor (CIS) vision module. It features 640 × 480-pixel resolution with 15-μm pixels in a 3/4-in. optical format, and is manufactured in a 0.18-μm specialized CIS process.
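
For readers checking the datasheet numbers, the quoted geometry is self-consistent, as the quick calculation below shows (the "16 mm per optical-format inch" rule of thumb is a vidicon-era industry convention, not a Prophesee figure):

```python
# 640 x 480 pixels at a 15-um pitch -> active-area dimensions.
width_mm = 640 * 0.015                              # 9.6 mm
height_mm = 480 * 0.015                             # 7.2 mm
diagonal_mm = (width_mm**2 + height_mm**2) ** 0.5   # 12.0 mm

# Optical format (in "inches") is conventionally diagonal_mm / 16.
print(diagonal_mm, diagonal_mm / 16)   # 12.0 0.75 -> the quoted 3/4 in.
```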

"This is a major milestone for Prophesee and underscores the progress in commercializing our pioneering Event-Based Vision sensing technology. After several years of testing and prototyping, we can now offer product developers an off-the-shelf means to take advantage of the benefits of our machine vision inventions that move the industry out of the traditional frame-based paradigm for image capture,” said Luca Verre, co-founder and CEO of Prophesee, in a statement.

In Prophesee Metavision sensors, each pixel is independent and asynchronous, activating only when it senses a change in the scene: a movement, an event. According to the company, this allows major reductions in the power, latency, and data-processing requirements imposed by traditional frame-based systems. It reportedly also gives the sensors a much higher dynamic range than is commonly associated with high-speed vision, allowing cost-efficient sensors and systems to record events that would otherwise require conventional cameras running at 10,000 images/s.
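
To make that data model concrete, here is a minimal frame-based emulation of what such a pixel array outputs. A real event sensor does this asynchronously in per-pixel analog circuitry rather than by differencing sampled frames; the threshold value and function name below are illustrative assumptions, not Prophesee parameters.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.15):
    """Emulate event pixels: emit (x, y, t, polarity) whenever a
    pixel's log intensity moves more than `threshold` away from its
    own reference level. Unchanging pixels emit nothing, which is
    where the power, latency, and bandwidth savings come from."""
    log_ref = np.log1p(frames[0].astype(np.float64))
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log1p(frame.astype(np.float64))
        delta = log_now - log_ref
        ys, xs = np.nonzero(np.abs(delta) > threshold)
        for y, x in zip(ys, xs):
            events.append((x, y, t, 1 if delta[y, x] > 0 else -1))
            log_ref[y, x] = log_now[y, x]  # only firing pixels reset
    return events

# A static scene with one moving bright dot yields a sparse stream:
# two events per frame here, versus 307,200 pixels per full frame.
frames = np.zeros((10, 480, 640), dtype=np.uint8)
for i in range(10):
    frames[i, 240, 100 + i] = 255
print(len(frames_to_events(frames, timestamps=range(10))))
```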

System developers can use the sensor to improve existing industrial applications and, in some cases, create entirely new ones. Examples include accelerating quality assessment on production lines; positioning, sensing, and movement guidance that lets robots collaborate more safely with humans; and equipment-condition monitoring (e.g., vibration or kinematic deviations) that suits the system for predictive maintenance and reduced machine downtime.

The sensor is supported by a comprehensive software development kit (SDK), a full set of drivers, data-recording tools, and an online knowledge center.
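
For developers, a typical loop over recorded events looks something like the sketch below. It is written in the style of the Python bindings Prophesee later shipped with its Metavision SDK; the module path, class name, field names, and file name are our assumptions based on public SDK documentation, so verify them against the version you install.

```python
# Assumed API, in the style of Prophesee's Metavision SDK Python
# bindings -- check module and class names against the SDK docs.
from metavision_core.event_io import EventsIterator

# Iterate over a recording (or a live camera) in 10 ms slices.
for events in EventsIterator(input_path="recording.raw", delta_t=10000):
    # `events` is a structured array with x, y, polarity, and
    # microsecond-timestamp fields; most slices are small because
    # only changing pixels produce data.
    if events.size:
        print(events.size, "events ending at t =", events["t"][-1], "us")
```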