AV Perception Engine Uses AI Tools

VAYAVISION’s VAYADrive 2.0 is an AV perception software engine that fuses raw sensor data with AI tools to create an accurate 3D environmental model of the area around the self-driving vehicle. The software is said to break new ground in several categories of AV environmental perception: raw data fusion, object detection, classification, SLAM, and movement tracking -- providing crucial information about dynamic driving environments, enabling safer and more reliable autonomous driving, and making better use of cost-effective sensor technologies.


The application combines state-of-the-art AI, analytics, and computer vision technologies with computational efficiency to scale up the performance of AV sensor hardware. It is compatible with a wide range of cameras, LiDARs, and radars.


VAYADrive 2.0 addresses a key challenge: the detection of “unexpected” objects. Roads are full of “unexpected” objects that are absent from training data sets, even when those sets are captured over millions of kilometers of driving. Thus, systems based mainly on deep neural networks fail to detect the “unexpected.”
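
One way to see why raw depth data helps here: an object that appears in no training set still occupies physical space, so a purely geometric check on a dense depth map can flag it without any classifier. The sketch below is a minimal, hypothetical illustration of such class-agnostic obstacle flagging; the function name, the ground-range input, and the threshold are assumptions for illustration, not VAYADrive's actual detection logic.

```python
import numpy as np

def flag_unexpected_obstacles(dense_depth, ground_range, min_gap_m=0.5):
    """Class-agnostic obstacle mask from a dense per-pixel depth map.

    dense_depth  -- measured range (meters) for every camera pixel
    ground_range -- expected range to the flat road surface along each pixel's ray
    A pixel whose measured range is noticeably shorter than the range to the
    road surface indicates something standing on the road, whether or not a
    neural network recognizes it.
    """
    gap = ground_range - dense_depth
    return gap > min_gap_m  # boolean (h, w) mask of potential obstacles
```
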


No single type of sensor is enough to detect objects reliably: cameras do not capture depth, and distance sensors such as LiDARs and radars have very low resolution. VAYADrive 2.0 up-samples the sparse samples from distance sensors and assigns distance information to every pixel in the high-resolution camera image. This gives the autonomous vehicle crucial information about an object’s size and shape, lets it separate even small obstacles on the road, and accurately defines the shapes of vehicles, humans, and other objects on the road. For more details, check out VAYAVISION.
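
The per-pixel depth assignment can be pictured as projecting the sparse distance samples into the camera frame and interpolating them across the full image grid. The sketch below is a minimal Python (NumPy/SciPy) illustration of that idea; the function name, the pinhole camera model, and the nearest-neighbour interpolation are assumptions made for the example, not VAYAVISION's actual up-sampling method.

```python
import numpy as np
from scipy.interpolate import griddata

def upsample_depth(lidar_points, intrinsics, image_shape):
    """Project sparse LiDAR points (x, y, z in the camera frame) into the
    image and interpolate a dense per-pixel depth map."""
    h, w = image_shape
    # Keep only points in front of the camera.
    pts = lidar_points[lidar_points[:, 2] > 0.1]

    # Pinhole projection onto the image plane using the intrinsics matrix K.
    u = intrinsics[0, 0] * pts[:, 0] / pts[:, 2] + intrinsics[0, 2]
    v = intrinsics[1, 1] * pts[:, 1] / pts[:, 2] + intrinsics[1, 2]
    in_view = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, depth = u[in_view], v[in_view], pts[in_view, 2]

    # Interpolate the sparse depth samples onto the full pixel grid.
    grid_v, grid_u = np.mgrid[0:h, 0:w]
    dense = griddata((v, u), depth, (grid_v, grid_u), method="nearest")
    return dense  # (h, w) array: an estimated distance for every pixel
```
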