AI Algorithm + Multiple Sensors Is Key To Safest AVs

Sensors Insights by Dr. Leilei Shinohara

A fusion of multiple sensors, software, and technologies is optimal for a perception sub-system: sensors such as front camera, radar, and LiDAR, combined with AI software algorithms, can build an ASIL D system with full redundancy for quasi-human perception and vehicle-level safety.

In the world of autonomous driving, six levels of driving automation have been defined under established industry definitions: Levels 0-5, spanning ADAS (Advanced Driver Assistance Systems) and autonomous driving (AD). Levels 0-2 relate to ADAS, the basic low-level systems (available since 2006) that increase car safety with features such as adaptive cruise control and lane departure warning. Levels 3-5 relate to more advanced autonomous driving, with sensor products shipping from RoboSense and others in 2020 and beyond.

High-level autonomous driving provides advanced sensing of the environment and driving with little or no human input. L3-5 autonomous driving includes automatic safety systems that control essential braking, steering, and other aspects of driverless operation. For these higher-level AD functions, safety is therefore one of the most important elements that car manufacturers must take into consideration. AD levels of L3 and above require systems with a functional safety level of ASIL D, the highest Automotive Safety Integrity Level (ASIL), a risk classification scheme specified by ISO 26262 that grades risk by the severity, exposure, and controllability of a hazardous driving situation.
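To make the classification concrete, here is a minimal Python sketch of how severity (S1-S3), exposure (E1-E4), and controllability (C1-C3) ratings combine into an ASIL. It exploits the fact that the assignments in the standard's lookup table follow the sum S + E + C; this is an illustrative approximation for intuition, not a normative implementation of ISO 26262.

```python
# Illustrative sketch of ISO 26262 ASIL determination.
# The standard defines a lookup table over Severity (S1-S3),
# Exposure (E1-E4), and Controllability (C1-C3); the additive
# shortcut below reproduces that table's assignments:
# S+E+C == 10 -> ASIL D, 9 -> C, 8 -> B, 7 -> A,
# <= 6 -> QM (quality management, no ASIL required).

def asil(severity: int, exposure: int, controllability: int) -> str:
    """Return the ASIL for S, E, C ratings (e.g. S=3, E=4, C=3)."""
    if not (1 <= severity <= 3 and 1 <= exposure <= 4
            and 1 <= controllability <= 3):
        raise ValueError("S must be 1-3, E 1-4, C 1-3")
    total = severity + exposure + controllability
    return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(total, "QM")

# A worst-case hazard (S3, E4, C3), e.g. loss of braking at highway
# speed, lands at ASIL D -- the level required for L3+ perception.
print(asil(3, 4, 3))  # -> "ASIL D"
```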

When our start-up team at RoboSense began LiDAR-based perception algorithm research, we realized that LiDAR was not only too expensive but also an immature technology. Perception relied, to a large extent, on raw point cloud data: measurements of many points on the external surfaces of surrounding objects. This raw point cloud data was produced by the sensor alone, which seriously hindered technology development. To provide perception power comparable to human eyes, the sensor hardware and software algorithms needed to be combined.

Therefore, we began development of our own LiDAR hardware/software combination in 2015. First, we developed a 3D laser scanner with a single laser transmitter that emitted 500,000 points per second at 2 mm accuracy, together with point cloud algorithm software for a variety of uses. Then, we launched a multi-line LiDAR that met the needs of autonomous driving and brought LiDAR hardware and perception algorithms to market together, creating a completely merged LiDAR-based environment perception system for autonomous driving.

We realized that, due to the physical limitations of current camera and radar sensors, neither sensor alone could achieve a fully functional ASIL D level and provide complete driving safety for the passengers of AD vehicles. Therefore, to provide the highest ASIL D level perception system in passenger vehicles, we found that a fusion of multiple sensors and technologies was optimal for a perception sub-system: sensors such as front camera, radar, and LiDAR, combined with AI software algorithms, build an ASIL D system with high redundancy for quasi-human perception and vehicle-level safety. In addition, for the highest safety, a powerful embedded computation platform is needed, along with a reliable communications system such as 5G V2X and an AD-friendly infrastructure such as an intelligent vehicle-road cooperative system, enabling interaction and data sharing between vehicles and infrastructure for safer road traffic and fewer potential dangers.
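As a rough illustration of why multi-sensor redundancy matters, the sketch below implements a simple cross-sensor voting scheme. It is a minimal toy, not RoboSense's fusion algorithm, and the 1.5 m gate and object fields are invented for the example: an object is kept only when at least two independent sensors report it at roughly the same position, so a single failed or blinded sensor can neither add nor silently drop an object.

```python
# Minimal sketch of redundancy-oriented late fusion (illustrative only):
# detections from camera, radar, and LiDAR are matched by position, and
# an object is confirmed only when >= 2 distinct sensors agree.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str      # "camera", "radar", or "lidar"
    x: float         # longitudinal position, meters
    y: float         # lateral position, meters
    label: str       # e.g. "car", "pedestrian"

def fuse(detections: list[Detection], gate: float = 1.5) -> list[Detection]:
    """Keep detections corroborated by another sensor within `gate` meters."""
    confirmed = []
    for det in detections:
        supporters = {
            other.sensor
            for other in detections
            if other.sensor != det.sensor
            and abs(other.x - det.x) < gate
            and abs(other.y - det.y) < gate
        }
        if supporters:  # at least one other sensor agrees
            confirmed.append(det)
    return confirmed

obs = [
    Detection("camera", 20.1, 0.2, "car"),
    Detection("lidar", 20.3, 0.1, "car"),
    Detection("radar", 55.0, 3.0, "car"),   # radar alone: not confirmed
]
print([d.sensor for d in fuse(obs)])  # -> ['camera', 'lidar']
```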

 

Fusion Based Sensor Technology

This fusion sensor concept is the basis of our perception system, combining different mechanical and MEMS-based LiDAR sensors with AI algorithms for autonomous driving technology that can virtually see the road even in the darkest or harshest weather conditions, whether fog, rain, or snow. The merged AI-based Smart Sensor System, a perception system that RoboSense has developed since the founding of the company, is targeted to be ASIL D and is well tested in our P-Series product, a combination of one 32-layer mechanical LiDAR, two 16-layer mechanical LiDARs, and the AI perception algorithm. The first smart LiDAR sensor system will be developed for the RoboSense M1 MEMS solid-state LiDAR, with the perception algorithm integrated within the M1 and running on an SoC chip.

Traditionally, LiDAR sensors provide point cloud data, measured by directly shooting laser beams at the surroundings. With the M1 Smart Sensor System, the point cloud is processed within the SoC, extracting object information from the point cloud. In other words, the perception algorithm software is the brain of the smart sensor, built to understand the surrounding environment. For example, from the point cloud, the M1 smart sensor identifies which points belong to different objects, extracts them, and then outputs the detected and classified object data. To illustrate this, see figure 1 below. On the left, you can see the final output from a smart sensor with the point cloud.

Fig. 1: Smart Sensor System: Left side: output from a Smart Sensor with the point cloud; Right side: pure point cloud data; Bottom center: View of the road from the driver’s perspective

On the right side, you can see the pure point cloud data alone. The Smart Sensor System can output the data depending on the customer's needs, as figure 2 below shows. Point cloud data with additional object information can also be provided.

Fig. 2: Smart Sensor System Data Output, Right side: View of the Road from the Driver’s Perspective
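To give a sense of what extracting object data from a point cloud involves, here is a deliberately simplified sketch. It assumes nothing about the M1's actual on-chip algorithm, which is far more sophisticated and not public: raw 3D points are grouped by Euclidean proximity, and each sufficiently dense cluster is emitted as a detected object with a bounding box. A real perception stack would add classification (car, pedestrian, and so on), tracking, and much faster data structures.

```python
# Toy sketch of point-cloud object extraction (illustrative only):
# greedy Euclidean clustering via flood fill over a k-d tree, with
# each dense cluster reported as an object plus its bounding box.

import numpy as np
from scipy.spatial import cKDTree

def extract_objects(points: np.ndarray, radius: float = 0.5,
                    min_points: int = 5) -> list[dict]:
    """Cluster an (N, 3) point cloud and return per-object summaries."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    objects = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = {seed}, [seed]
        while frontier:                      # flood-fill over neighbors
            idx = frontier.pop()
            for n in tree.query_ball_point(points[idx], radius):
                if n in unvisited:
                    unvisited.remove(n)
                    cluster.add(n)
                    frontier.append(n)
        if len(cluster) >= min_points:       # drop sparse noise
            pts = points[list(cluster)]
            objects.append({
                "num_points": len(pts),
                "bbox_min": pts.min(axis=0), # axis-aligned bounding box
                "bbox_max": pts.max(axis=0),
            })
    return objects

# Two well-separated synthetic clusters yield two detected objects.
a = np.random.default_rng(0).normal(0.0, 0.1, (50, 3))
b = a + 10.0
print(len(extract_objects(np.vstack([a, b]))))  # -> 2
```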

 

Conclusion

To achieve high-level, safe autonomous driving, an ASIL D safety grade environmental perception system is mandatory, especially for the object detection function. A single sensor is clearly not enough to achieve this essential level of safety. Therefore, a fusion of multiple sensors and algorithms is the safest, easiest, and lowest-cost way to accomplish it. Furthermore, adding perception algorithms is superior to using multiple sensors alone, since sensors by themselves provide an incomplete, raw-data-only interpretation of the road. This sensor fusion technology solution is what we believe will solve the inherent safety dilemma holding consumers back from mass adoption of autonomous driving in passenger vehicles.

 

About the author

Dr. Leilei Shinohara is Vice President of R&D at RoboSense. With more than a decade of experience developing LiDAR systems, Dr. Shinohara is one of the most accomplished experts in the field. Prior to joining RoboSense, he was the Technical Lead at Valeo, leading development of Valeo's SCALA LiDAR, the world's first automotive-grade LiDAR product, for Asian customers. He was responsible for multiple programs, including automotive LiDAR, automotive safety products, and sensor fusion projects, and managed an international product development team located in six countries, responsible for the development and implementation of systems, software, hardware, mechanics, testing and validation, and functional safety in building the first automotive-grade LiDAR product. RoboSense's website is: http://www.robosense.com.