Open Source Multi-Sensor Self-Driving Dataset Available to the Public

Scale has announced the release of what it believes to be the largest open source multi-sensor (LIDAR, RADAR, and camera) self-driving dataset: nuScenes, published by nuTonomy (acquired by Aptiv in 2017) and annotated by Scale. Academic researchers and autonomous vehicle innovators can now access the open-sourced dataset.

The nuScenes open source dataset is based on LIDAR point cloud, camera sensor, and RADAR data sourced from nuTonomy and labeled through Scale's annotation pipeline, producing data suitable for training autonomous vehicle perception algorithms. The full dataset comprises 1,000 twenty-second scenes, nearly 1.4 million camera images, 400,000 LIDAR sweeps, and 1.1 million 3D bounding boxes.
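
For researchers who want to work with the release directly, the snippet below is a minimal sketch of loading it with the companion nuscenes-devkit Python package. The dataroot path and the use of the small "v1.0-mini" split are assumptions for illustration; substitute the version string and location of whichever split you downloaded.

```python
# Minimal exploration sketch using the nuscenes-devkit
# (pip install nuscenes-devkit). The dataroot below is an
# assumed path; point it at your local copy of the dataset.
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)

# Each twenty-second scene is a sequence of keyframe "samples",
# each bundling synchronized camera, LIDAR, and RADAR readings.
scene = nusc.scene[0]
sample = nusc.get('sample', scene['first_sample_token'])

# Look up the top LIDAR sweep for this sample, along with the
# annotated 3D boxes that fall inside it.
lidar_token = sample['data']['LIDAR_TOP']
data_path, boxes, _ = nusc.get_sample_data(lidar_token)
print(f'{len(boxes)} annotated 3D boxes in this LIDAR sweep')
```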

Like RADAR, LIDAR senses the environment by emitting signals and measuring their reflections off surrounding objects; where RADAR uses radio waves, LIDAR emits invisible infrared laser light, allowing systems to compile three-dimensional point cloud maps of their environments and identify the specific objects within them. Correctly identifying surrounding objects from LIDAR data allows autonomous vehicles to anticipate those objects' behavior – whether they are other vehicles, pedestrians, animals, or other obstacles – and to safely navigate around them. In this pursuit, the quality of a multi-sensor dataset is a critical differentiator: it defines an autonomous vehicle's ability to perceive what is around it and to operate safely under real-world, real-time conditions.
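
To make the perception task concrete, here is a simplified sketch of how a labeled 3D box picks out the LIDAR points that belong to a single object. It assumes an axis-aligned box and synthetic points for brevity; real annotations, nuScenes' included, are oriented in 3D, so a production pipeline would first rotate the points into the box frame.

```python
import numpy as np

def points_in_box(points, center, size):
    """Boolean mask of points inside an axis-aligned 3D box.

    points: (N, 3) array of x, y, z LIDAR coordinates.
    center: (3,) box center. size: (3,) box extent per axis.
    """
    return np.all(np.abs(points - center) <= size / 2.0, axis=1)

# Hypothetical data: 100,000 random points and one car-sized box.
cloud = np.random.uniform(-50.0, 50.0, size=(100_000, 3))
mask = points_in_box(cloud,
                     center=np.array([0.0, 0.0, 0.0]),
                     size=np.array([4.0, 2.0, 1.5]))
print(f'{mask.sum()} points fall inside the box')
```

Counting points inside annotated boxes like this is how per-object point clouds are extracted, both for training perception models and for evaluating detection quality.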

For more details, visit nuScenes and Scale.