Nvidia unveils Orin chip for robots and AVs

Nvidia's new Orin chip will power a Drive AGX Orin platform for autonomous vehicle and robot training. It has 17 billion transistors. (Nvidia)

Nvidia announced on Wednesday a software-defined platform for autonomous vehicles and robots powered by a new chip called Orin.

The Orin system-on-chip (SoC) has 17 billion transistors and pairs Nvidia's next-generation GPU architecture with Arm Hercules CPU cores. Coupled with new deep learning and computer vision accelerators, Orin delivers 200 trillion operations per second, seven times the performance of the previous-generation Xavier SoC.

The software platform that Orin powers is called Nvidia Drive AGX Orin. It is designed to handle the deep neural networks that run robots and autonomous vehicles. Both Orin and Xavier are programmable through CUDA and TensorRT APIs, giving developers forward compatibility for their applications.


Nvidia recognizes the complexity and expense of building safe autonomous vehicles (AVs). Nvidia CEO Jensen Huang called the task “perhaps society’s greatest computing challenge.” The Orin family of products will be available to carmakers for their 2022 production timelines.

Virtually every company working on AVs is using Nvidia technology, according to Sam Abuelsamid, an analyst at Navigant Research.

Nvidia also announced that it will provide vehicle makers and researchers with access to Nvidia Drive deep neural networks (DNNs) for AV development on the Nvidia GPU Cloud container registry.

Nvidia also announced the availability of software tools that let developers customize DNNs using their own datasets. The tools support training DNNs with active learning, federated learning and transfer learning, letting developers speed up perception-software development by building on Nvidia's AV work.
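Nvidia's training tools are proprietary, but the core idea behind active learning can be sketched in plain Python: from a pool of unlabeled samples, pick the ones the current model is least confident about and send only those for human labeling. The scene names and confidence scores below are made-up illustrative values, not Nvidia APIs.

```python
# Minimal active-learning sketch: select the unlabeled samples the
# current model is least confident about, so labeling effort goes
# where it improves the perception model most.

def select_for_labeling(samples, predict_confidence, budget):
    """Return the `budget` samples with the lowest model confidence."""
    scored = sorted(samples, key=predict_confidence)
    return scored[:budget]

# Illustrative stand-in for a trained model's confidence scores.
confidences = {"clear_highway": 0.98, "night_rain": 0.41,
               "occluded_pedestrian": 0.35, "sunny_suburb": 0.95}

batch = select_for_labeling(list(confidences), confidences.get, budget=2)
print(batch)  # the two hardest scenes go to annotators first
```

In a real pipeline the confidence function would be the perception model's own uncertainty estimate; the principle of ranking by uncertainty is the same.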

In addition, Nvidia and Didi Chuxing announced they are working together to develop AV and cloud computing solutions.

Didi, operator of a leading mobile transportation platform, will use Nvidia GPUs in data center servers to train machine learning algorithms, and Nvidia Drive for inference on its Level 4 AVs. Drive fuses data from sensors such as cameras, lidar and radar, then uses DNNs to comprehend the 360-degree environment around a vehicle.
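Drive's fusion stack is far more sophisticated, but the basic idea of combining detections from multiple sensors into one picture of the surroundings can be sketched as a confidence-weighted merge. The sensor reports and object IDs below are hypothetical.

```python
# Toy sensor-fusion sketch: merge per-sensor object detections into a
# single world view, keeping the highest-confidence report per object.

def fuse_detections(*sensor_reports):
    """Each report maps object_id -> confidence; keep the best per object."""
    fused = {}
    for report in sensor_reports:
        for obj, conf in report.items():
            if conf > fused.get(obj, 0.0):
                fused[obj] = conf
    return fused

camera = {"car_12": 0.90, "pedestrian_3": 0.55}
lidar  = {"car_12": 0.97, "cyclist_7": 0.80}
radar  = {"car_12": 0.85}

world = fuse_detections(camera, lidar, radar)
print(world)  # {'car_12': 0.97, 'pedestrian_3': 0.55, 'cyclist_7': 0.8}
```

The design point is that each sensor covers the others' weaknesses (cameras struggle in darkness, radar lacks fine shape detail), so merging their detections yields a more reliable 360-degree view than any single sensor.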

RELATED: Nvidia GPUs and Arm processors marry in new server reference design

