Qualcomm lays out its smart transport vision, including vehicle prediction AI

Engineering autonomous vehicles and an intelligent transport system of the future was never going to be easy. The current state of AV research and development at major companies shows how complex the work will continue to be.

Many technologies come to bear to make AVs possible, including cellular vehicle-to-everything (C-V2X) communications that allow cars to talk to other cars or to infrastructure like traffic signals to navigate and move in a steady flow along highways and busy urban streets.

In addition, many companies make chips and sensors focused on how a car should react to the inputs it receives, whether cellular signals from other cars or readings from an array of sensors, including tiny cameras, radar, and sound or motion detectors.

To interpret those varied inputs, companies are also wrestling with how to process the data on super-fast GPUs and other processors that rely on artificial intelligence training and inference to make decisions directly within the vehicle or via a split-second connection to the cloud.

Qualcomm on Tuesday laid out the state of its AV work in a blog post and lengthy slide presentation that described a multi-pronged engineering approach spanning connectivity and telematics as well as in-vehicle compute capabilities.

Given its reputation in radio access networks and 5G, Qualcomm may seem focused primarily on cellular connectivity for AVs, but it also makes the Snapdragon Ride platform for AI deep learning functions such as perception, planning, action and connectivity.

The planning, prediction and action components of AI in autonomous vehicles may be the toughest nut to crack, according to one expert in the field. In other words, what will a car actually do when an imminent collision or other problem is detected by sensors or is communicated from the cloud, whether by a central traffic management system or an edge device like a roadside unit?

“There are plenty of problems in self-driving everywhere, but I believe the largest blocker is that we, as a total humanity, do not know how to solve the problem of prediction,” said Vladimir Iglovikov, senior computer vision engineer at Lyft. He works in the Level 5 (the highest level on the SAE scale) self-driving division at Lyft and holds a PhD in physics from UC Davis.

“V2X is an interesting technology but does not really help to solve problems that the autonomous industry is facing,” he added in a recent email to Fierce Electronics.

“For prediction, you know the map, you know how every car and pedestrian was moving in the last N seconds and you need to predict how will they move in the next M seconds.  The largest blocker lies in the research plane," Iglovikov added.  "We need more good public datasets and a lot of researchers focusing on the topic.”
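The task Iglovikov describes, taking each agent's positions over the last N seconds and forecasting the next M seconds, can be illustrated with a naive constant-velocity baseline. This is a minimal sketch of the problem's shape, not anything Lyft or Qualcomm actually ships; real systems use learned models far beyond simple extrapolation.

```python
import numpy as np

def predict_constant_velocity(history, dt, horizon_steps):
    """Extrapolate future (x, y) positions from observed ones.

    history: array of shape (n_steps, 2), positions sampled every dt seconds.
    Returns an array of shape (horizon_steps, 2) of predicted positions,
    assuming the agent keeps its most recent velocity.
    """
    history = np.asarray(history, dtype=float)
    # Estimate velocity from the last two observations.
    velocity = (history[-1] - history[-2]) / dt
    steps = np.arange(1, horizon_steps + 1)[:, None]
    return history[-1] + steps * velocity * dt

# An agent that moved 1 m per second along x for the last 3 seconds:
past = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
future = predict_constant_velocity(past, dt=1.0, horizon_steps=3)
# future → [[3, 0], [4, 0], [5, 0]]
```

The hard research problem is exactly where this baseline fails: pedestrians stop, cars turn, and agents react to one another, which is why Iglovikov argues for better public datasets and more researchers on the topic.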

RELATED: Lyft engineer sees self-driving as long game, calls for public prediction data

Lyft is sponsoring a $30,000 coding competition featured on the Kaggle developer community website that challenges developers to build models to reliably predict the movement of traffic and pedestrians around self-driving vehicles.  The competition, started Tuesday, has attracted 53 teams who face a three-month deadline.

The problem of prediction research goes beyond efficient sensors, faster accelerator chips and low-latency networks. It will involve studying how the human brain, and more specifically the human driver, makes predictions, and how that knowledge can be applied to machines.

Qualcomm has offered a few insights in its new slide presentation about how it expects an intelligent transport system to develop. One approach envisions a highway system with smart roadside units containing on-device capabilities for sensing, processing and security, shared by multiple network operators and the vehicles themselves.

Within a vehicle, Qualcomm’s Snapdragon Ride platform relies on more than 30 concurrent deep learning networks as well as advanced radar perception with deep learning, according to one slide. Model-based reinforcement learning approaches are used for prediction and planning. Qualcomm has also developed a family of systems-on-chip and accelerators for increasingly complex levels of autonomous driving, up to Level 5, the highest level.
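The core idea behind model-based approaches to planning, described only at a high level in Qualcomm's slides, is to simulate candidate actions forward through a learned dynamics model and pick the one with the best predicted outcome. The sketch below is a toy illustration of that loop under assumed stand-in functions, not Qualcomm's implementation.

```python
import numpy as np

def plan_with_model(state, dynamics_model, reward_fn, candidate_actions, horizon):
    """Roll each candidate action forward through the model; return the best one."""
    best_action, best_return = None, -np.inf
    for action in candidate_actions:
        s, total = state, 0.0
        for _ in range(horizon):
            s = dynamics_model(s, action)   # simulate one step ahead
            total += reward_fn(s)           # accumulate predicted reward
        if total > best_return:
            best_action, best_return = action, total
    return best_action

# Toy 1-D example: the "vehicle" sits at position 0 and should approach position 5.
best = plan_with_model(
    state=0.0,
    dynamics_model=lambda s, a: s + a,     # trivial stand-in for a learned model
    reward_fn=lambda s: -abs(s - 5.0),     # closer to the goal is better
    candidate_actions=[-1.0, 0.0, 1.0],
    horizon=3,
)
# best → 1.0 (move toward the goal)
```

In a real AV stack the dynamics model would be a trained network predicting how traffic evolves, which is precisely where the prediction problem discussed above re-enters.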

COVID-19 and the downturn in auto sales give AI and AV researchers an opening to do some of the deep research into prediction that is still needed. Some surveys show that sales of automotive semiconductors could be stunted for a year or two, but some companies that make sensors for cars have told Fierce Electronics they expect a four- or five-year impact from COVID on their sales, which may stretch AV research dollars over a longer period.