Automotive ADAS: Driving force behind embedded vision’s future growth

Embedded vision is expected to grow substantially over the next decade. To find out what the key applications will be and what hurdles have yet to be overcome, Fierce Electronics sat down with Pierre Cambou, Principal Analyst, Technology & Market, Imaging, at Yole Développement.

FE: What will be the key applications for embedded vision in the next decade?

Cambou: Automotive ADAS will dominate, starting with forward-looking cameras and exploding into a flurry of sub-applications such as side-view, surround-view, mirror-replacement, and driver-monitoring cameras. Robotic vehicles will be another area worth monitoring, because they may introduce new approaches, such as thermal imaging, that will matter for the future of autonomy.

FE: What are the core components of vision-based ADAS systems?

Cambou: Sensors and computing. On the sensor side, the image sensor is integrated into a camera module. On the computing side, the function can be partly within the camera module and partly within the centralized computing unit of the car.
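A minimal sketch of that split may help. This is purely illustrative; the class and method names below are hypothetical, not any vendor's API:

```python
# Illustrative sketch of the sensor/computing split Cambou describes:
# some processing is embedded in the camera module, and the rest runs in
# the car's centralized computing unit. All names here are hypothetical.

class CameraModule:
    """Image sensor plus the compute embedded alongside it in the module."""

    def capture_and_preprocess(self) -> dict:
        # In-module work: image signal processing (demosaic, HDR merge) and
        # possibly early feature extraction, reducing vehicle-network traffic.
        return {"features": [], "timestamp_us": 0}


class CentralComputingUnit:
    """Centralized unit fusing the outputs of all camera modules."""

    def process(self, module_outputs: list[dict]) -> dict:
        # Central work: multi-camera fusion, object detection/tracking,
        # and downstream driving functions.
        return {"objects": [], "num_sources": len(module_outputs)}


# Each camera module does part of the job; the central unit does the rest.
modules = [CameraModule() for _ in range(4)]
result = CentralComputingUnit().process([m.capture_and_preprocess() for m in modules])
```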

FE: What do you mean by “unification of technologies” of the sensors and computing suite?

Cambou: The sensors and the computing suite will continue to evolve simultaneously. I do not think they will fuse physically, but some computing will be embedded into the sensors, and it will be custom-designed to receive the sensing data. For both ADAS and robotic approaches, the technology trends will eventually merge, meaning they will use similar sensors with similar specifications.

FE: What is the main difference between the technologies that will be adopted in robotic cars and those adopted in conventional cars?

Cambou: Robotic vehicle cameras come from the industrial world, so they are very versatile in how they operate. They are at the forefront of what is possible in terms of performance, resolution, dynamic range, noise, and other key characteristics. When it comes to technologies for conventional cars, other aspects, such as cost, reliability, and robust supply chains, are what matter most.

FE: You point out that robotic cars are currently using or will use high-end industrial sensors. Why borrow from industrial, rather than developing something unique for automotive?

Cambou: Automotive is a difficult market for semiconductor players, as it has a very specific culture. The automotive cameras and radars currently produced at volumes well above 100 million units a year are very standardized, cost-effective solutions for a very precise job. But these sensors are designed with a $30,000 car in mind, and that is probably not the best way to aim at autonomy right now. Robotic vehicles do achieve autonomy today, but that requires far greater performance, even when it means paying 50X more for the technology.

FE: What is the expected market size for sensor modules and computing platforms in 2030 and how does it compare to today?

Cambou: Today we consider the ADAS sensor market to be in the range of $3B, with the associated computing platform at $4B. In 2030, if we consider a market of 10M vehicles beyond Level 3, the sensor market will represent $15B, and the associated computing could represent as much as $150B.
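Those totals imply striking per-vehicle content. Here is a rough back-of-the-envelope illustration; it assumes, for simplicity, that the projected totals are spread evenly across those 10M vehicles, which the interview does not state explicitly:

```python
# Rough per-vehicle arithmetic implied by the 2030 projections quoted above.
# Assumption (ours, not Yole's): the $15B sensor and $150B computing totals
# are spread evenly across the 10M vehicles beyond Level 3.
vehicles_2030 = 10_000_000         # projected vehicles beyond Level 3
sensor_market_usd = 15e9           # projected ADAS sensor market, 2030
compute_market_usd = 150e9         # projected associated computing market, 2030

print(f"Sensor content per vehicle:    ${sensor_market_usd / vehicles_2030:,.0f}")   # $1,500
print(f"Computing content per vehicle: ${compute_market_usd / vehicles_2030:,.0f}")  # $15,000
```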

FE: What will hitting those market projections depend on?

Cambou: Both ADAS and robotic vehicle market dynamics have to be sustained over time. Given the continuous investments we saw in the last decade and the first palpable results we see today (Mobileye achieving more than $1B and Waymo launching its mobility-as-a-service operation), there is a good probability we can get there. Maybe what we really need is an iPhone 2007 moment, a player designing the right product for the right market; in that case the future could be beyond our expectations.

FE: And, finally, what needs to happen in order for ADAS vehicles to achieve L3 by 2025?

Cambou: Computing is currently the bottleneck. Tesla invested in a full self-driving (FSD) computer, which integrates two FSD chips, each having two custom-designed neural processing units with 36.86 trillion operations per second (TOPS) of peak performance. This computer has an overall peak performance of 144 TOPS. But in our opinion, this is far from what’s needed. NIO announced it would go with four Nvidia Orin chips at 1 PetaOps (a quadrillion deep learning operations per second, for about 1 kW of power), so, yes, this could start to work. By 2025 there are two options: either find a way to drive an autonomous car with 144 TOPS (two FSD chips), or bring the price of 1-PetaOps computers down below $2,000.
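As a quick sanity check on those compute figures, the arithmetic below uses only the numbers quoted above; note that the straight multiplication gives roughly 147 TOPS, while Tesla's headline figure is 144 TOPS:

```python
# Sanity-check arithmetic on the compute figures quoted above.
NPU_TOPS = 36.86          # peak TOPS per custom neural processing unit
NPUS_PER_CHIP = 2         # two NPUs per Tesla FSD chip
CHIPS_PER_COMPUTER = 2    # two FSD chips per FSD computer

chip_tops = NPU_TOPS * NPUS_PER_CHIP              # ~73.7 TOPS per chip
computer_tops = chip_tops * CHIPS_PER_COMPUTER    # ~147.4 TOPS (headline: 144 TOPS)

# NIO's announced platform: four Nvidia Orin chips at ~1 PetaOps total.
nio_tops = 1_000          # 1 PetaOps expressed in TOPS

print(f"Tesla FSD computer: ~{computer_tops:.1f} TOPS peak")
print(f"NIO platform vs. FSD computer: ~{nio_tops / computer_tops:.1f}x peak compute")
```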

Editor’s Note: Yole Développement’s Pierre Cambou will be speaking on Embedded Vision, at Embedded Innovation Week: Winter Edition, a digital event series taking place January 25-27, 2021. For more information and to register for your free pass, click here.
