Far Infrared Tech Brings Safe, Fully-Autonomous Driving To The Masses

The race for the fully autonomous vehicle continues. But the current state of most OEMs’ sensing technologies makes further advancement of AVs very challenging. The most commonly used sensing solutions (radar, CMOS cameras, and lidar) have profound gaps in their abilities. Currently, none of these solutions, separately or combined, can adequately and reliably provide autonomous vehicles with safe and complete coverage of their surroundings in all weather and lighting conditions.


Radar, for example, can detect objects at long range but cannot clearly identify them. Cameras are much better at identifying objects but can only do so at close range and are blind in some severe environmental conditions. Many automakers, therefore, are combining radar and cameras to provide more complete coverage of a vehicle’s surroundings: A radar sensor detects an object far down the road; as the vehicle approaches, a camera provides a clearer picture of it. Alternatively, for more detailed information about distant objects, a wide field-of-view radar detection can be complemented by a narrow field-of-view camera input.



Lidar (light detection and ranging) sensors are another perception solution for most autonomous vehicles, but there are situations in which they don’t work effectively. Like radar, lidar works by sending out signals and using the reflection of those signals to measure the distance to an object. (Radar uses radio signals; lidar, lasers or light waves.) Lidar sensors can provide a wider field of view than radar, but the cost of this solution remains prohibitive for mass-market applications. Lidar’s angular resolution has also yet to improve enough to allow identification of small and/or distant objects. When developing new lidar sensors, manufacturers must choose among eye-safe wavelength, price, resolution, and range. The eye-safe wavelength band (around 1550 nm) is known to require expensive hardware; non-eye-safe wavelengths (near IR), while cheaper to produce, are required by regulation to use lower emission power, resulting in a shorter detection range. For this reason, lidar manufacturers can at present only produce lidar that is either unfeasibly costly or short-range, neither of which can deliver Level-3 or higher autonomy to the mass market.
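As a minimal sketch (not from the article) of the ranging principle that radar and lidar share, the distance to an object follows directly from the measured round-trip time of the emitted signal:

```python
# Hedged illustration: time-of-flight ranging, the principle behind
# both radar and lidar distance measurement.
C = 3.0e8  # speed of light in m/s (approximate)

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance = c * t / 2: the pulse travels out to the object and back."""
    return C * t_seconds / 2.0

# A 1-microsecond round trip corresponds to a target 150 m away.
print(f"{distance_from_round_trip(1.0e-6):.0f} m")
```

This is why emission power matters so much: a regulation that caps power directly caps the range at which a detectable echo returns.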


A different sensor is needed to enable Level-3 to Level-5 autonomy


According to the U.S. Energy Information Administration, Level-5 autonomy is defined as full automation, meaning: “The vehicle is capable of performing all driving functions under all conditions.” Performing in all conditions, then, includes the ability to operate autonomously in adverse weather conditions and at night. 


Delivering effective perception in these circumstances is particularly crucial for the development of autonomous vehicles, as adverse weather and nighttime driving are among the most common contributors to vehicular accidents.


The American Meteorological Society identifies adverse weather conditions as a primary cause of vehicular accidents, claiming 480,000–800,000 injuries, 7,000 deaths, and $22–25 billion in costs each year. Wildlife also poses a continuous threat to vehicles: The U.S. Department of Transportation Federal Highway Administration estimates that there are 1–2 million collisions between vehicles and large animals every year in the U.S. alone. Driving at night is another known danger, particularly where vehicles must coexist with pedestrians: The Insurance Institute for Highway Safety reports that pedestrian fatalities increase the most after the sun has gone down. In fact, over three-quarters of all pedestrian deaths happen at night.


However, the current sensors in use (radar, cameras, and lidar) do not presently have the sensing capabilities needed to provide adequate coverage in these circumstances. For this reason, many OEMs are now exploring FIR (far infrared) cameras as a promising technology to provide the sensing capabilities needed to achieve Levels 3, 4, and 5 vehicle autonomy.


FIR technology delivers the sensing capabilities that Level-5 autonomy demands


Far infrared (FIR) thermal sensors give vehicles complete, reliable detection of the road and its surroundings. They can do this when other sensing solutions can’t because of several distinct technical advantages.


A Different Band of the Electromagnetic Spectrum


For one, by accessing a different band of the electromagnetic spectrum, FIR sensors can retrieve an additional layer of information that other sensors can’t. Unlike radar and lidar sensors, which both transmit and receive signals, a FIR camera passively collects signals by detecting the thermal energy that radiates from objects. By sensing this infrared radiation, at wavelengths far longer than those of visible light, FIR cameras access a different band of the electromagnetic spectrum than other sensing technologies do. Thus, the FIR camera can generate a new layer of information, enabling it to detect objects that may not otherwise be perceptible to radar, cameras, or lidar.
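A quick illustration of why this band is the right one (standard physics, not from the article): Wien’s displacement law shows that objects at everyday temperatures radiate most strongly in the far infrared, well outside what visible-light cameras can see.

```python
# Hedged sketch: Wien's displacement law gives the wavelength at which
# a blackbody at a given temperature radiates most strongly.
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def peak_wavelength_um(temp_kelvin: float) -> float:
    """Peak blackbody emission wavelength, in micrometers."""
    return WIEN_B / temp_kelvin * 1e6

# A human body (~310 K) peaks near 9.3 um -- squarely in the FIR band,
# far beyond visible light (~0.4-0.7 um).
print(round(peak_wavelength_um(310.0), 2))
```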

Electromagnetic spectrum


In addition to reading an object’s thermal signature, FIR cameras also capture objects’ emissivity: how efficiently a surface emits thermal radiation relative to an ideal blackbody. Since every material has a different emissivity, a FIR camera can use this signature to detect and classify objects, living or non-living, in its path.
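As a sketch of this effect (standard radiometry, not the author’s model), the Stefan-Boltzmann law scaled by emissivity shows how two surfaces at the same temperature can still radiate very different amounts of power, which is one cue for telling materials apart. The emissivity values below are illustrative assumptions.

```python
# Hedged illustration: radiated power per unit area, M = epsilon * sigma * T^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiant_exitance(emissivity: float, temp_kelvin: float) -> float:
    """Power radiated per unit area by a surface of given emissivity."""
    return emissivity * SIGMA * temp_kelvin ** 4

# Same temperature, different materials (illustrative emissivities):
skin = radiant_exitance(0.98, 305.0)   # human skin: high emissivity
metal = radiant_exitance(0.10, 305.0)  # polished metal: low emissivity
print(f"skin: {skin:.0f} W/m^2, metal: {metal:.0f} W/m^2")
```

Even with identical temperatures, the two surfaces differ by nearly 10x in radiated power, so they look very different to a FIR sensor.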




Passivity is another advantage that FIR has over other sensing solutions. For example, many autonomous vehicles currently use lidar and/or radar; these, however, are both active, energy-emitting modalities. As such, the lidar and/or radar installed and functioning on one vehicle may interfere with and upset that of another passing vehicle. Conversely, because FIR is a passive technology, it can work to detect and cover a vehicle’s surroundings without ever upsetting the sensors of other vehicles.


Invariant Images


FIR further trumps other sensing modalities because it delivers an image that is invariant to lighting conditions: Its image perception is not compromised by the color of an object, an object’s background, the direction of illumination, a multiplicity of light sources, specular reflections, or many other image irregularities that stem from variable lighting conditions in regular CMOS images. For instance, although CMOS cameras are usually quite good at detecting lanes and other road markings, they may struggle to accurately detect the drivable road area, even in daylight, due to the high variance in the visual road appearance in CMOS images. The visual road appearance in a FIR image is much less variable; it retains similar characteristics across many different lighting conditions.


Both by accessing a new layer of information and by producing invariant images, FIR delivers intensely detailed images to the autonomous vehicle. This rich imagery, in turn, improves any feature of the AV that relies on machine perception, such as: on-road object detection, classification, intention recognition, tracking, distance estimation, semantic segmentation, and SLAM (simultaneous localization and mapping).


Unidirectional Light Property


Another reason the FIR spectrum is attractive for imaging in inclement weather is its unidirectional light property. In the visible spectrum, light must make two trips: First, it travels from the source (e.g., the sun or a headlight) to an object; then, after it has been reflected or scattered by that object, it travels to the camera. In the FIR spectrum, a significant signal component makes only one trip: It is emitted by the object itself and travels directly to the camera. Because this light makes only one trip, it is attenuated only once, and significantly fewer light signals are reflected or scattered by air particles toward the FIR camera. Consequently, a FIR sensing solution can produce high-fidelity images in inclement weather, as it is this reflected or scattered light that creates the cloudy, misty, hazy, or dusty artifacts observed in the visible light spectrum.
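The one-trip vs. two-trip argument can be made concrete with the Beer-Lambert law; the extinction coefficient and range below are hypothetical values, not figures from the article.

```python
# Hedged sketch: Beer-Lambert attenuation of a signal passing through
# fog or haze. A round trip traverses the medium twice, so the visible
# signal is attenuated twice as heavily (in exponent) as emitted FIR.
import math

def transmitted_fraction(alpha_per_m: float, path_m: float) -> float:
    """Fraction of the signal surviving a path through a scattering medium."""
    return math.exp(-alpha_per_m * path_m)

alpha, rng = 0.01, 100.0  # hypothetical: 1%/m extinction, 100 m to target
one_trip = transmitted_fraction(alpha, rng)       # FIR: emitted, one pass
two_trips = transmitted_fraction(alpha, 2 * rng)  # visible: out and back
print(f"one trip: {one_trip:.2f}, two trips: {two_trips:.2f}")
```

Under these illustrative numbers, roughly 37% of a one-trip signal survives versus about 14% of a two-trip signal, and the gap widens rapidly as the fog thickens.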


Temperature Detection


A FIR camera’s impressive temperature detection adds further to its ability to produce high-quality images. Modern FIR cameras can resolve temperature differences below 0.05°C, allowing high-contrast imaging of living objects, whose temperatures often stand out significantly from their backgrounds. FIR cameras can detect pedestrians over 200 meters away, as recently demonstrated with VGA (640x480) FIR sensors and 17-degree field-of-view lenses. FIR-based systems are also able to track objects, identify the drivable road area, analyze pedestrian intention, and complete many other perception tasks that CMOS cameras can do only in ideal conditions, not in challenging weather or at night.
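A back-of-the-envelope check of the 200-meter pedestrian figure: the sensor width and lens field of view are from the article, while the 0.5 m pedestrian width is an assumption for illustration.

```python
# Hedged sketch: how many horizontal pixels a target subtends at range,
# given the sensor's field of view and horizontal resolution.
import math

def pixels_on_target(fov_deg: float, h_pixels: int,
                     range_m: float, target_width_m: float) -> float:
    """Horizontal pixels covering a target of given width at given range."""
    ifov_rad = math.radians(fov_deg) / h_pixels  # angle seen by one pixel
    footprint_m = ifov_rad * range_m             # width one pixel covers
    return target_width_m / footprint_m

# VGA sensor (640 px wide), 17-degree lens, pedestrian at 200 m:
px = pixels_on_target(17.0, 640, 200.0, 0.5)
print(round(px, 1))  # roughly 5 pixels across
```

A few pixels of width, combined with a strong thermal contrast against the background, is what makes detection at that distance plausible.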


In addition to outperforming other sensing solutions in adverse weather conditions and at night, FIR cameras have other impressive sensing capabilities that make them the key to enabling Level-3 and up autonomy. For instance, when faced with dynamic lighting conditions, such as direct sunlight or oncoming headlights (which are common weaknesses for lidar and CMOS cameras), a FIR camera is still able to produce a high-quality image. This capability is necessary for autonomous vehicles to be able to navigate urban terrain. Consider a vehicle that is entering or exiting a tunnel: The lighting quickly changes from light to dark (or vice versa), momentarily blinding the vehicles’ CMOS cameras and/or lidar. A FIR camera, on the other hand, can provide continuously clear coverage of the vehicle’s surroundings.


FIR is a mature and proven technology


Automakers’ aggressive plans to deploy fully autonomous vehicles within the next five to ten years cannot be realized with the current sensors in use. To meet their goals and deliver safe, Level-5 autonomy to the mass market, automakers must turn to FIR technology, yet many have been curiously slow to do so.


FIR has been used for decades in other vertical industries (including military and aviation), making it a mature and proven technology with demonstrated scalability for mass-market applications. But while FIR technology has matured in these other industries, it has been neglected by the automotive market, even though autonomous vehicles are in dire need of an accurate, reliable sensing solution. Experts postulate that FIR technology has remained out of the mass-market auto industry because legacy companies have, until recently, reserved the technology exclusively for luxury brands.


The FIR Revolution


Now, however, FIR is in the midst of a revolution, and a new sensor company, AdaSky, is leading the way. To achieve mass production and overcome installation challenges, Israeli startup AdaSky has developed Viper, a high-resolution thermal camera that passively collects FIR signals, converts them into VGA video, and applies deep-learning computer vision algorithms to sense and analyze its surroundings.

Viper high-resolution thermal camera


Thermal cameras require lenses made from materials that are transparent in the FIR spectrum. But the optical materials historically used for FIR, such as germanium, are very expensive and thus are not suitable for mass production. However, newer chalcogenide glass materials and molding manufacturing technology are now making thermal cameras affordable. Coupled with advanced detector fabrication techniques adopted from the semiconductor chip industry, this allows thermal imagers to achieve improved quality, a smaller size, and an affordable price.


Additionally, AdaSky’s Viper is unique in that it does not require a shutter (which other FIR cameras typically do), thanks to the startup’s shutterless non-uniformity correction, and it is the only camera to apply a dedicated system-on-a-chip ISP along with one of the smallest detector pitches and wafer-level packaging technologies. AdaSky can make Viper as small as Ø26 mm and 44 mm long with a 30.4° FOV, enabling the startup to deliver FIR technology at a low price and in a size small enough to permit aesthetically pleasing installation almost anywhere on a vehicle.


AdaSky’s Viper works together with state-of-the art machine-vision algorithms to enable better perception of vehicles’ surroundings. With multi-class object detection and classification, Viper enables vehicles to simultaneously detect and classify pedestrians, vehicles, trucks, bicycles, and motorcycles.


FIR technology, and Viper specifically, is the only sensing solution capable of delivering the perception and coverage needed to facilitate Level-3 to Level-5 autonomy for the mass market. AdaSky has recently secured a $20 million investment from global automotive supplier Sungwoo Hitech Co., Ltd., signifying the industry’s endorsement of FIR technology. As AdaSky continues to lead this FIR revolution, other automotive suppliers and OEMs are becoming cognizant of the important role FIR technology will play in bringing safe, fully autonomous driving to the mass market.
