Near-Infrared Imaging Enables Machine-Vision Advances

Sensors Insights by Lindsay Grant

Machine vision is traditionally defined as electronic imaging used for inspection, process control, and automatic guidance.1 In machine-vision applications, computers, rather than humans, use imaging technologies to capture images as input and to extract and deliver information as output. Beyond conventional industrial applications, machine vision for advanced driver-assistance systems (ADAS), augmented and virtual reality (AR/VR), and smart security systems calls for advanced digital-imaging capabilities that see better and farther in low-light or no-light conditions and draw less battery power, all without interfering with human activity.

Historically, machine-vision technologies have relied on a number of light sources to capture images, including fluorescent, quartz halogen, LED, metal halide (mercury), and xenon.2 Used alone, these sources consume considerable power and yield poor image quality. As such, they are not sufficient for use beyond traditional industrial applications.

AR/VR, security, and ADAS driver-monitoring applications that rely on eye tracking, facial recognition, and gesture control, as well as security systems and ADAS surround-view cameras that feature night vision, require illumination that goes beyond the visible light spectrum. Over the past few years, advances in digital near-infrared (NIR) imaging have revolutionized these machine- and night-vision capabilities.


Why NIR Is Necessary For Current Machine-Vision Applications

NIR light actively illuminates objects or scenes outside of the visible light spectrum, allowing the camera to see in low- or no-light situations beyond the capability of human vision. While low-level LEDs may still be required to augment NIR in some applications, NIR illuminators require less power and are less likely to interfere with the user. This is important for accurate eye tracking and gesture control in AR/VR applications, for example, or for driver-monitoring systems. In the case of security-camera applications, NIR allows intruders to be detected without their knowledge.

Additionally, more photons are available at NIR wavelengths at night than in the visible spectrum, making NIR ideal for night-vision applications. Take, for example, a comparison of two approaches to night vision in ADAS. In one, the auto manufacturer used a passive, far-infrared (FIR) system that registers images based on body heat and displays a bright, negative image. While it can see what’s in its path up to 980 feet away, the image isn’t clear, and detection depends on the object emanating heat.

By contrast, another manufacturer used NIR technology that produced a sharp, clear picture in the dark, as if the scene were illuminated by the vehicle’s high beams, and images are captured regardless of the object’s temperature. However, this NIR system has a maximum effective range of less than 600 feet.3


NIR Limitations

As much as NIR provides a marked improvement over alternative approaches in most cases, it is not without its challenges. The effective range of an NIR imaging system is directly related to its sensitivity. At their best, current NIR sensor architectures achieve sensitivity only at wavelengths of 800 nm and below. Extending sensitivity to 850 nm or beyond would further extend that range.

The effective range of NIR light-based imaging is determined by two key measurements: quantum efficiency (QE) and modulation transfer function (MTF). An imager’s QE represents the ratio of photons it captures that are converted to electrons. The higher the QE, the farther the NIR illumination can reach, and the brighter the image. A QE of 100% means all of the photons captured are converted to electrons, achieving the brightest possible image. Currently, the best NIR sensor technologies achieve only 58% QE.
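As a rough illustration of what this ratio means in practice, the sketch below (Python) converts an incident photon count into signal electrons at different QE levels. The photon count is a made-up example; only the 58% figure comes from the text above.

```python
# Illustrative only: signal electrons produced at different quantum efficiencies.
# The 58% figure is quoted in the article; the photon count is hypothetical.

def signal_electrons(incident_photons: int, qe: float) -> float:
    """QE is the fraction of captured photons converted to electrons."""
    return incident_photons * qe

photons = 10_000  # hypothetical photons reaching one pixel during an exposure

for qe in (0.58, 0.90, 1.00):
    print(f"QE {qe:>4.0%}: {signal_electrons(photons, qe):,.0f} electrons")

# QE 100% converts every captured photon, giving the brightest possible image;
# at 58% QE the same scene yields roughly 42% less signal.
```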

MTF refers to the measurement of an image sensor’s ability to transfer contrast at a particular resolution from the object to the image.4 The higher the MTF, the sharper the image. MTF is degraded by the signal crosstalk that arises when electrons stray out of the pixel in which they were generated into neighboring pixels. Therefore, to maintain a high MTF and to achieve sharp images, electrons need to remain inside the pixel.
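Contrast here is typically measured as Michelson contrast, (Imax − Imin)/(Imax + Imin), and MTF at a given spatial frequency is the ratio of image contrast to object contrast. A minimal sketch, with invented intensity values:

```python
# Illustrative only: MTF as the ratio of image contrast to object contrast.
# The intensity values are invented for this example.

def michelson_contrast(i_max: float, i_min: float) -> float:
    return (i_max - i_min) / (i_max + i_min)

# A perfect black/white test pattern has contrast 1.0.
object_contrast = michelson_contrast(1.0, 0.0)

# Crosstalk (electrons leaking into neighboring pixels) raises the dark
# level and lowers the bright level in the captured image.
image_contrast = michelson_contrast(0.8, 0.2)

mtf = image_contrast / object_contrast
print(f"MTF at this spatial frequency: {mtf:.2f}")  # 0.60
```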

Fig. 1: This simulated image comparison showcases low MTF vs. high MTF.


Challenges With Existing Solutions

At the longer wavelengths used in nonvisible NIR applications, silicon absorbs light less efficiently: QE drops off because photons penetrate deeper into the crystal lattice before being absorbed. A greater depth of silicon is therefore required to absorb the same number of photons and generate the same signal. The conventional approach to increasing QE is simply to use thicker silicon, which raises the chance of photon absorption, offering higher QE and increased signal strength as compared with thinner silicon.
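A minimal Beer–Lambert sketch makes the thickness trade-off concrete. The absorption lengths below are rough literature values for silicon (on the order of 18 µm at 850 nm and 50 µm at 940 nm), used purely for illustration:

```python
import math

# Illustrative only: fraction of photons absorbed in a silicon layer of
# thickness d, using Beer-Lambert absorption P = 1 - exp(-d / L).
# L is the 1/e absorption length; the values below are rough literature numbers.

ABSORPTION_LENGTH_UM = {850: 18.0, 940: 50.0}  # approximate, for illustration

def absorbed_fraction(thickness_um: float, wavelength_nm: int) -> float:
    L = ABSORPTION_LENGTH_UM[wavelength_nm]
    return 1.0 - math.exp(-thickness_um / L)

for d in (3.0, 10.0, 100.0):  # thin conventional pixel vs. very thick silicon
    for wl in (850, 940):
        print(f"{d:>6.1f} um silicon @ {wl} nm: "
              f"{absorbed_fraction(d, wl):.0%} absorbed")
```

Under these assumed values, a few microns of silicon absorbs only a small fraction of 850-nm light, while 100 µm absorbs nearly all of it, which is why thicker silicon helps.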

In the case of single-pixel detectors, using thicker silicon improves NIR QE to more than 90%. However, when the application calls for smaller pixels, increasing the silicon thickness to 100 µm, for example, causes photons to stray into neighboring pixels, creating crosstalk, which in turn decreases MTF. The result is an imager that, while more sensitive to NIR illumination, has lower resolution, producing a bright but blurry image.
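The resolution loss can be sketched with a toy one-dimensional crosstalk model; the leakage fraction below is a hypothetical parameter, not a measured value:

```python
# Illustrative only: how pixel-to-pixel crosstalk lowers contrast (and thus MTF).
# 'leak' is a hypothetical fraction of each pixel's signal lost to each neighbor;
# wrap-around edges keep the toy model simple.

def apply_crosstalk(pixels: list[float], leak: float) -> list[float]:
    n = len(pixels)
    return [
        pixels[i] * (1.0 - 2.0 * leak)   # signal that stays in pixel i
        + pixels[(i - 1) % n] * leak     # leakage in from the left neighbor
        + pixels[(i + 1) % n] * leak     # leakage in from the right neighbor
        for i in range(n)
    ]

pattern = [1.0, 0.0] * 4                 # alternating bright/dark pixels
blurred = apply_crosstalk(pattern, leak=0.1)

contrast = (max(blurred) - min(blurred)) / (max(blurred) + min(blurred))
print(f"contrast after 10% crosstalk: {contrast:.2f}")  # 1.00 drops to 0.60
```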

One way to combat this is to use deep trench isolation (DTI) to create a barrier between the pixels. While standard DTI has been shown to improve MTF, it can also generate defects that corrupt the dark area of the image. This has created a conundrum for companies working to improve NIR illumination for machine-vision applications.


Fig. 2: The absorption structure’s shape causes the path of light to be longer inside of the silicon, rather than straight up and down.

Technology Breakthroughs

Recently, however, technology breakthroughs have remedied the problems of relying on thicker silicon alone to increase photon absorption. First, an extended DTI approach was developed using advanced 300-mm foundry processes to build a silicon oxide barrier between adjacent pixels; the refractive-index step between the oxide and the silicon creates optical confinement within the same pixel. Unlike conventional DTI, in which the trench widens as it goes deeper, extended DTI keeps the trench narrow along its full depth, helping to contain the photons.
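The confinement can be understood through total internal reflection at the silicon/oxide trench wall. A quick sketch, assuming approximate NIR refractive indices of about 3.6 for silicon and 1.45 for silicon dioxide:

```python
import math

# Illustrative only: critical angle for total internal reflection at the
# silicon / silicon-oxide trench wall. Indices are approximate NIR values.
N_SILICON = 3.6   # refractive index of silicon near 850 nm (approx.)
N_OXIDE = 1.45    # refractive index of silicon dioxide (approx.)

critical_angle = math.degrees(math.asin(N_OXIDE / N_SILICON))
print(f"critical angle: {critical_angle:.1f} deg from the trench-wall normal")

# Photons striking the oxide wall at more than ~24 deg from the normal are
# totally internally reflected back into the same pixel instead of crossing
# into a neighbor.
```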

Second, an absorption structure, similar to the pyramid structures used in solar-cell processing, is implemented on the wafer surface to create a scattering optical layer. Careful implementation of this layer can prevent defects from occurring in the dark area of the image and also further increase the path length of the photons within the silicon. The structure’s shape forces light to travel a longer path inside the silicon, rather than straight up and down, by splitting up the optical wave path and scattering it. As a result, the light bounces around the structure like a ping-pong ball, increasing the probability of absorption.
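Geometrically, light traveling at an angle θ from vertical traverses d/cos θ of silicon instead of d. The sketch below combines this with the Beer–Lambert estimate from earlier; the thickness, angle, and absorption length are hypothetical or approximate values chosen for illustration:

```python
import math

# Illustrative only: a longer in-silicon path raises absorption probability.
# Thickness, angle, and absorption length are hypothetical/approximate values.

ABSORPTION_LENGTH_UM = 18.0   # rough value for silicon near 850 nm
THICKNESS_UM = 4.0            # hypothetical pixel silicon thickness

def absorbed(path_um: float) -> float:
    return 1.0 - math.exp(-path_um / ABSORPTION_LENGTH_UM)

straight = absorbed(THICKNESS_UM)  # light passing straight down through d
scattered = absorbed(THICKNESS_UM / math.cos(math.radians(60)))  # tilted 60 deg

print(f"straight path:  {straight:.0%} absorbed")
print(f"scattered path: {scattered:.0%} absorbed")

# Tilting the path to 60 deg doubles the traversal length (1 / cos 60 = 2),
# and repeated bounces inside the pixel extend it further still.
```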

Ensuring the precise angles of the structure is critical to the effectiveness of the scattering optical layer. If the angle is wrong, a photon could be reflected into the next pixel, rather than back into the same pixel as intended.


Conclusion

Through close collaboration with its foundry partner, OmniVision developed its Nyxel NIR technology, which solves the performance issues that often plague NIR development. By combining thicker silicon with extended DTI and managing the surface texture with an optical scattering layer, sensors with Nyxel technology achieve up to a 3x improvement in QE compared with previous-generation OmniVision sensors with NIR capabilities, resulting in NIR sensitivity at 850 nm without degradation of other image-quality metrics.

The outcome is compelling. Sensors equipped with this technology offer better image quality with longer-distance image detection in extremely low-light conditions and require less light-source input and power to operate, thereby addressing new requirements for advanced machine vision in AR/VR, ADAS, and night-vision applications.


References

  1. B. Morey, Machine Vision Inspection Speeds Up Automotive Lines, AdvancedManufacturing.org, May 2015. http://advancedmanufacturing.org/machine-vision-inspection-speeds-automotive-lines/
  2. D. Martin, A Practical Guide to Machine Vision Lighting, National Instruments, January 2017. http://www.ni.com/white-paper/6901/en/#toc4
  3. J. Briggs, How In-dash Night-vision Systems Work, HowStuffWorks. http://electronics.howstuffworks.com/gadgets/automotive/in-dash-night-vision-system3.htm
  4. Introduction to Modulation Transfer Function, Edmund Optics. https://www.edmundoptics.com/resources/application-notes/optics/introduction-to-modulation-transfer-function/


About the author

Lindsay Grant joined OmniVision in 2016 as the Vice President of Process Engineering. Prior to joining OmniVision, he was Technical Director and Fellow in the Image Sensors Division at STMicroelectronics. Mr. Grant worked on the development of image-sensor pixel and process technologies for 16 years, pioneering STMicroelectronics’ research and development activities for single-photon avalanche diode devices. Before that, he worked for 12 years at Seagate Technology in device and process engineering positions. Mr. Grant has authored over 50 technical papers in the image-sensors field and was Chairman of the London Image Sensors conference for six years, from 2008 to 2014. He received his B.S. degree in Physics from St. Andrews University in 1984.