Image sensors, once simple visual-perception tools, are finding their way into new, more challenging, and constantly expanding applications. And artificial intelligence (AI) is poised to take the technology even further.
In fact, the global complementary metal-oxide-semiconductor (CMOS) image sensor market is expected to grow by $10.15 billion during 2020-2024, according to the market research firm Technavio.
What’s making this possible is that imaging technology itself is undergoing a revolution, accelerated by demands for smaller packaging and higher resolution.
To achieve those goals, pixel sizes must shrink, which reduces the fill factor (the fraction of each pixel that remains light-sensitive), and reflection and absorption losses in the metal wiring layers above the photodiode must be minimized.
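To make the fill-factor squeeze concrete, here is a back-of-the-envelope sketch in Python. The 0.5 µm "dead border" of wiring and transistors per pixel edge is an assumed, illustrative figure, not a process specification:

```python
# Back-of-the-envelope sketch of why shrinking pixels hurts fill factor.
# The 0.5 um dead border per pixel edge is an assumed, illustrative value.
def fill_factor(pixel_pitch_um: float, dead_border_um: float = 0.5) -> float:
    """Fraction of the pixel area that remains light-sensitive."""
    active = max(pixel_pitch_um - dead_border_um, 0.0)
    return (active / pixel_pitch_um) ** 2

for pitch in (5.0, 3.0, 2.0, 1.4):
    print(f"{pitch} um pixel -> fill factor {fill_factor(pitch):.0%}")
```

Under these assumptions, a 5 µm pixel keeps roughly 81% of its area light-sensitive, while a 1.4 µm pixel keeps only about 41%, which is why the 1 to 2 µm range becomes a tipping point.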
Front-side illumination (FSI or FI) has matured: production costs are falling and yields are high. But as higher-resolution sensors push pixel pitches down toward 2 microns, the wiring above the photodiode increasingly limits the light-sensitive area. Light guides added to the microlenses channel as much light as possible to that limited photosensitive area, but 1 to 2 μm is the tipping point for FSI pixel sizes.
Back-side illumination offers improved performance
Back-side illumination (BSI or BI) offers an interesting alternative, as the control electronics no longer limit the light-sensitive area. However, light must travel farther through the silicon, which can lower quantum efficiency (the ratio of charge carriers collected at the terminal to photons hitting the sensor). The figure below illustrates back- and front-side illumination.
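As a quick illustration of that ratio, the following Python sketch computes quantum efficiency for a hypothetical pixel; the photon and electron counts are made up for the example:

```python
def quantum_efficiency(photons_incident: float, electrons_collected: float) -> float:
    """Quantum efficiency: charge carriers collected per photon hitting the sensor."""
    return electrons_collected / photons_incident

# Illustrative numbers only: 10,000 incident photons, 6,500 collected electrons.
print(f"QE = {quantum_efficiency(10_000, 6_500):.0%}")  # QE = 65%
```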
Manufacturers are adopting and adapting BSI with new sensor design architecture that improves sensitivity, signal-to-noise ratios, and dynamic range for intelligent security and surveillance cameras in low-light conditions.
Sony, for example, reduced the complexity and associated build costs with its 5-megapixel, 1.75 µm BI CMOS sensor, introduced in 2009. The HTC EVO 4G Android smartphone, as well as Apple's iPhone 4, used BI sensors from OmniVision Technologies.
Today, SmartSens Technology Co., Ltd. has developed proprietary, high-performance CMOS image sensor (CIS) designs for surveillance, automotive, machine vision, and consumer electronics such as sports cameras, drones, and automatic vacuums.
BI CMOS sensor performance can be vastly improved for high-speed applications in automotive, aviation, and manufacturing by employing "global" rather than "rolling" shutter designs. Rolling-shutter sensors drive costs down and provide good performance, but their artifacts are easy to spot when subjects move fast or the camera itself is unstable or moving.
Unlike a rolling shutter, which reads an image from top to bottom, a global shutter exposes all pixels at the same time. The extra per-pixel circuitry this requires typically brings more noise, more heat, and less dynamic range; those shortfalls can be minimized, but at dramatically higher cost. Nevertheless, under high-speed or low-light requirements, the results are unmistakable.
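A toy simulation makes the rolling-shutter artifact easy to see. In the sketch below (scene and numbers are illustrative), each row of a rolling-shutter frame samples the scene slightly later than the row above, so a vertical bar moving sideways comes out slanted, while a global-shutter frame keeps it straight:

```python
# Toy comparison of rolling vs. global shutter on a moving vertical bar.
import numpy as np

HEIGHT, WIDTH = 8, 24
BAR_SPEED = 1.0   # columns the bar moves per row-readout interval (assumed)
BAR_START = 4     # bar's column at the instant row 0 is read

def rolling_shutter_frame() -> np.ndarray:
    frame = np.zeros((HEIGHT, WIDTH), dtype=int)
    for row in range(HEIGHT):
        # Each row is read one interval later, so the bar has moved over.
        frame[row, int(BAR_START + BAR_SPEED * row)] = 1
    return frame

def global_shutter_frame() -> np.ndarray:
    frame = np.zeros((HEIGHT, WIDTH), dtype=int)
    frame[:, BAR_START] = 1   # every row sampled at the same instant
    return frame

for name, frame in [("rolling shutter", rolling_shutter_frame()),
                    ("global shutter", global_shutter_frame())]:
    print(name)
    for row in frame:
        print("".join("#" if p else "." for p in row))
```

The rolling-shutter printout shows the bar leaning diagonally, the same skew seen in photos of propellers or fast-moving vehicles.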
AI accelerating adoption of image sensors
AI and its subset, machine learning, can alter the future behavior of intelligent things, which fall into three categories, all requiring "eyes": robots, drones, and autonomous vehicles. In high-speed automotive applications, global shutters make it possible to brake accurately and avoid collisions and obstructions.
Autonomous and conventional vehicles alike call for onboard multiple-exposure, ultra-high-dynamic-range automotive sensors that deliver superior images under high-speed, difficult lighting conditions. Meeting these requirements is only possible with a global shutter and high-speed BSI processing.
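To sketch the multiple-exposure idea, the Python below merges three bracketed frames of the same scene into one high-dynamic-range estimate by trusting well-exposed pixels and dividing out the exposure time. Real automotive sensors do this on-chip with calibrated response curves; the scene, exposure times, and weighting here are illustrative assumptions:

```python
# Minimal sketch of multiple-exposure HDR merging (illustrative values only).
import numpy as np

def merge_exposures(frames: list, times: list) -> np.ndarray:
    """Weighted average of radiance estimates from bracketed 8-bit frames."""
    acc = np.zeros(frames[0].shape, dtype=float)
    weights = np.zeros_like(acc)
    for frame, t in zip(frames, times):
        f = frame.astype(float)
        w = 1.0 - np.abs(f - 127.5) / 127.5  # trust mid-tones, not clipped pixels
        acc += w * (f / t)                   # radiance estimate = value / exposure
        weights += w
    return acc / np.maximum(weights, 1e-6)

# Three bracketed exposures of the same hypothetical 2x2 scene.
short_exp = np.array([[10, 40], [200, 250]], dtype=np.uint8)
mid_exp   = np.array([[40, 160], [255, 255]], dtype=np.uint8)  # bright pixels clip
long_exp  = np.array([[160, 255], [255, 255]], dtype=np.uint8)
hdr = merge_exposures([short_exp, mid_exp, long_exp], times=[1.0, 4.0, 16.0])
print(hdr)  # recovered relative radiance spans a wider range than any one frame
```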
Machines and robots that spot flaws or defects in small parts in high-speed industrial settings, whether for quality assurance, real-time maintenance, or collecting data for virtual machine designs, demand imaging built on global-shutter BSI CMOS technology.
Engineers can then use the collected data to seed generative-design software that explores permutations of design alternatives against a digital twin (a pairing of the virtual and the physical) to discover what works and what doesn't. Someday, this may take place within the machines themselves.
Landing.ai, the startup formed by Silicon Valley veteran Andrew Ng, has developed machine-vision tools for detecting microscopic defects in products such as circuit boards, applying a machine-learning algorithm that needs relatively few sample images. The computer "sees" and processes the information and learns from its observations.
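Landing.ai has not published this exact recipe; as a generic stand-in for defect detection from few samples, the sketch below learns a per-pixel model of "good" boards from a handful of reference images and flags pixels that deviate strongly:

```python
# Generic few-sample anomaly-detection sketch, not Landing.ai's actual method.
import numpy as np

def fit_reference(good_images):
    """Per-pixel mean and standard deviation from known-good grayscale images."""
    stack = np.stack([img.astype(float) for img in good_images])
    return stack.mean(axis=0), stack.std(axis=0) + 1e-3  # avoid divide-by-zero

def defect_mask(image, mean, std, k=5.0):
    """True wherever a pixel sits more than k standard deviations from normal."""
    return np.abs(image.astype(float) - mean) > k * std

# Toy data: twenty known-good 16x16 "boards", then a test board with a blemish.
rng = np.random.default_rng(0)
good = [100 + rng.normal(0, 2, (16, 16)) for _ in range(20)]
mean, std = fit_reference(good)
test = 100 + rng.normal(0, 2, (16, 16))
test[8, 8] += 40                        # injected defect, roughly 20 sigma bright
mask = defect_mask(test, mean, std)
print("defect at (8, 8) flagged:", bool(mask[8, 8]))
print("total pixels flagged:", int(mask.sum()))
```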
Surveillance cameras using BSI CMOS can now record in low-light and starlight conditions, with sensitivity ratings from 0.01 lux (clear, moonlit surroundings) down to 0.001 lux (clear, moonless surroundings) and lower digital noise, making them suitable for city or warehouse security and observation.
Law enforcement agencies in large metropolitan areas increasingly rely on BSI CMOS image sensor technology to record and identify individuals in high-crime locations, often in low-light conditions. AI and deep-learning behavioral systems with live-face biometric data may someday help law enforcement stop crimes before they happen.
Other agencies, including fire, EMS, and rescue, are saving lives and guiding first responders by using surveillance drones equipped with BSI CMOS high-definition cameras. Scanning drones capable of recognizing specified human biometrics can save valuable time, and lives, in searches for missing children or lost hikers.
The Mars 2020 rover mission, part of NASA's Mars Exploration Program, intends to explore the Red Planet extensively for at least one Mars year, roughly 687 Earth days (or until the wheels fall off!). The July 2020 launch sent the rover on a roughly seven-month journey; it is expected to land on Mars on February 18, 2021.
The instrumentation relies heavily on image sensors. Of the 23 cameras aboard, 16 will perform engineering and science tasks. The enhanced engineering cameras provide detailed color images of the terrain for safe navigation, support self-examination of the rover's hardware, guide sample gathering, and help pilot the rover to areas designated as scientific targets.
None of this would be possible without BSI/FSI CMOS image sensors (20 megapixels; 5120 x 3840-pixel images) and integrated software. The Navcams reveal the contours of the immediate surroundings and target areas from the rover's current location, making that data available both to the rover and to its team on Earth.
The HazCams, four in front and two in the rear, detect obstacles and hazards; the rover frequently stops to grab stereo images of the terrain ahead so it can evaluate the difficulty of a route on its own and consider alternates. In addition, the HazCams help guide the robotic arm to collect samples and take measurements without consulting the rover team.
Of the science cams, SHERLOC and WATSON, both mounted on the rover's robotic arm, are especially interesting. SHERLOC searches for organics and minerals altered by water using spectrometers, a laser, and a camera. WATSON captures the "big picture" for SHERLOC's detailed examination of mineral targets, viewing the textures and structures of Martian rocks and of the surface layer of rocky debris and dust on a fine scale; it can also be used by the other arm-mounted instruments.
Nowhere can CMOS AI/ML technology be more beneficial than in healthcare. The same machine-learning technologies that can find pinholes in circuit boards, evaluate Martian terrain, or identify printed characters automatically can help locate and diagnose diseases once overlooked by physicians.
Computer-aided diagnosis uses automated image analysis to extract statistically meaningful features and then applies a pattern classifier to determine the category to which the extracted features belong. This is especially useful in mammography and the detection of certain cancers.
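That two-stage pipeline (feature extraction followed by a pattern classifier) can be sketched in a few lines of Python with scikit-learn. The features and synthetic "lesion" data below are hypothetical stand-ins, not a clinical algorithm:

```python
# Sketch of a computer-aided-diagnosis pipeline: features -> classifier.
# Feature choices and data are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(image: np.ndarray) -> np.ndarray:
    """A few simple statistics standing in for real radiomic features."""
    return np.array([image.mean(), image.std(), image.max(),
                     (image > image.mean() + 2 * image.std()).sum()])

# Synthetic training set: "benign" images are smooth; "suspicious" ones
# contain a small bright region (a crude stand-in for a lesion).
rng = np.random.default_rng(1)
X, y = [], []
for _ in range(200):
    img = rng.normal(100, 5, (32, 32))
    label = int(rng.integers(0, 2))
    if label:
        img[10:14, 10:14] += 60          # injected bright region
    X.append(extract_features(img))
    y.append(label)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
test = rng.normal(100, 5, (32, 32))
test[5:9, 5:9] += 60                     # unseen image with a lesion elsewhere
print("suspicious probability:", clf.predict_proba([extract_features(test)])[0, 1])
```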
Clearly, image sensors, the "eyes of the machine," are providing much more than pictures; they are giving us insight into new realms and realities.