How To Build An Eight-Bit Thermal Imaging Camera

Sensors Insights by Chris Best

Here in Arizona, there is growing public concern about wrong-way drivers on state highways. Drivers who are distracted or impaired sometimes end up on the wrong side of the road, putting other drivers’ lives in danger. To combat wrong-way driving, the Arizona Department of Transportation is constructing and testing a wrong-way driver thermal detection system, which is designed to detect wrong-way vehicles and alert other drivers and law enforcement officers. Thermal cameras are placed at freeway ramps and along the freeway itself; when a wrong-way driver is detected, the cameras track the vehicle, send its location to law enforcement, and notify other motorists through overhead message boards and illuminated signs with flashing lights.

A thermographic camera, also known as an infrared (IR) camera or thermal imaging camera, uses infrared radiation to create an image that we can see in the visible light spectrum. Thermal cameras were originally developed for military applications during the Korean War. Today, they are used in military, commercial, industrial and personal applications.

Often, these cameras are designed using state-of-the-art microprocessors, 16- or 32-bit microcontrollers (MCUs), or a combination of both. Because I work for Microchip Technology’s 8-bit application group, I wanted to see if it was possible to build an inexpensive, low-resolution thermal camera using an 8-bit microcontroller.



Electromagnetic Radiation

To understand how a thermal camera works, you first need a good understanding of electromagnetic and infrared radiation. All normal matter emits electromagnetic radiation when its temperature is above absolute zero (-273.15°C). This radiation, also known as thermal radiation, represents the conversion of matter’s thermal energy into electromagnetic energy and may include both visible and infrared radiation.

Visible radiation, or visible light, is the electromagnetic radiation that is visible to the human eye, and it is typically defined as having wavelengths in the range of 400 to 700 nanometers (nm). Infrared radiation is invisible to the human eye and is defined as having wavelengths ranging from 700 nm to 1 millimeter (mm). Thermal radiation emitted by ordinary objects that are in thermodynamic equilibrium with their surrounding environments can be considered black-body radiation. Objects that are near room temperature (25°C) emit thermal radiation in the infrared spectrum.

Black-body objects are idealized physical objects that absorb all incident electromagnetic radiation. Of course, in nature there are no ideal black-body objects; black holes are near-perfect black bodies, since they absorb all radiation that falls into them, but they may not be in perfect thermodynamic equilibrium with their surroundings.

When a black body is in thermal equilibrium (constant temperature), the body emits black-body radiation according to Planck’s Law, which describes the distribution of the electromagnetic radiation’s power in terms of frequency components at a given temperature. In other words, a black-body object that is held at a constant temperature will emit radiation of a specific magnitude and frequency that is dependent only on the object’s temperature, not its shape or composition.
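In symbols, Planck’s law gives the spectral radiance of a black body at absolute temperature T as:

```latex
B_\nu(\nu, T) = \frac{2 h \nu^3}{c^2} \cdot \frac{1}{e^{h\nu / k_B T} - 1}
```

where h is Planck’s constant, c is the speed of light, k_B is Boltzmann’s constant, and ν is the frequency. Note that the object’s shape and composition appear nowhere in the formula; only T does.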

Real-world objects - because true black-body objects don’t physically exist - emit energy at only a fraction of the rate of an ideal black body. This fraction is known as the object’s emissivity and is used to determine the object’s actual effectiveness in emitting thermal radiation. An ideal black-body surface has an emissivity of ‘1’, meaning that all radiation that interacts with the surface is absorbed by the object. Polished silver, on the other hand, has an emissivity of ‘0.02’, which means that almost all the radiation is scattered or reflected from the surface and very little is absorbed.

Infrared radiation is a type of electromagnetic radiation that radiates in wavelengths between 700 nm and 1 mm. These wavelengths are invisible to the human eye; however, they can be felt as heat. For example, the sun emits roughly half of its energy as infrared radiation, and although we can’t see the radiation with the naked eye, the heat can be felt simply by standing in sunlight.


Thermal Camera Components

The 8-bit thermal camera consists of the following three main hardware components:

  • Panasonic Grid-EYE infrared sensor
  • Varitronix COG-C144MVGI-08 graphic display LCD module
  • PIC18F27K42 8-bit microcontroller

Infrared detection is performed using the Grid-EYE sensor. The Grid-EYE is an 8 x 8 pixel (64 total) infrared array sensor designed using Micro-Electro-Mechanical Systems (MEMS) thermopile technology. The thermopile array consists of a series of free-standing thermocouples. Each thermocouple consists of two thin wires of dissimilar materials. The two wires are joined together at one end, known as the hot junction, with the other ends connected to a heat sink.

The hot junction is connected to a very thin common IR absorption membrane, which is shared by all 64 thermocouples. If there is a difference in temperature between the two junctions, a tiny Electromotive Force (EMF) voltage is created, which can be measured and converted into temperature. This phenomenon is referred to as the Seebeck effect. The sensor communicates via the I2C bus operating at a maximum 400 kHz. The sensor also features an on-board gain amplifier, Analog-to-Digital Converter (ADC) and a thermistor (see figure 1).

Fig. 1:  The basic block diagram of the Grid-EYE sensor’s main internal components is shown here.

The sensor begins its operation by absorbing infrared thermal energy across its 60° field of view. The IR energy passes through an integrated silicon lens that acts as an optical filter, allowing absorption of IR energy for wavelengths between 5 and 13 μm (far infrared region). Once the IR energy passes through the lens, it is absorbed by each of the thermopile array’s 64 sensing elements. Each of the sensing elements converts the IR energy it absorbed into an analog output signal.

The analog voltage is typically in the low millivolt range, which may be too small to accurately detect small changes in energy. To correct this, each sensing element’s analog output is passed through a gain amplifier, effectively increasing the resolution of each element. Once each signal is amplified, it is passed through the ADC where it is referenced against the on-board thermistor’s temperature value and converted into a 12-bit (11 bits + 1 sign bit) digital equivalent. Each of the 64 pixels has its own unique temperature register, which holds the converted digital temperature equivalent. These temperature registers can be read by a microcontroller over the I2C bus.
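Combining a pixel’s two register bytes into a temperature can be sketched in a few lines. The function name here is illustrative (not from AN2773), and it uses the sensor’s 0.25 °C-per-LSB scaling for the 12-bit two’s-complement pixel value:

```c
#include <stdint.h>

/* Combine a Grid-EYE pixel's low and high register bytes into degrees
   Celsius. Each pixel is a 12-bit two's-complement value (11 bits +
   1 sign bit) at 0.25 C per LSB. Function name is illustrative. */
float gridEyePixelToCelsius(uint8_t lowByte, uint8_t highByte)
{
    int16_t raw = (int16_t)(((uint16_t)highByte << 8) | lowByte);
    if (raw & 0x0800)       /* sign bit of the 12-bit value set?  */
        raw -= 0x1000;      /* ...then sign-extend into 16 bits.  */
    return raw * 0.25f;
}
```

For example, a raw value of 100 (0x064) corresponds to 25.0 °C.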

The LCD module features Color Super-Twist Nematic (CSTN) LCD technology, which uses passive-matrix addressing. In a CSTN LCD, row and column signals are used to directly address a pixel, and the pixel must maintain its ON/OFF state without the use of a switch or capacitor. Each visual pixel is divided into three physical sub-pixels, and each sub-pixel uses either a red, blue or green filter to display color. The display uses a white LED backlight whose light passes through each sub-pixel.

The intensity of each sub-pixel’s output is controlled by the display’s LCD driver, creating up to 65,536 unique colors. The driver is a Samsung S6B3306 LCD driver, which is integrated into the display module. The driver simplifies the interface between a microcontroller and the display, which means that fewer connections are necessary.

The LCD is configured in 65k color mode. In 65k color mode, the 16-bit word is divided into the standard RGB565 color format. The RGB565 format is a 16-bit color scheme in which bits <15:11> (5 bits) define the red intensity, bits <10:5> (6 bits) define the green intensity, and bits <4:0> (5 bits) define the blue intensity (see figure 2). The RGB565 format gives an extra bit to the green color since human vision is more sensitive to the green wavelengths of the visible light spectrum.

Fig. 2: This is showing the 16-bit word divided into the standard RGB565 color format.
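Packing a color into RGB565 is a matter of shifting and masking. A typical helper (an illustration, not code from AN2773) that converts 8-bit-per-channel RGB into the format above looks like:

```c
#include <stdint.h>

/* Pack 8-bit-per-channel RGB into RGB565: bits <15:11> red,
   <10:5> green, <4:0> blue. The low bits of each channel are
   simply truncated. */
uint16_t rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```

Pure red maps to 0xF800, pure green to 0x07E0, and pure blue to 0x001F.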

The PIC18F27K42 microcontroller is used to read the temperature data from the sensor, perform the image processing, and transmit the color data to the LCD. The following peripherals were used in this camera:

  • Timer1
  • Direct Memory Access (DMA)
  • I2C
  • SPI

Timer1 is a 16-bit incrementing counter that the thermal camera application uses to generate a 15-second delay. When the camera is first powered on and the Grid-EYE sensor has been configured for use, the sensor requires 15 seconds to stabilize. Rather than calling a ‘delay’ function, which suspends program execution for the entire delay cycle, the application polls Timer1. Since Timer1 runs in the background, code execution continues, leaving the core free to work on other tasks during the 15-second window.
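The polled-timer pattern can be sketched as follows. Here a millisecond tick counter, g_ticks_ms, stands in for the Timer1 hardware count (the names and update mechanism are hypothetical, not the AN2773 implementation); on the real part the count would be derived from Timer1 and its overflows:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical millisecond tick counter standing in for Timer1. */
uint32_t g_ticks_ms;

static uint32_t delay_start_ms;

/* Record the moment the Grid-EYE finished being configured. */
void start_stabilization_delay(void)
{
    delay_start_ms = g_ticks_ms;
}

/* Polled from the main loop; returns true once the sensor's
   15-second stabilization window has elapsed. Unsigned subtraction
   keeps the comparison correct across counter wraparound. */
bool stabilization_complete(void)
{
    return (g_ticks_ms - delay_start_ms) >= 15000u;
}
```

The main loop keeps running (for example, driving the splash image described below) and simply checks stabilization_complete() each pass.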

The Direct Memory Access (DMA) module allows data transfer between the memory regions of the PIC microcontroller without any CPU intervention. The DMA eliminates the need for CPU handling of interrupts intended for tracking data transfers, allowing the CPU to carry out other tasks while transfers are taking place. The camera uses the DMA to transfer an image file, stored in program memory, to the LCD during the Grid-EYE sensor’s required 15-second stabilization delay.

The I2C module provides a synchronous serial interface between the microcontroller and other I2C-compatible devices. The I2C module is used to configure and read temperature data from the Grid-EYE sensor and operates at a bus speed of 100 kHz. Reading the sensor’s pixel data requires a block read of the pixel registers. Each pixel contains a 12-bit temperature value broken into two individual bytes, and since there are a total of 64 pixels, the I2C performs a block read of 128 bytes.

Luckily, the pixel data region is configured sequentially, meaning that the I2C can transmit a single slave address, followed by a single register address, but will receive all 128 bytes in a single transaction. After each pixel register is read, the sensor automatically points to the next register, so there is no need to start a new communication packet each time a pixel register is read.
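The auto-incrementing block read can be modeled in a few lines. The in-memory regmap below stands in for the sensor and the I2C bus, and the 0x80 base address is the Grid-EYE’s first pixel register as given in its register map (an assumption here, not taken from AN2773’s code):

```c
#include <stdint.h>
#include <stddef.h>

#define PIXEL_BASE_REG 0x80u   /* first pixel register (assumed from datasheet) */
#define PIXEL_BYTES    128u    /* 64 pixels * 2 bytes each */

/* Model of the block read: one start address is "sent", then all 128
   bytes are read back while the sensor auto-increments its internal
   register pointer. regmap stands in for the I2C transaction. */
void read_pixels(const uint8_t regmap[256], uint8_t out[PIXEL_BYTES])
{
    uint8_t addr = PIXEL_BASE_REG;   /* register address transmitted once */
    for (size_t i = 0; i < PIXEL_BYTES; i++)
        out[i] = regmap[addr++];     /* pointer advances automatically */
}
```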

The PIC18F27K42’s SPI module is used to configure and write color information to the LCD. The module is configured in transmit-only mode at an SCK speed of 8 MHz. Transmit-only configuration allows one-way transfers from the master to the slave device without the need for the master to read its SDI input. Each image frame is composed of 17,434 16-bit words, which means the SPI must transmit 34,868 8-bit bytes per frame.

As one can see, saving even one instruction cycle each time the SPI writes a byte of data would amount to 34,868 saved instruction cycles per frame, which means the SPI can write its data that much quicker. This helps prevent image lag from frame to frame.

Once the PIC microcontroller has read the temperature data from the sensor, it must perform image processing to create the image that is transmitted to the LCD. The image processing software uses the sensor data to create an image based on the 64 pixels contained in the sensor. If we were to observe this 64-pixel array on the 1.44-inch LCD, the image would be too small to see. To properly view the image, it must be expanded.

Linear interpolation is the process of finding an unknown value between two known values on a line. In other words, linear interpolation uses the information we already have to fill in the missing information needed to expand the image. For this camera, the bilinear interpolation method is used.

In this case, software takes the values of four neighboring pixels, applies a scaling factor to each of the four pixels, and takes the average of the four scaled pixels and applies that value to the newly created pixel. The scaling factor depends on the distance the newly created pixel is from the original pixel; the further away the new pixel is, the smaller the scale factor (see Figure 3). Linear interpolation approximates an unknown value based on known values, but cannot guarantee that the calculated value is exact. In other words, the unknown area between two pixels may contain the edge of an object, and instead of reproducing the object’s ‘hard’ boundary, interpolation may cause the boundary to be less defined.

Fig. 3: An example of linear interpolation to expand an 8 x 8 data array into a 32 x 32 data array.
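The bilinear expansion can be sketched as below, assuming corners of the source grid map to corners of the destination grid (the exact edge handling in AN2773 may differ). Each output pixel is a distance-weighted average of its four nearest source pixels:

```c
#include <stdint.h>

#define SRC 8
#define DST 32

/* Expand an 8x8 grid to 32x32 by bilinear interpolation. Each new
   pixel's value is a weighted average of the four surrounding source
   pixels, with weights given by the fractional distance to each. */
void bilinear_expand(const float src[SRC][SRC], float dst[DST][DST])
{
    const float scale = (float)(SRC - 1) / (float)(DST - 1);
    for (int y = 0; y < DST; y++) {
        float fy = y * scale;               /* position in source coords */
        int y0 = (int)fy;
        int y1 = (y0 < SRC - 1) ? y0 + 1 : y0;
        float wy = fy - y0;                 /* fractional distance */
        for (int x = 0; x < DST; x++) {
            float fx = x * scale;
            int x0 = (int)fx;
            int x1 = (x0 < SRC - 1) ? x0 + 1 : x0;
            float wx = fx - x0;
            dst[y][x] =
                src[y0][x0] * (1 - wx) * (1 - wy) +
                src[y0][x1] * wx       * (1 - wy) +
                src[y1][x0] * (1 - wx) * wy +
                src[y1][x1] * wx       * wy;
        }
    }
}
```

Note the weights always sum to one, so a uniform input produces a uniform output; only where neighboring pixels differ (object edges) does the averaging soften the boundary.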

If you would like to build this camera yourself, please see the Microchip Application Note AN2773. AN2773 describes the camera components and operation in more detail. I have also posted the entire source code on Microchip’s MPLAB® Xpress Code Examples website.


About the author

Chris Best is an applications engineer for the MCU8 Applications business unit at Microchip Technology Inc., where he develops collateral for new products, such as technical briefs and application notes, demos and training material. Additionally, he supports high-level customer issues and is a class presenter at Microchip MASTERs Conference.

Best joined Microchip in 2013 and was the applications lead for the introduction of the PIC16F183xx microcontroller family. Prior to joining Microchip, he was at NMB Technologies, Inc. as a design engineer for 9 years. Best holds a Bachelor of Science degree in Electrical Engineering Technology from DeVry University and is based at Microchip’s corporate headquarters in Chandler, Arizona.
