Drone Uses Wi-Fi for 3-D Through-Wall Imaging

Researchers at UC Santa Barbara have given the first demonstration of three-dimensional imaging of objects through walls using ordinary Wi-Fi signals. “Our proposed approach has enabled unmanned aerial vehicles to image details through walls in 3-D with only Wi-Fi signals,” said Yasamin Mostofi, a professor of electrical and computer engineering at UCSB. “This approach utilizes only Wi-Fi RSSI measurements, does not require any prior measurements in the area of interest and does not need objects to move to be imaged.”


The proposed methodology and experimental results were presented at the Association for Computing Machinery/Institute of Electrical and Electronics Engineers (ACM/IEEE) International Conference on Information Processing in Sensor Networks (IPSN).



In their experiment, two autonomous octocopters take off and fly outside an enclosed, four-sided brick house whose interior is unknown to the drones. While in flight, one copter continuously transmits a Wi-Fi signal, the received power of which is measured by the other copter for the purpose of 3-D imaging.


After traversing a few proposed routes, the copters apply the imaging methodology developed by the researchers to reveal the area behind the walls and generate high-resolution 3-D images of the objects inside. The 3-D image closely matches the actual area.


“High-resolution 3-D imaging through walls, such as brick walls or concrete walls, is very challenging, and the main motivation for the proposed approach,” said Chitra R. Karanam, the lead Ph.D. student on the project.


The researchers’ approach to enabling 3-D through-wall imaging utilizes four components. First, they proposed robotic paths that capture the spatial variations in all three dimensions as much as possible while maintaining the efficiency of the operation.


They also modeled the three-dimensional unknown area of interest as a Markov random field to capture the spatial dependencies, and utilized a graph-based belief propagation approach to update the imaging decision of each voxel (the smallest unit of a 3-D image) based on the decisions of the neighboring voxels.
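The neighbor-coupled voxel update can be illustrated with a toy sketch. This is not the authors' implementation: a simple ICM-style update with an Ising smoothness term stands in for their graph-based belief propagation, and the grid size, coupling weight `beta`, and evidence values are all invented for the example.

```python
import numpy as np

def neighbors(idx, shape):
    """Yield the 6-connected neighbors of a voxel index in a 3-D grid."""
    for d in range(3):
        for step in (-1, 1):
            j = list(idx)
            j[d] += step
            if 0 <= j[d] < shape[d]:
                yield tuple(j)

def mrf_update(evidence, beta=0.5, n_iters=5):
    """evidence: per-voxel score favoring 'occupied' (positive = occupied).
    Each voxel's binary decision is refreshed from its own evidence plus
    an Ising-style vote from its neighbors' current decisions."""
    label = (evidence > 0).astype(int)        # initial per-voxel decision
    for _ in range(n_iters):
        for idx in np.ndindex(evidence.shape):
            # neighbors vote +1 if occupied, -1 if empty
            vote = sum(2 * label[j] - 1 for j in neighbors(idx, evidence.shape))
            label[idx] = int(evidence[idx] + beta * vote > 0)
    return label

# A noisy 3x3x3 evidence cube: strong support at the center voxel and a
# weak spurious detection at one corner, which the prior should suppress.
ev = -np.ones((3, 3, 3))
ev[1, 1, 1] = 4.0
ev[0, 0, 0] = 0.5                             # isolated noise voxel
img = mrf_update(ev)
print(img[1, 1, 1], img[0, 0, 0])             # prints: 1 0
```

The smoothness term is what lets a well-supported voxel survive while an isolated noisy detection is voted down by its empty neighbors, which is the role the spatial-dependency model plays in the researchers' pipeline.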


Third, they used a linear wave model to approximate the interaction of the transmitted wave with the area of interest.
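One common way to realize such a linear model is to treat each RSSI loss as a sum of per-voxel attenuations along the transmitter-receiver line. The sketch below is an illustrative assumption, not the paper's exact formulation; the function name, the binary ray weighting, and the grid dimensions are invented here.

```python
import numpy as np

def ray_voxel_row(tx, rx, grid_shape, n_samples=200):
    """Binary row of the measurement matrix: 1 where the TX-RX line
    passes through a voxel, 0 elsewhere (uniform sampling of the line)."""
    row = np.zeros(grid_shape)
    for t in np.linspace(0.0, 1.0, n_samples):
        p = (1 - t) * np.asarray(tx, float) + t * np.asarray(rx, float)
        idx = tuple(np.clip(p.astype(int), 0, np.array(grid_shape) - 1))
        row[idx] = 1.0
    return row.ravel()

# Tiny 4x4x4 voxel grid with one attenuating object at voxel (2, 2, 2).
grid = (4, 4, 4)
x_true = np.zeros(grid)
x_true[2, 2, 2] = 5.0                  # 5 dB of attenuation at that voxel

tx, rx = (0, 2, 2), (3, 2, 2)          # line of sight crosses the object
a = ray_voxel_row(tx, rx, grid)
loss_db = a @ x_true.ravel()           # predicted RSSI drop for this link
print(loss_db)                         # prints: 5.0
```

Because each measurement is linear in the unknown voxel values, stacking many such rows from different drone positions yields a standard linear inverse problem, which is what makes the sparse-recovery step possible.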


Last, they took advantage of the compressibility of the information content to image the area with a very small number of Wi-Fi measurements (less than 4 percent).
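The sparse-recovery idea can be sketched with a standard routine. This is a toy stand-in, not the authors' solver: orthogonal matching pursuit substitutes for whatever sparse reconstruction they used, a random Gaussian matrix substitutes for the actual wave model, and the toy uses 25 percent of measurements rather than the sub-4-percent sampling reported.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_meas, sparsity = 100, 25, 3   # far fewer measurements than voxels

# Ground truth: a sparse "image" with only 3 occupied voxels.
x_true = np.zeros(n_voxels)
x_true[[10, 40, 77]] = [3.0, 5.0, 2.0]

A = rng.standard_normal((n_meas, n_voxels)) / np.sqrt(n_meas)
y = A @ x_true                            # noiseless linear measurements

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then least-squares fit over the chosen support.
support, residual = [], y.copy()
for _ in range(sparsity):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n_voxels)
x_hat[support] = coef
print(sorted(support))                    # indices of recovered voxels
```

Compressibility is what makes this work: because most voxels are empty, far fewer measurements than voxels suffice to pin down the image, mirroring the paper's ability to image with a very small fraction of Wi-Fi measurements.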


