Seeing Through Walls: Heat Vision AR Leaves the Movies

We have all seen that type of film: the suspect is hiding in an old building full of derelicts, and the authorities – be they good or bad – have to find them. So out comes a fancy AR detector of some sort, usually based on heat vision. This allows the operator to scan the building, floor by floor from the outside, looking for warm bodies that might be people. Enforcement agents can then be directed to each location, so no possibility is missed.

It is a fantastic technique, as useful for search and rescue as it is for fugitive finding. The only problem has been its completely fictitious nature.

Of course, that is starting to change now, or this article would not exist. The real version of this technology has thrown up some interesting complications that highlight the difficulty of working with such systems, including an unexpected distortion of data. Still, the usefulness outweighs the problems – and there are some potential ways to mitigate the problems that do occur, as well.

A 3-D version of a spade (left) is first rendered in 2-D from the thermal infrared energy emitted by the object. The different colors represent areas of higher and lower temperature as measured by the sensor. The image on the right is the optically reconstructed object. The gray ground plane was added to provide context.

The story starts with the work of a multidisciplinary research team from four separate universities: the Massachusetts Institute of Technology (MIT), Harvard University, the University of Wisconsin, and Rice University, all four located in the US. Drawing inspiration from the naturally occurring, erratic behaviour of photons zooming around inside a room, bouncing off objects and walls (and escaping out of any apertures from that room), the researchers sought to find out whether they could use that information to image what was inside the room, without actually entering it.

You can think of it as raycasting – the technique of tracing rays of light back to their source (and any objects they reflect off) to compute lighting in VR – applied to our physical world. The basic science is essentially the same.

In fact, as lead author Otkrist Gupta, an MIT graduate student, put it, it is exactly the same: "Imagine photons as particles bouncing right off the walls and down a corridor and around a corner—the ones that hit an object are reflected back. When this happens, we can use the data about the time they take to move around and bounce back to get information about geometry."
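The arithmetic behind Gupta's description is ordinary time-of-flight geometry: the longer a photon's round trip, the further it travelled, with the speed of light as the conversion factor. Below is a minimal Python sketch of that relationship; the relay-wall distance, the 20-nanosecond arrival time, and the even split of the unknown path between outbound and return legs are all invented here for illustration, not taken from the experiment.

```python
# Minimal time-of-flight sketch. All numbers below are illustrative,
# not data or parameters from the actual experiment.

C = 299_792_458.0  # speed of light, metres per second

def total_path(round_trip_seconds):
    """Total distance a photon travelled before returning to the sensor."""
    return C * round_trip_seconds

def hidden_depth(round_trip_seconds, relay_distance):
    """Crude depth estimate for a laser -> wall -> object -> wall -> sensor path.

    Assumes the photon bounced off a relay wall at a known distance on both
    the outbound and return legs, and that the remaining flight is split
    evenly between reaching the hidden object and coming back from it.
    """
    remaining = total_path(round_trip_seconds) - 2 * relay_distance
    return remaining / 2

# A photon returning after 20 nanoseconds has flown roughly 6 metres in total.
t = 20e-9
print(f"total path: {total_path(t):.2f} m")
print(f"hidden object ~{hidden_depth(t, relay_distance=1.5):.2f} m beyond the wall")
```

In practice many millions of such timings, from many laser pulses and sensor positions, are combined to recover geometry rather than a single depth.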

The researchers used an advanced piece of optical equipment known as a streak camera. Typically used for photographing extremely fast events in apparent slow motion, a streak camera can capture the shattering of a pane of glass, or the splash of water droplets, in minute detail. They are even fast enough to capture explosions and nuclear processes taking thousandths of a second – that is why they were first developed. A high-end streak camera has a potential exposure rate of one billion frames per second. It is not exactly portable equipment, but it is ideal for this purpose.

Combined with an ultrafast laser – pulsing on and off several hundred million times per second – as a photon source, the research team had everything they required to generate photons, fire them into an enclosed space from outside, and track the returning photon trails optically. In other words, everything required to scan a mostly enclosed space of any shape or size, and build up a picture of what is within, without actually entering that space.

There are many potential applications for this technology. Among the simplest and most obvious are disaster recovery situations. "Say you have a house collapsing and need to know if anyone is inside, our technology would be useful. It's ideal for use in nearly any disaster-type situation, especially fires, in which you need to find out what's going on inside and around corners—but don't want to risk sending someone inside because of dangerous or hazardous conditions. You could use this technology to greatly reduce risking rescue workers' lives," Gupta points out.

It's also quite possible that the technology could be used as a form of non-invasive biomedical imaging to "see" what's going on beneath a patient's skin. That's what the researchers plan to investigate now.

This is a long-haul technology. It is very definitely an augmented reality system, but it is not one that is particularly portable in its current form, nor particularly cost-effective at present. The researchers themselves admit it will most likely be a decade before practical uses are forthcoming, and as we all know, that really means two, at least. However, the basic fact that the technology works is by far the most important aspect. We are not exactly scanning apartment blocks from the outside yet, but we now know it is possible, and we know how to do it. We also have plenty to work on in the form of the pitfalls we are now aware of, and have to work around.

The Pitfalls

One of the wonderful things about these kinds of experiments is that you learn more from the failures than the successes. That the technique actually works is essential, but now that we have a handle on its limitations, we can begin to work around them. Consider the image below:

Three objects – a disk, a triangle, and a square – were used to test the acuity of the imaging technique. The left image reveals the objects as they would appear if directly sampled. The middle image is reconstructed from collected photons and shows a distorted disk and a rounded square; the triangle was rendered most clearly. The image on the right shows the objects as they would be seen from the side. Little spatial information is evident from this perspective.

This is the most severe problem. Because the path the photons take is so erratic, only a fraction of those sent return to the receiver. As a result, any image they produce is often grossly distorted, and if the photons are reflected off at the wrong angle – sideways in this case – little recognisable information may be returned at all.

There are several ways to address this, of course. One is to increase the duration of the photon exposure, building up several billion more frames of data. This comes with a high data-storage and information-processing overhead – putting it out of the range of all but dedicated supercomputers for now.
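To see why piling up frames helps at all: sensor noise that is uncorrelated from frame to frame averages away as roughly one over the square root of the frame count, while storage and processing costs grow linearly. A toy Python sketch of that trade-off, using a synthetic scene and Gaussian noise in place of real streak-camera frames:

```python
# Toy demonstration: averaging more frames buries the noise, at linear cost.
# The "scene" and noise model are synthetic stand-ins, not real sensor data.

import numpy as np

rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[20:40, 20:40] = 1.0           # a bright hidden object

def noisy_frame(scene, noise_sigma=2.0):
    """One simulated capture: the signal buried in sensor noise."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

for n_frames in (1, 100, 10_000):
    stack = np.mean([noisy_frame(scene) for _ in range(n_frames)], axis=0)
    err = np.abs(stack - scene).mean()
    print(f"{n_frames:>6} frames -> mean error {err:.3f}")
# Error falls roughly as 1/sqrt(N), but the data volume grows as N –
# the storage and processing overhead described above.
```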

Another is to compare one rendered image with another rendered a few minutes earlier, and clean up the result by cross-comparing the two. This is fine if nothing moves in the interim. But in time-sensitive search and rescue uses, or in pursuit of a suspect, waiting around for a second capture is unlikely to be an option.
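A rough Python sketch of the cross-comparison idea, on synthetic data: pixels where two captures agree are averaged, and pixels where they disagree (noise, or something that moved between captures) are discarded. The scene, noise levels, and agreement threshold here are all invented for illustration.

```python
# Cross-comparing two captures taken at different times (synthetic data).
# Noise uncorrelated between the captures is suppressed; anything that
# moved in the interim would fail the agreement test and be lost.

import numpy as np

rng = np.random.default_rng(1)
truth = np.zeros((32, 32))
truth[10:20, 10:20] = 1.0

recon_a = truth + rng.normal(0, 0.5, truth.shape)   # first capture
recon_b = truth + rng.normal(0, 0.5, truth.shape)   # minutes later

# Average where the captures roughly agree; zero out where they diverge.
agree = np.abs(recon_a - recon_b) < 1.0
cleaned = np.where(agree, (recon_a + recon_b) / 2, 0.0)

print(f"single-capture error: {np.abs(recon_a - truth).mean():.3f}")
print(f"cross-compared error: {np.abs(cleaned - truth).mean():.3f}")
```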

A third option is to use multiple detectors from different positions and network them. Each would receive a portion of the photons leaking from a different exit from the structure, and the detectors could build up a more complete picture in 3D by sharing data once each has processed it. This is the most viable option at present – until processing technology catches up – but it does bear substantial additional equipment costs, as an additional streak camera is required for each vantage point.
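A hedged sketch of the fusion step, again in Python with invented geometry: each simulated detector reconstructs only the columns of the scene its aperture exposes, and averaging each pixel over the detectors that actually observed it yields fuller coverage than any single vantage point could.

```python
# Fusing partial views from several networked detectors (toy geometry).
# Each detector "sees" only part of the hidden scene through its aperture;
# the apertures and noise model are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0

def detector_view(truth, visible_cols):
    """One detector's reconstruction: a noisy, partially occluded slice."""
    view = np.full(truth.shape, np.nan)
    view[:, visible_cols] = truth[:, visible_cols] + \
        rng.normal(0, 0.3, (truth.shape[0], len(visible_cols)))
    return view

views = [
    detector_view(truth, np.arange(0, 14)),    # detector at one aperture
    detector_view(truth, np.arange(10, 24)),   # detector at the doorway
    detector_view(truth, np.arange(20, 32)),   # detector at another aperture
]

# Average every pixel over the detectors that actually observed it.
fused = np.nanmean(np.stack(views), axis=0)
print(f"coverage after fusion: {np.isfinite(fused).mean():.0%}")
print(f"fusion error: {np.abs(np.nan_to_num(fused) - truth).mean():.3f}")
```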

A fourth possibility is to use an array of streak cameras from a single vantage point, but this may offer only a negligible increase in resolution for the same cost as option three.

There is also a secondary concern: if a second photon source is fired into the room, say by a second team using the same equipment, the second ultrafast laser beam will scramble the photon-return calculations for both teams, essentially reducing the received images to meaningless gibberish. Care would have to be taken that only one source device was in use at a time.

Selecting the right imaging technique

As with the VR version, occlusion is a major problem. Only here, instead of figuring out which objects occlude others, you are approaching from the opposite side of the coin: figuring out which imaging technique to use to counter occluded or partially occluded shapes, and to tell when occlusion is taking place. As each technique takes significant processing power to render, the right choice has to be made at the time, or valuable time may be lost.

A small wooden figure of a running man hidden from view is revealed by using two different imaging techniques. In the first, back projection is used to tease out the image from reflected photons of light. In the second, another technique known as sparse reconstruction brings the hidden object into view.
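Of the two, back projection is conceptually the simpler: each photon arrival time constrains the hidden object to lie on an ellipse whose foci are the laser spot and the observed wall point, and summing those constraints over many measurements makes the true location stand out. The Python sketch below illustrates that principle on invented 2D toy geometry; it is a stand-in for the idea, not the paper's actual algorithm or parameters.

```python
# Naive back projection for locating a hidden point in 2D (toy geometry).
# A relay wall lies along y = 0; a laser spot at the origin illuminates it,
# and the streak camera times photons arriving back at several wall points.

import numpy as np

wall_pts = np.array([[-1.0, 0.0], [-0.5, 0.0], [0.5, 0.0], [1.0, 0.0]])
laser_spot = np.array([0.0, 0.0])
hidden_pt = np.array([0.3, 0.8])    # ground truth we pretend not to know

def path_len(v):
    """Laser spot -> candidate point v -> each observed wall point."""
    return np.linalg.norm(v - laser_spot) + np.linalg.norm(wall_pts - v, axis=1)

# Simulated streak data: one arrival-time histogram per observed wall point,
# blurred to mimic the camera's finite time resolution. (Time units chosen
# so that time equals distance.)
times = np.linspace(0.0, 4.0, 400)
true_t = path_len(hidden_pt)
hists = np.exp(-(times[None, :] - true_t[:, None]) ** 2 / (2 * 0.05 ** 2))

# Back projection: score every candidate location by how much measured
# energy sits at the arrival times that location would predict.
ys, xs = np.mgrid[0.1:1.5:80j, -1.0:1.0:80j]
score = np.zeros(xs.shape)
for iy in range(xs.shape[0]):
    for ix in range(xs.shape[1]):
        t_pred = path_len(np.array([xs[iy, ix], ys[iy, ix]]))
        score[iy, ix] = sum(np.interp(t_pred[i], times, hists[i])
                            for i in range(len(wall_pts)))

iy, ix = np.unravel_index(score.argmax(), score.shape)
print(f"recovered hidden point ~ ({xs[iy, ix]:.2f}, {ys[iy, ix]:.2f})")
```

Sparse reconstruction instead exploits the assumption that the hidden scene contains few occupied voxels, trading extra computation for cleaner results when photon returns are scarce.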

References

Seeing through walls: Laser system reconstructs objects hidden from sight

Reconstruction of hidden 3D shapes using diffuse reflections (Abstract)

Reconstruction of hidden 3D shapes using diffuse reflections (PDF, Open Access)
