Reconstructing Whole Worlds from Photos
This is a Printer Friendly Article on the Virtual Worldlets Network.
Author Information
Article by Virtual Worldlets Network
Copyright 18/09/2009

There has been a lot of work of late on reconstructing 3D environments from photographs. One approach developed for space-agency use, however, leaves all the others in the dust: it allows 3D terrain to be created from thousands of photographs in just a matter of minutes.

Using 3D stereoscopy, and the depth perception such images make possible, the new approach, developed at the Centre for Machine Perception of the Czech Technical University in Prague under the supervision of Tomas Pajdla, requires that all source images be stereoscopic. Beyond that, it works with any image source, terrestrial or otherworldly.

Because it combines multiple images into a single model, the result (one example is pictured above) can be viewed from any angle and used as terrain in a suitably configured virtual environment, giving astronomers a realistic, immersive impression of the landscape. Converting the photographs into a series of 3D models in this way also lets such landscapes serve as actual VR terrain.

"The feeling of 'being right there' will give scientists a much better understanding of the images. The only input we need are the captured raw images and the internal camera calibration. After minutes of computation on a standard PC, a three dimensional model of the captured scene is obtained," said Dr Michal Havlena who presented the results of the work.

As to the technical details, the results are achieved by combining the stereo image pairs from the twin cameras on the science vessel or rover into digital elevation models (DEMs); the two NASA explorer rovers, Spirit and Opportunity, carry naturally stereoscopic camera rigs. A great many processes now exist that can combine separate stereoscopic images into a single image, and any of them can be used with no difference in the outcome, provided the same settings are then used for all shots of the same terrain.

After this, a neural network determines the image order. If the images were not sequenced properly, or a transmission error occurred (common with deep-space telemetry), they may arrive out of order, and it is the AI's job to look for similarities between images and fit the jigsaw puzzle together. This is no mean feat, and requires considerable computing power, placing it out of the realm of home PCs for the moment. If the images arrive in order, of course, this step is unnecessary.

Up to a thousand features are detected in each image and "translated" into visual words. Then, starting from an arbitrary image, the next image is chosen as the one that shares the highest number of visual words with the previous image. The process is repeated until all the similar images are grouped together, and the groups together form a map of the area.
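The greedy ordering described above can be sketched in a few lines of Python. Note this is an illustrative sketch only: the "visual words" here are toy string sets standing in for real quantised image descriptors, and the function names are invented for this example.

```python
def order_images(word_sets, start=0):
    """Greedily order images so that each image shares the most
    visual words with the one placed before it (toy sketch)."""
    remaining = set(range(len(word_sets)))
    order = [start]
    remaining.discard(start)
    while remaining:
        prev = word_sets[order[-1]]
        # pick the unused image with the largest visual-word overlap
        nxt = max(remaining, key=lambda i: len(prev & word_sets[i]))
        order.append(nxt)
        remaining.discard(nxt)
    return order

# toy "images", each described by the visual words it contains
images = [
    {"rock", "crater", "dust"},    # 0
    {"dust", "ridge", "shadow"},   # 1
    {"crater", "dust", "ridge"},   # 2
]
print(order_images(images))  # → [0, 2, 1]
```

Image 2 shares two words with image 0 but image 1 shares only one, so the greedy pass places 2 next, which is the behaviour the grouping step relies on.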

The second step of the process is key: a "structure-from-motion" computation, similar to that carried out in modern AR but not quite so complex. The camera position and orientation for each image is worked out by analysing every structure that appears across the images of its group, using five or more features that correspond between photographs. With a large number of photographs it is possible to work out, accurately, the camera position and rotation relative to an object in one photograph.

Once this is achieved, that photograph can be used to locate the object and work out the exact camera position for the next one. When that feature has been located in all photographs, it is easier to work out the next feature, and so on until the whole terrain is covered.
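Once the camera positions for two views are known, a matched feature can be placed in 3D by linear triangulation, a standard building block of structure-from-motion pipelines. A minimal numpy sketch, with made-up camera matrices rather than real rover calibration data:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # the 3D point is the right null vector of A (smallest singular value)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# two simple cameras: one at the origin, one shifted 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])        # ground-truth 3D point
x1 = P1 @ np.append(X_true, 1.0)          # project into each view
x2 = P2 @ np.append(X_true, 1.0)
x1, x2 = x1[:2] / x1[2], x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))        # recovers ≈ [0.5, 0.2, 4.0]
```

With noise-free projections the original point is recovered exactly; in a real pipeline the same idea runs over thousands of matched features per image pair.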

The last and most important step is the so-called "dense 3D model generation" of the captured scene, which essentially creates and fuses depth maps of the Martian surface. To do this, the model uses the intensity disparities (parallaxes) between images taken at two distinct camera positions, which were identified in the second step.
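The relation underlying this dense step is simple: for a rectified stereo pair with focal length f (in pixels) and camera baseline B, a pixel's depth is Z = f·B/d, where d is its disparity. A toy sketch, with illustrative numbers rather than real rover calibration values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in metres of a pixel from its stereo disparity (Z = f*B/d)."""
    if disparity_px <= 0:
        return float("inf")  # no measurable parallax: point at infinity
    return focal_px * baseline_m / disparity_px

# e.g. a 1000-pixel focal length, 30 cm baseline, 25 px disparity
print(depth_from_disparity(25.0, 1000.0, 0.30))  # → 12.0 (metres)
```

Running this over every pixel of a stereo pair yields the depth map; fusing the maps from many camera positions gives the dense surface model.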

"The pipeline has already been used successfully to reconstruct a three dimensional model from nine images captured by the Phoenix Mars Lander, which were obtained just after performing some digging operation on the Mars surface," said Dr Havlena.

"The challenge is now to reconstruct larger parts of the surface of the red planet, captured by the Mars Exploration Rovers Spirit and Opportunity," concluded Dr Havlena.

It will be a while yet before the process is simplified enough for home use in mapping terrain from photos or filmstrip, but the basics are in place, and work, on a supercomputer at least.

The work was completed as part of the European Union PRoVisG project.



PRoVisG (Planetary Robotics Vision Ground Processing)