VWN News: 3-D Modeling Advance: A single photo can be reconstructed into a 3-D scene with Make3D

This story is from the category Graphics

Date posted: 07/03/2008

This work builds on the premise of parallax mapping, in which the 3D displacement of a surface is faked by shifting its textures: a heightmap records how far each point protrudes from the surface, and the texture coordinates are then offset according to the angle of that protrusion relative to the observer's line of sight. Mathematically, that is the angle relative to the surface normal, but for the lay-person it is phrased equally well by saying that the bricks in a wall all jut out the same distance, yet as you move your head closer to the wall and look at the bricks further away, they seem to jut out more.
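The texture-shifting idea above can be sketched in a few lines. This is a hypothetical, standalone illustration with invented names; real parallax mapping runs per-fragment in a shader, but the arithmetic is the same.

```python
# Sketch of the classic parallax-mapping offset (hypothetical names;
# real implementations compute this per-fragment in a shader).
def parallax_offset(u, v, height, view_dir, scale=0.05):
    """Shift texture coordinates (u, v) along the view direction.

    height   -- heightmap sample in [0, 1] at (u, v)
    view_dir -- (x, y, z) view vector in tangent space, z toward the viewer
    scale    -- how far surface features appear to protrude
    """
    vx, vy, vz = view_dir
    # A grazing view (small vz) produces a larger shift, which is why
    # bricks seen at a shallow angle appear to jut out more.
    du = height * scale * vx / vz
    dv = height * scale * vy / vz
    return u + du, v + dv

# Head-on view: no shift at all.
print(parallax_offset(0.5, 0.5, 1.0, (0.0, 0.0, 1.0)))
# Grazing view: the same texel is pushed noticeably along x.
print(parallax_offset(0.5, 0.5, 1.0, (0.8, 0.0, 0.2)))
```

The division by the view vector's z component is what makes the effect view-dependent: as the viewing angle becomes shallower, the offset grows, mimicking the behaviour of real protruding geometry.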

This was the first step in the process. Subsequent work has dealt with occlusion problems and incorrect silhouettes at the edges: a wall suddenly looks very strange when bricks that stand out have a flush corner.

Then, in May 2007, researchers at Carnegie Mellon University (CMU) launched Fotowoosh.


Example from Fotowoosh, showing a 3D rendering of a picture of an oncoming train.



Fotowoosh's Web-based system turns flat, two-dimensional images into 3-D scenes using machine-learning algorithms that subdivide a picture into areas representing ground, vertical surfaces, horizontal surfaces, and sky. These surfaces are then folded up into a crude, texture-mapped 3-D model resembling a pop-up book illustration, complete with gaps as you move around it. Not perfect, but a big step forwards.

Fotowoosh's algorithm is limited because it labels the orientation of surfaces as either horizontal or vertical, without taking into account such things as mountain slopes, rooftops, or even staircases. Thus, the depth of a scene has to be inferred from these coarse labels, with gaps filled in by best approximation.
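The pop-up-style geometry implied by this labelling can be caricatured with a simple pinhole-camera model: a ground pixel's depth follows from how far below the horizon it appears, and a vertical surface is "folded up" from the row where it meets the ground. All names and constants here are assumptions for illustration, not Fotowoosh's actual implementation.

```python
# Toy pop-up geometry (hypothetical names and constants). Under a pinhole
# camera at height CAM_HEIGHT above a flat ground plane, a ground pixel
# appearing (y - horizon_y) rows below the horizon lies at depth
# FOCAL * CAM_HEIGHT / (y - horizon_y).
FOCAL = 500.0       # focal length in pixels (assumed)
CAM_HEIGHT = 1.6    # camera height in metres (assumed)

def ground_depth(y, horizon_y):
    """Depth of a ground-plane pixel at image row y (requires y > horizon_y)."""
    return FOCAL * CAM_HEIGHT / (y - horizon_y)

def wall_depth(base_y, horizon_y):
    """A vertical surface is folded up from the row where it meets the
    ground, so every pixel on it inherits that contact row's depth."""
    return ground_depth(base_y, horizon_y)

# A wall whose base sits 100 rows below the horizon stands 8 m away.
print(wall_depth(340, 240))
```

The limitation the article describes falls straight out of this model: anything that is neither flat ground nor a vertical wall, such as a rooftop or a staircase, has no slot in the geometry and ends up approximated by whichever label fits least badly.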

Now, in an effort to do much better, researchers at Stanford University have developed a Web service called Make3D that lets users turn a single two-dimensional image of an outdoor scene into an immersive 3-D model. This gives users the ability to easily create a more realistic visual representation of a photo - one that lets viewers fly around the scene.

To convert the still images into 3-D visualizations, Andrew Ng, an assistant professor of computer science, and Ashutosh Saxena, a doctoral student in computer science, developed a machine-learning algorithm that associates visual cues, such as colour, texture, and size, with certain depth values based on what they have learned from studying two-dimensional photos paired with 3-D data.

For example, says Ng, grass has a distinctive texture that makes it look very different close up than it does from far away. The algorithm learns that the progressive change in texture gives clues to the distance of a patch of grass.

Larry Davis, a professor and chair of the computer-science department at the University of Maryland, in College Park, says that turning a single image into a 3-D model has been a hard and mathematically complicated problem in computer vision, and that even though Make3D gets things wrong, it often produces remarkable results.

To build Make3D's algorithm, the Stanford researchers used a laser scanner to estimate the distance of every pixel or point in a two-dimensional image. That 3-D information was coupled with the image and reviewed by the algorithm so that it could learn to correlate visual properties in the image with depth values. For example, it will learn that a large blue patch is probably part of the sky and farther away, says Saxena. There are thousands of such visual properties that humans unconsciously use to determine depth. The Make3D algorithm [and neural network] learns these kinds of rules and processes images accordingly, he says.
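The training setup described here, pairing visual cues from image patches with laser-measured depths, can be caricatured as a regression problem. The features, data, and model below are invented purely for illustration; Make3D's actual model is far richer.

```python
import numpy as np

# Caricature of learning depth from visual cues (all data invented).
# Each row holds a patch's cues, e.g. (texture energy, blueness, size);
# the target is the laser-measured depth of that patch in metres.
features = np.array([
    [0.9, 0.1, 0.8],   # coarse texture, not blue, large  -> nearby grass
    [0.2, 0.1, 0.3],   # fine texture                     -> distant grass
    [0.1, 0.9, 0.9],   # large blue patch                 -> sky, very far
    [0.7, 0.2, 0.5],
])
depths = np.array([2.0, 40.0, 500.0, 8.0])  # metres (invented)

# Fit a linear map from cues to log-depth, since depths span orders
# of magnitude, with a bias column appended to the features.
X = np.hstack([features, np.ones((len(features), 1))])
w, *_ = np.linalg.lstsq(X, np.log(depths), rcond=None)

def predict_depth(patch_features):
    """Predict depth in metres for a new patch from its visual cues."""
    x = np.append(np.asarray(patch_features, dtype=float), 1.0)
    return float(np.exp(x @ w))
```

This captures the article's point in miniature: once cues are paired with ground-truth ranges, the correlation between appearance and distance, such as texture becoming finer with range, can be learned rather than hand-coded.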

Make3D can also take two or three images of the same location to create a 3-D model similar to Microsoft's Photosynth application. But Photosynth is a more expansive project that uses hundreds of images to reconstruct a scene, and when there are that many images to work with, computing the depth of scenes is not as mathematically complicated and is more accurate, says Derek Hoiem, one of the CMU researchers behind Fotowoosh. Make3D's focus is on processing single images for the general consumer, who might only take one image of a scene, says Ng.

Reference Links:


Make3D

Parallax Mapping Example

A New Dimension for Your Photos (Fotowoosh)

Fotowoosh Site

See the full Story via external site: www.technologyreview.com



Most recent stories in this category (Graphics):

07/02/2017: Complex 3D data on all devices

06/05/2014: U-M paleontologists unveil online showcase of 3-D fossil remains

17/03/2014: 3D X-ray Film: Rapid Movements in Real Time

07/02/2014: Modelling the Dynamics of the Skin

20/01/2014: CCNY Team Models Sudden Thickening of Complex Fluids

12/11/2013: Visualizing the past: Nondestructive imaging of ancient fossils

14/08/2013: Shadows and light: Dartmouth researchers develop new software to detect forged photos

05/08/2013: Seeing depth through a single lens