VWN News: Seeing depth through a single lens

This story is from the category Graphics

Date posted: 05/08/2013

Researchers at the Harvard School of Engineering and Applied Sciences (SEAS) have developed a way for photographers and microscopists to create a 3D image through a single lens, without moving the camera.

Published in the journal Optics Letters, this improbable-sounding technology relies only on computation and mathematics—no unusual hardware or fancy lenses. The effect is the equivalent of seeing a stereo image with one eye closed.

That's easier said than done, as principal investigator Kenneth B. Crozier, John L. Loeb Associate Professor of the Natural Sciences, explains.

"If you close one eye, depth perception becomes difficult. Your eye can focus on one thing or another, but unless you also move your head from side to side, it's difficult to gain much sense of objects' relative distances," Crozier says. "If your viewpoint is fixed in one position, as a microscope would be, it's a challenging problem."

Offering a workaround, Crozier and graduate student Antony Orth essentially compute how the image would look if it were taken from a different angle. To do this, they rely on the clues encoded within the rays of light entering the camera.

"Arriving at each pixel, the light's coming at a certain angle, and that contains important information," explains Crozier. "Cameras have been developed with all kinds of new hardware—microlens arrays and absorbing masks—that can record the direction of the light, and that allows you to do some very interesting things, such as take a picture and focus it later, or change the perspective view. That's great, but the question we asked was, can we get some of that functionality with a regular camera, without adding any extra hardware?"

The key, they found, is to infer the angle of the light at each pixel rather than measuring it directly (something standard image sensors and film cannot do). The team's solution is to take two images from the same camera position, focused at different depths. The slight differences between the two images provide enough information for a computer to mathematically create a brand-new image, as if the camera had been moved to one side.
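The article does not give the underlying mathematics, so the following is only a minimal illustrative sketch of the two-image idea, under an assumed transport-of-intensity-style formulation: the difference between the two focus slices approximates a depth derivative of the intensity, a Poisson solve recovers a per-pixel "moment" (angle) field, and shifting pixels along that field mimics moving the camera sideways. The function name, normalization, and resampling scheme are all hypothetical, not taken from the researchers' published method.

```python
import numpy as np

def synthesize_shifted_view(img_near, img_far, dz, shift):
    """Illustrative sketch of viewpoint synthesis from two focus slices.

    img_near, img_far: 2D float arrays of the same scene focused at
    depths z and z + dz.  shift: synthetic viewpoint displacement in
    pixels.  The Poisson-solve formulation below is an assumption; the
    article only says two focus slices carry enough information.
    """
    I = 0.5 * (img_near + img_far)       # mean intensity image
    dI_dz = (img_far - img_near) / dz    # finite-difference depth derivative

    # Solve laplacian(phi) = -dI/dz in Fourier space; grad(phi)/I then
    # serves as a proxy for the light field's first angular moment.
    H, W = I.shape
    fy = np.fft.fftfreq(H).reshape(-1, 1)
    fx = np.fft.fftfreq(W).reshape(1, -1)
    k2 = (2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    k2[0, 0] = 1.0                       # avoid division by zero at DC
    phi_hat = np.fft.fft2(-dI_dz) / (-k2)
    phi_hat[0, 0] = 0.0                  # zero-mean potential
    phi = np.real(np.fft.ifft2(phi_hat))

    My, Mx = np.gradient(phi)            # gradient along y, then x
    Mx /= I + 1e-8                       # per-pixel angle proxy
    My /= I + 1e-8

    # Resample: displace each pixel along its moment field to imitate
    # a sideways camera move (nearest-neighbor, clamped at the border).
    ys, xs = np.indices(I.shape)
    src_y = np.clip(np.round(ys - shift * My).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs - shift * Mx).astype(int), 0, W - 1)
    return I[src_y, src_x]
```

Rendering the same scene at two opposite values of `shift` and alternating between the results would give the wobble-style stereo animation the article describes.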

By stitching these two images together into an animation, Crozier and Orth provide a way for amateur photographers and microscopists alike to create the impression of a stereo image without the need for expensive hardware. They are calling their computational method "light-field moment imaging"—not to be confused with "light field cameras" (like the Lytro), which achieve similar effects using high-end hardware rather than computational processing.

See the full Story via external site: www.seas.harvard.edu



Most recent stories in this category (Graphics):

07/02/2017: Complex 3D data on all devices

06/05/2014: U-M paleontologists unveil online showcase of 3-D fossil remains

17/03/2014: 3D X-ray Film: Rapid Movements in Real Time

07/02/2014: Modelling the Dynamics of the Skin

20/01/2014: CCNY Team Models Sudden Thickening of Complex Fluids

12/11/2013: Visualizing the past: Nondestructive imaging of ancient fossils

14/08/2013: Shadows and light: Dartmouth researchers develop new software to detect forged photos

05/08/2013: Seeing depth through a single lens