The Omni-focus

There are many limitations of modern image capture equipment that separate cameras from the capability of the human eye. Cameras struggle to handle a wide range of light levels simultaneously. For example, if you try to take a picture of both a sunny outdoor environment and a dim room at once, the brighter environment overpowers the duller one. It's why we use flash for stills and high-powered lamps for video: to even out the light levels.
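As an aside, the standard software workaround for this is to blend two exposures of the same scene, weighting each pixel away from its clipped extremes. Here is a minimal sketch of that idea, assuming 8-bit greyscale frames held as NumPy arrays; it illustrates the dynamic-range problem and is not part of the Toronto system:

```python
import numpy as np

def blend_exposures(dark: np.ndarray, bright: np.ndarray) -> np.ndarray:
    """Naive exposure fusion: weight each pixel by how far its value
    sits from the extremes (0 = underexposed, 255 = blown out)."""
    def weight(img):
        # Highest weight for mid-tones, lowest near clipping.
        return 1.0 - np.abs(img.astype(float) / 255.0 - 0.5) * 2.0

    w_dark, w_bright = weight(dark), weight(bright)
    total = w_dark + w_bright + 1e-6          # avoid divide-by-zero
    fused = (dark * w_dark + bright * w_bright) / total
    return fused.astype(np.uint8)
```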

Another difficulty is omni-focus. The human eye is continually scanning a scene, focussing and refocussing as it does so, so that objects both near and far appear in focus all the time. When you sit at your PC and look out the window, the objects on your desk and the shelves around you are in focus, yet so are the trees beyond the window and the clouds in the sky, all at once. This ability to omni-focus has always eluded cameras: the photographer or software program has to choose a single best-fit focus distance.

Thus, when photographic data is integrated into a VR, such as photos of the outside world or a video stream, the result looks like an artificial display source, because it doesn't capture the visual data the way the human eye can.

The first harbingers of change on the focussing front arrived back in May this year, when the University of Toronto announced initial details of a video-camera system that, just like the human eye, was able to focus simultaneously on objects at multiple depths.

How does it do this? Does it continually refocus at each different depth, the same way a human eye does, then reassemble the pieces into a coherent image? Erm, no, it doesn't. What it does is rather more clever, and much better suited to real-time display applications. In other words, a better system than the one used by the eye.

Left: University of Toronto's Omni-focus prototype (a) is compared to a standard camera (b).

The matchstick in both examples is 20cm from the camera, whilst the rather hideous sculpture is six metres away.

With any normal camera (the Sony one was picked at random; any modern camera would do) the photographer has to choose a focal distance. In this case, the hideous sculpture was focused upon, and the nearer objects blur into incomprehensibility.

Had it been the other way round, it would have been a nicer photo, but there would have been blurring in the background instead.

The camera's inventor, Professor Keigo Iizuka, attempted to explain the camera's functionality by saying "the intensity of a point source decays with the inverse square of the distance of propagation. This variation with distance has proven to be large enough to provide depth mapping with high resolution. What's more, by using two point sources at different locations, the distance of the object can be determined without the influence of its surface texture."
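The quoted principle is easy to work through. If two point sources sit a known baseline apart along the viewing axis, the intensity each produces at the object falls off with the inverse square of its own distance, and the surface-texture term cancels in the ratio. A hedged sketch of that calculation (the function and variable names are mine, not the paper's):

```python
import math

def depth_from_two_sources(i_near: float, i_far: float, baseline: float) -> float:
    """Estimate object distance from intensities measured under two
    point sources separated by `baseline` along the viewing axis.

        I_near = k * rho / d**2               (source at distance d)
        I_far  = k * rho / (d + baseline)**2  (source at d + baseline)

    The reflectance term k*rho cancels in the ratio, leaving:
        d = baseline / (sqrt(I_near / I_far) - 1)
    """
    ratio = i_near / i_far
    if ratio <= 1.0:
        raise ValueError("the nearer source must yield the brighter reading")
    return baseline / (math.sqrt(ratio) - 1.0)

# Example: sources 0.5 m apart, near reading 4x the far reading,
# so sqrt(4) - 1 = 1 and the object is 0.5 m away.
print(depth_from_two_sources(4.0, 1.0, 0.5))  # 0.5
```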

In other words, what it is actually doing is quite clever. Using a distance map based roughly on the z-buffer routines of early VRs, the camera marks out a 3D representation of the area visible from its viewpoint, and identifies key points at different depths. Like the eye, it runs the gamut of different focus distances; but unlike the eye, it does this only for one or two selected parts of the image. Once these elements are gauged by range, it uses them as markers to calculate in advance the likely focus distance of every other object in the scene (is it nearer or further away than marker B? If nearer, is it in front of or behind marker A? And so on).

This means that as the camera scans the image, it already has a very good idea in advance of what the focus distance needs to be for each area. This saves a good deal of time in focusing the image, and means the software can work with a variety of existing camera types.
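As a rough illustration of this marker scheme (an assumption about how such a predictor might look, not the published algorithm), a couple of gauged markers can seed an interpolated first guess for everything in between:

```python
import bisect

def predict_focus(markers: list[tuple[float, float]], depth: float) -> float:
    """Given a few (depth, focus_setting) markers already gauged by
    range, interpolate a likely focus setting for a new depth so the
    camera starts near-correct instead of hunting from scratch."""
    markers = sorted(markers)
    depths = [d for d, _ in markers]
    i = bisect.bisect_left(depths, depth)
    if i == 0:
        return markers[0][1]           # nearer than the nearest marker
    if i == len(markers):
        return markers[-1][1]          # further than the furthest marker
    (d0, f0), (d1, f1) = markers[i - 1], markers[i]
    t = (depth - d0) / (d1 - d0)       # linear interpolation between markers
    return f0 + t * (f1 - f0)

# Two gauged markers: 0.2 m and 6 m (the matchstick and the sculpture).
markers = [(0.2, 10.0), (6.0, 80.0)]
print(predict_focus(markers, 3.0))     # a first-guess setting for 3 m
```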

However, Omni-focus at the present time, for understandable reasons, works best with a specifically designed camera, which goes by the mouthful of a name "Divergence-ratio Axi-vision Camera", or Div-cam. What makes the camera special isn't much, really: simply that it was designed with rapid focussing and panning in mind, in order to interface in the best possible way with the control software's algorithm. To do this it actually uses an array of separate mini-cameras, each able to focus independently, greatly speeding up the process compared to a single camera alone.
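A hedged sketch of the array idea, with invented interfaces (the real Div-cam's control hardware and API are not described in enough detail to reproduce): each mini-camera is pre-focused on its own band, and all fire in parallel rather than one camera refocusing serially:

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class MiniCamera:
    """Stand-in for one element of the array; in practice `capture`
    would drive real hardware."""
    focus_m: float

    def capture(self) -> dict:
        # Placeholder: return the focal band this element covered.
        return {"focus_m": self.focus_m, "pixels": ...}

def capture_all_depths(focus_bands: list[float]) -> list[dict]:
    """Fire every mini-camera at once, each pre-focused on its own
    band, instead of refocusing a single camera serially."""
    cameras = [MiniCamera(f) for f in focus_bands]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda cam: cam.capture(), cameras))

frames = capture_all_depths([0.2, 1.0, 6.0, 50.0])
```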

The result is then stitched together, not unlike a panorama, and delivered as a single high-definition frame, either stand-alone or as part of a motion stream.
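That stitching step resembles what photographers call focus stacking: per pixel, keep the sample from whichever frame is sharpest there. A minimal sketch under that assumption, using gradient magnitude as a crude sharpness measure on equally sized greyscale NumPy frames:

```python
import numpy as np

def stack_by_sharpness(frames: list[np.ndarray]) -> np.ndarray:
    """Composite several differently-focused frames into one image by
    picking, per pixel, the frame with the strongest local gradient."""
    stack = np.stack([f.astype(float) for f in frames])  # (n, h, w)
    # Gradient magnitude as a crude per-pixel sharpness measure.
    gy, gx = np.gradient(stack, axis=(1, 2))
    sharpness = np.abs(gx) + np.abs(gy)
    best = np.argmax(sharpness, axis=0)                  # (h, w)
    return np.take_along_axis(stack, best[None], axis=0)[0].astype(np.uint8)
```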

Currently, further production of Div-cam is proceeding via Wilkes Associates, who coincidentally are also located in Toronto.

References

Omni-focus Video Camera to revolutionize industry

Professor Keigo Iizuka

Axi-vision Camera and 3D Television

High-definition real-time depth-mapping TV camera
