The Omni-focus

There are many limitations of modern image capture equipment that separate cameras from the capability of the human eye. Cameras struggle to operate across a range of different light levels simultaneously. For example, if you try to take a picture of both a sunny outdoor environment and a dim room at the same time, the brighter environment overpowers the duller one - it's why we use flash and high-powered lamps for stills and video respectively, to even out the light levels.

Another difficulty is the omni-focus. The human eye is continually scanning a scene, focussing and refocussing as it does so, so that objects both near and far are in focus all the time. When you sit at your PC and look out the window, the objects at your desk and the shelves around you are in focus, yet so are the trees beyond the window, and the clouds in the sky, all at once. This ability to omni-focus has always eluded cameras. The photographer or software program has to choose a best-fit focus distance.

Thus, when photographic data (photos of the outside world, or a video stream) is integrated into a VR, it looks like an artificial display source, because it doesn't capture the visual data in the same way the human eye can.

The first harbingers of change on the focussing front arrived back in May this year, when the University of Toronto announced initial details of a video-camera system that, just like the human eye, was able to focus simultaneously on objects at multiple depths of focus.

How does it do this? Does it continually refocus at each different depth, the same way a human eye does, then reassemble the pieces into a coherent image? Erm, no, it doesn't. What it does do is rather more clever, and much better suited to real-time display applications. In other words, a better system than the one used by the eye.

Left: University of Toronto's Omni-focus prototype (a) is compared to a standard camera (b).

The matchstick in both examples is 20cm from the camera, whilst the rather hideous sculpture is six metres away.

With any normal camera (the Sony one was picked at random; any modern camera would do), the photographer has to choose a focus distance. In this case, the hideous sculpture was focused upon, and the nearer objects blur into incomprehensibility.

Had it been the other way round, it would have been a nicer photo, but the background would have blurred instead.

The camera's inventor, Professor Keigo Iizuka, attempted to explain the camera's functionality by saying "the intensity of a point source decays with the inverse square of the distance of propagation. This variation with distance has proven to be large enough to provide depth mapping with high resolution. What's more, by using two point sources at different locations, the distance of the object can be determined without the influence of its surface texture."
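The inverse-square principle Professor Iizuka describes can be sketched in a few lines. This is a hypothetical simplification, not the Divcam's actual code: it assumes two equal-power point sources a known baseline apart along the camera axis, so that the unknown surface reflectance (texture) cancels in the intensity ratio and the distance drops out.

```python
import math

def depth_from_two_sources(i_near, i_far, baseline_m):
    """Estimate distance to a surface point from the intensities it
    reflects under two point sources a known baseline apart along the
    camera axis (a toy illustration of the divergence-ratio idea).

    Each intensity decays with the inverse square of distance, and the
    unknown surface reflectance cancels in the ratio i_near / i_far,
    which is why texture has no influence on the result.
    """
    if i_near <= i_far:
        raise ValueError("the nearer source should illuminate more brightly")
    ratio = math.sqrt(i_near / i_far)   # equals (d + baseline) / d
    return baseline_m / (ratio - 1.0)

# A surface 2 m from the near source, with the sources 0.5 m apart:
# intensity is proportional to 1/d^2, so i_near ~ 1/2^2, i_far ~ 1/2.5^2.
d = depth_from_two_sources(1 / 2.0**2, 1 / 2.5**2, 0.5)
# d comes out at 2.0 m, recovering the true distance
```

Note how the absolute brightness never matters, only the ratio; a dark matte surface and a bright glossy one at the same distance give the same answer.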

In other words, what it is actually doing is quite clever. Using a distance map based roughly on the z-buffer routines of early VRs, the camera builds a 3D representation of the area visible from its viewpoint, and identifies key points at different depths. Like the eye, it runs the gamut of different focus distances; but, unlike the eye, it does this only for one or two selected parts of the image. Once these elements are gauged by range, it uses them as markers to calculate in advance the likely focus distance of other objects in the scene (are they going to be nearer or further away than marker B; if nearer, are they going to be in front of or behind marker A; and so on).

This means that as the camera scans the image, it already has a very good idea, in advance, of what the focus distance needs to be for each area. This saves a good deal of time in focusing the image, and means the software can work with a variety of existing camera types.
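The marker-guided prediction step might look something like the sketch below. Everything here is an assumption for illustration: the real system works on a full 3D distance map, but interpolating between two ranged markers captures the flavour of "nearer than B, behind A".

```python
def predict_focus(markers, region_pos):
    """Guess the likely depth of a scene region from a few ranged
    markers, given as {image_position: depth_in_metres}.

    A hypothetical stand-in for the camera's ordering step: regions
    between two markers get a depth interpolated between them, and
    regions outside the marker span are clamped to the nearest marker.
    """
    pts = sorted(markers.items())
    if region_pos <= pts[0][0]:
        return pts[0][1]
    if region_pos >= pts[-1][0]:
        return pts[-1][1]
    for (x0, d0), (x1, d1) in zip(pts, pts[1:]):
        if x0 <= region_pos <= x1:
            t = (region_pos - x0) / (x1 - x0)
            return d0 + t * (d1 - d0)

# Matchstick ranged at 20 cm near the left edge, sculpture at 6 m near
# the right edge; a region midway across gets a depth of about 3.1 m.
markers = {0.1: 0.2, 0.9: 6.0}
depth = predict_focus(markers, 0.5)
```

The point of the trick is that only the one or two marker points ever need an actual focus sweep; everything else is inferred, which is what makes the approach fast enough for real-time use.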

However, Omni-focus at the present time, for understandable reasons, works best with a specifically designed camera, which goes by the mouthful of a name "Divergence-ratio Axi-vision Camera", or Div-cam. What makes the camera special isn't much, really: simply that it was designed with rapid focussing and panning in mind, in order to interface in the best possible way with the control software's algorithm. To do this it actually uses an array of separate mini-cameras, each able to focus independently, greatly speeding up the process compared to a single camera alone.

The result is then stitched together, not unlike a panorama, and delivered as a single high-definition frame, either stand-alone or as part of a motion stream.
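A crude way to picture the stitching is as per-tile selection: from the mini-cameras' differently focused frames, keep whichever version of each tile scores sharpest. The sharpness measure and tile layout below are assumptions chosen for brevity, not the Div-cam's actual method.

```python
def sharpness(tile):
    """Crude sharpness score for a tile (a list of pixel rows): the sum
    of squared differences between horizontally adjacent pixels. Blurred
    tiles have small neighbour differences and so score low."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in tile for i in range(len(row) - 1))

def stitch_sharpest(frames):
    """Given aligned frames of the same scene from mini-cameras focused
    at different depths (each frame a list of tiles), assemble an output
    by taking, tile by tile, the version that scores sharpest."""
    n_tiles = len(frames[0])
    return [max((frame[t] for frame in frames), key=sharpness)
            for t in range(n_tiles)]

# Two toy frames of two tiles each: the near-focused camera resolves the
# first tile crisply, the far-focused camera resolves the second.
frame_near = [[[0, 10], [0, 10]], [[5, 5], [5, 5]]]
frame_far  = [[[5, 5], [5, 5]], [[0, 10], [0, 10]]]
result = stitch_sharpest([frame_near, frame_far])
# result takes tile 0 from frame_near and tile 1 from frame_far
```

This is essentially focus stacking; doing it tile-wise across a camera array is what lets the sharp-everywhere frame come out in a single pass rather than after a slow focus sweep.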

Currently, further production of Div-cam is proceeding via Wilkes Associates, who coincidentally are also located in Toronto.




