
True 3D Endoscopy

Modern endoscopic techniques such as those used by the da Vinci and similar systems allow complex surgery to take place without creating major incisions in the skin. A small hole is cut, into which a tiny camera and light source are inserted along with a set of injector and manipulator arms. The surgeon controls these from outside the patient via either a virtual reality or an augmented reality system, using visual feedback from the camera and tactile feedback from the instruments.

It is a more difficult way of performing surgery than open surgery, as the surgeon cannot rely on their own eyes to see everything that is going on, but must instead rely on the technology. Chief among the advantages for the patient, however, are shorter recovery times and far less scarring.

Ideally, what is required for better endoscopic and laparoscopic surgical techniques is a way of converting the camera images into natural 3D, so that the surgeon's eyes can take over making sense of the scene spatially, and the surgical vision systems can understand spatial relations better – which makes it easier for the tools to identify pressure build-ups and bleeds relative to the patient's internal anatomy.

A viable method for doing this was created this month by researchers from the Fraunhofer Institute for Microelectronic Circuits and Systems IMS in Duisburg, Germany. They have managed to create a single camera that functions as a stereoscopic camera, using microlenses to redirect parts of the optics. This means the camera is no larger than usual and can fit neatly inside the endoscopic probe alongside the light source, using the smallest possible footprint. After all, the surgical tools also have to travel through the tiny incision; the larger the endoscope, the fewer tools can actually fit into the space.

Consider a neurosurgical example. The surgeon carefully guides the endoscope through the patient's nasal cavity to the operation zone. It is a delicate procedure for which the surgeon has to prepare in detail before commencing the actual intervention. Where are the blood vessels that need to be avoided, what is the exact location of the cancerous tissue, and to what depth must the surgeon cut through the brain tissue to expose the area of interest? The camera integrated in the slender endoscope tube enables the surgeon to see every detail in sharp 3D resolution – almost as if they were actually inside the patient's brain. The stereoscopic vision provided by a 3D endoscope considerably simplifies the work of neurosurgeons and other specialists. They can navigate a safe path through the tissue without the risk of collateral damage, and the work can be accomplished faster.

The camera is built around a CMOS sensor of the type usually integrated into SLR cameras – the big, bulky cameras used by professional photographers to create stunning high-resolution images. The actual CMOS sensors are the same size as those used in such cameras: each measures 18 x 24 mm.

“To make this possible, we developed special microlenses,” explains IMS project manager Dr. Sascha Weyers. The secret lies in the optical design of the CMOS sensors, in which a cylindrical microlens is placed in front of every two vertical lines of sensors in the pixel configuration. A superimposed lens captures the light falling on the microlenses, which focus it on the pixels. The special feature of this arrangement is that the lens has two apertures, “rather like the right and left eye” says Weyers. In other words: two beams of light are captured by the lenses – that arriving from the left passes through the “left eye” to be focused on the right-hand vertical line of sensors, and vice versa. The two light rays cross underneath the lens arrangement.
As a result, the CMOS sensor receives two sets of image data that are processed separately in the same way that the brain processes images coming from the left and right eye. The curvature of the CMOS additionally allows it to be compacted to fit into an endoscope measuring rather less than 18mm across.

The incoming data is split by an algorithm in the software driver into two separate video streams, one for each eye, which are fed into whichever control apparatus the surgeon is using. It is then up to the capabilities of whichever surgical system is in use to interpret the stereoscopic data correctly. Since the streams have already been split, the surgical system simply receives two separate video streams in a standard format; it is therefore transparent to the system that one camera is being used rather than two.
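The de-interleaving step described above can be sketched in a few lines. This is a minimal illustration, not the Fraunhofer driver: it assumes, hypothetically, that the two optical channels land on alternating vertical sensor columns, so that even columns carry one eye's image and odd columns the other's.

```python
import numpy as np

def split_stereo_columns(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split one interleaved sensor frame into left- and right-eye images.

    Hypothetical layout: even-numbered columns belong to one channel,
    odd-numbered columns to the other. The real driver's column mapping
    (including the left/right beam crossing) may differ.
    """
    left = frame[:, 0::2]   # every second column, starting at column 0
    right = frame[:, 1::2]  # every second column, starting at column 1
    return left, right

# Demo with a synthetic 4 x 6 frame whose pixel values are the column index:
frame = np.tile(np.arange(6), (4, 1))
left, right = split_stereo_columns(frame)
print(left[0])   # [0 2 4]
print(right[0])  # [1 3 5]
```

Each output frame has half the horizontal resolution of the raw sensor frame, which is why the two views together occupy no more sensor area than a single conventional image.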

It takes a special kind of microlens to ensure that the light rays are focused precisely on the sensor. In order to manufacture the lenses, the Fraunhofer engineers first had to calculate the optimum shape by means of simulations. It had to be ensured that the lens was capable of cleanly separating the right and left visual channels. In concrete terms, this means ensuring that no more than five percent of the energy from one light ray is captured by the line of sensors serving the other channel – a leakage known in signal transmission as crosstalk.
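The five-percent criterion above is easy to state numerically: of a beam's total energy, the fraction that lands on the wrong channel's sensor column must not exceed 0.05. A minimal sketch, with hypothetical energy values chosen purely for illustration:

```python
def crosstalk_fraction(intended_energy: float, leaked_energy: float) -> float:
    """Fraction of a beam's total energy that leaks into the wrong channel."""
    return leaked_energy / (intended_energy + leaked_energy)

# Hypothetical measurement: 96 energy units land on the intended sensor
# column, 4 units leak into the neighbouring column of the other channel.
ct = crosstalk_fraction(96.0, 4.0)
print(f"crosstalk: {ct:.1%}")  # crosstalk: 4.0%
print("meets 5% target:", ct <= 0.05)
```

A lens shape whose simulated crosstalk exceeded the 0.05 threshold would fail this check and need redesigning.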

The next task for the researchers was to adapt the conventional manufacturing process for microlenses to the requirements of the calculated lens shape. They also had to fulfil a number of requirements relating to the production of the miniature camera. They met the challenge, and the resulting chip is so small that it fits into a tube measuring no more than 7.5 millimetres in diameter. Together with the bundle of optical fibres that serves as the light source, the endoscope measures just 10 millimetres in diameter.

