Improving Robotic Surgery by Integrating Augmented Reality Elements

Robotic surgery typically involves devices such as the Da Vinci unit, in which a tiny incision is made in the patient and a set of robotic tools on a boom is lowered into the cavity. These tools typically include a scalpel; one, possibly two, gripping arm attachments; perhaps a hypodermic if the procedure calls for it, or a stapler to provide sutures; a suction device to remove excess blood and bodily fluids; and always, always a camera with an integrated light source.

This camera is the keystone of the process. Whilst the patient is on one side of the room, hooked into the operating robotics, the surgeon is on the other (or sometimes in a different hospital entirely), staring at a video display showing what the camera sees and working, with delicate movements, the controls that activate the other tools inserted into the patient, as well as additional controls for the camera itself. The camera inside the patient is literally their only guide to what is going on. Frequently it is a stereoscopic camera, which, whilst slightly bulkier because it is essentially two cameras side by side, allows proper depth perception, making the entire operation much easier on the surgeon.

Still, it is a very difficult environment to work in. As the operation continues, bodily fluids inevitably start to mask sections of anatomy, and it is quite easy to lose track of which way you are facing relative to other, unseen body parts.

A research team from Johns Hopkins University in the US has tackled this problem by borrowing heavily from elements of augmented reality.

Researcher Ali Uneri showing off the augmented reality guidance system.
Credit: Johns Hopkins Medicine

 

Augmented reality works by superimposing a virtual environment upon the physical one: the input from a camera is taken, the virtual data is added, and the combined result is output. As such, a surgical robot is a logical choice for such augmentation, since the surgeon's entire view comes through a camera anyway. The problem preventing this seemingly perfect partnership has always been registration.

In augmented reality, it isn't as simple as just adding virtual data anywhere in a source video feed. Everything that is physically in the scene has to be registered by the AR process. The computer does not need to recognise what everything is, but it does need to determine the size, shape and distance from the camera of all the objects in the scene, so it can add virtual placeholders to represent them when layering the virtual data in.

If this is not done, the virtual and physical data don't marry up, and you get registration errors, where physical objects pass through virtual ones in ways that make no sense. Worse, in a delicate process such as surgery on a living patient, if the augmented reality elements aren't locked in place over the physical ones, the display augmentation goes from being a valuable tool for orientation to being downright dangerous, telling the surgeon that organs and other structures are in places where, in reality, they may not be. Locking the two together again comes down to registration of the physical objects.
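To make registration, and what goes wrong without it, concrete: the sketch below is a minimal illustration, not taken from the research. The pinhole camera parameters, landmark position and error values are all invented; it simply shows that a virtual overlay only stays locked onto a physical landmark if the system's estimate of that landmark's position matches reality, and that even a few millimetres of registration error visibly drags the overlay across the screen.

```python
import numpy as np

def project(point_cam, focal_px=800.0, cx=640.0, cy=360.0):
    """Project a 3D point (camera coordinates, metres) to pixel coordinates
    with an ideal pinhole model. Intrinsics are illustrative values."""
    x, y, z = point_cam
    return np.array([focal_px * x / z + cx, focal_px * y / z + cy])

# A physical landmark 10 cm in front of the in-body camera (made-up position).
landmark_true = np.array([0.01, -0.005, 0.10])

# An overlay drawn from a perfectly registered estimate lands on the landmark.
overlay_good = project(landmark_true)

# A registration error of just 2 mm sideways and 5 mm in depth...
landmark_estimated = landmark_true + np.array([0.002, 0.0, 0.005])
overlay_bad = project(landmark_estimated)

print("correct overlay (px):       ", overlay_good)
print("mis-registered overlay (px):", overlay_bad)
print("on-screen drift (px):       ", np.linalg.norm(overlay_good - overlay_bad))
```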

If a surgeon is in danger of becoming disoriented in the environment inside the patient, so too is the augmented reality process, as everything starts to look the same after a while. To combat this, the current process requires a second person in the operating room: a surgical technician whose job is to manually consult the CT scan taken of the patient beforehand and map out points on the patient's body to guide the surgeon, guiding the manipulator arm based on the CT data and physically adjusting it as needed. It works, but it is slow and prone to error, because you are back to relying on human guesstimation between two systems that themselves have no direct connection.

The research team at Johns Hopkins tackled this part of the process directly. If they could create an algorithm that could map the surgical points from the X-ray or CT scan directly, transfer those points onto a VR map of the patient's body, and track the orientation and position of the robot arm at the same time, that would be all the data needed to solve the registration problem.

The CT scan provides VR data matching where everything physically is inside the patient – ready-made registration data. Matching the patient's physical body to the CT scans provides context for this registration data, so the system can orient the virtual map of the patient's innards to exactly match their physical placement. Then, by knowing the robot arm camera's exact position and orientation in 3D space relative to that anchored map, you know the exact position of every organ, nerve and blood vessel relative to the camera at all times, regardless of how much they are covered by bodily fluids or occluded by other body parts. This information can then be overlaid on the visual data sent to the surgeon, making their job considerably easier.
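That chain of relationships can be written down directly as a composition of coordinate transforms. The sketch below is purely illustrative: the 4x4 transforms, the landmark position and the camera intrinsics are made-up values standing in for quantities the real system would estimate from imaging and tracking, but it shows how a point picked from the CT ends up at a pixel in the surgeon's video feed.

```python
import numpy as np

def rigid_transform(rotation_deg, translation):
    """Build a 4x4 homogeneous rigid transform: rotation about z, then translation."""
    a = np.deg2rad(rotation_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = translation
    return T

# Landmark picked from the preoperative CT scan (CT coordinates, metres).
p_ct = np.array([0.05, 0.02, 0.10, 1.0])

# Registration of the CT volume to the patient on the table (estimated from
# intraoperative imaging in the real system; hard-coded here for illustration).
T_patient_from_ct = rigid_transform(15.0, [0.00, 0.30, 0.80])

# Pose of the in-body camera relative to the patient (again, invented numbers).
T_cam_from_patient = rigid_transform(-90.0, [-0.30, 0.05, -0.75])

# Compose the chain: where is the CT landmark in the camera's frame right now?
p_cam = T_cam_from_patient @ T_patient_from_ct @ p_ct

# Pinhole projection into the surgeon's video feed (assumed intrinsics).
f, cx, cy = 800.0, 640.0, 360.0
u = f * p_cam[0] / p_cam[2] + cx
v = f * p_cam[1] / p_cam[2] + cy
print("overlay the landmark at pixel", (round(u), round(v)))
```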

Straightforward in theory, but it all hinges on developing that algorithm to map the CT or X-ray data onto the patient's physical topology in real time in the operating theatre. This is what the team are confident they have now done, and they have published their algorithm in the journal Physics in Medicine and Biology for general assessment.
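3D–2D registration of this kind typically works by comparing simulated projections of the CT volume against the actual intraoperative X-ray images and searching for the pose that makes them agree. The toy below is not the team's algorithm; it is a heavily simplified, runnable illustration of that general idea, with a synthetic volume, parallel-beam projections and a brute-force search over a single rotation angle.

```python
import numpy as np
from scipy.ndimage import rotate

def drr(volume, angle_deg):
    """Toy digitally reconstructed radiograph: rotate the volume in the (0, 1)
    plane, then sum attenuation along axis 0 (parallel-beam geometry)."""
    rotated = rotate(volume, angle_deg, axes=(0, 1), reshape=False, order=1)
    return rotated.sum(axis=0)

def normalised_cross_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

# Synthetic "CT": a 64^3 volume containing an off-centre dense block, a crude
# stand-in for bony anatomy that shows up strongly on X-ray.
ct = np.zeros((64, 64, 64))
ct[20:35, 30:50, 25:45] = 1.0

# Pretend the patient lies on the table rotated 12 degrees away from the CT
# pose, and that we have captured one intraoperative X-ray of them.
true_angle = 12.0
xray = drr(ct, true_angle)

# Registration by exhaustive search over one rotational degree of freedom:
# the candidate pose whose simulated projection best matches the captured
# X-ray is taken as the patient's pose.
candidates = np.arange(-20.0, 20.5, 0.5)
scores = [normalised_cross_correlation(drr(ct, a), xray) for a in candidates]
best = candidates[int(np.argmax(scores))]
print(f"estimated rotation: {best:.1f} deg (true {true_angle} deg)")
```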

“Imaging in the operating room opens new possibilities for patient safety and high-precision surgical guidance,” says Jeffrey Siewerdsen, Ph.D., a professor of biomedical engineering in the Johns Hopkins University School of Medicine. “In this work, we devised an imaging method that could overcome traditional barriers in precision and work flow. Rather than adding complicated tracking systems and special markers to the already busy surgical scene, we realized a method in which the imaging system is the tracker and the patient is the marker.”

Their base was another algorithm the same team had previously developed for a similar purpose at a far smaller scale: an algorithm designed to separate out specific vertebrae in the spine, to aid surgeons conducting spinal surgery. Increasing the scale to the entire patient, however, was something of a challenge.

“The breakthrough came when we discovered how much geometric information could be extracted from just one or two X-ray images of the patient,” says Ali Uneri, a graduate student in the Department of Computer Science in the Johns Hopkins University Whiting School of Engineering. “From just a single frame, we achieved better than 3 millimetres of accuracy, and with two frames acquired with a small angular separation, we could provide surgical navigation more accurately than a conventional tracker.”
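The value of that second frame is geometric: a single projection pins a point down across the image but says far less about its depth, and the depth ambiguity shrinks as the angular separation between views grows, which is the effect the published paper examines. The sketch below is not the team's method (their registration works against the full CT volume, which is why even one frame is accurate); it is just a two-view triangulation experiment with invented geometry and half-pixel detector noise showing how 3D error falls as the two frames move apart.

```python
import numpy as np

rng = np.random.default_rng(0)

def camera_matrix(angle_deg, radius=1.0, f=1500.0):
    """3x4 projection matrix for a pinhole camera orbiting the origin in the
    x-z plane and looking back at it (C-arm-like geometry, illustrative only)."""
    a = np.deg2rad(angle_deg)
    c = np.array([radius * np.sin(a), 0.0, -radius * np.cos(a)])  # camera centre
    R = np.array([[np.cos(a), 0.0, np.sin(a)],                    # looks at origin
                  [0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    K = np.array([[f, 0.0, 512.0], [0.0, f, 512.0], [0.0, 0.0, 1.0]])
    return K @ np.hstack([R, (-R @ c).reshape(3, 1)])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

target = np.array([0.01, 0.02, 0.03])      # an anatomical point near the isocentre
for separation in (2.0, 15.0, 30.0):       # angular separation between the two frames
    P1, P2 = camera_matrix(0.0), camera_matrix(separation)
    errs = []
    for _ in range(200):
        noise = rng.normal(0.0, 0.5, size=(2, 2))   # half-pixel detector noise
        est = triangulate(P1, project(P1, target) + noise[0],
                          P2, project(P2, target) + noise[1])
        errs.append(np.linalg.norm(est - target))
    print(f"{separation:4.1f} deg apart -> mean 3D error {np.mean(errs) * 1000:.2f} mm")
```

With only a couple of degrees between the frames the depth component dominates the error; as the views spread apart, the reconstructed point settles down to millimetre-level agreement.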

It's not just theoretical accuracy, either. The team have tested their system using a mobile C-arm (the intraoperative X-ray imaging system commonly found in many operating theatres) together with a group of cadavers. Their accuracy when performing laparoscopic surgery on the cadavers was within 2-3 mm at all times, well within the safe margin for all but the most delicate surgical procedures, and certainly enough to perform its actual task of orienting and guiding the surgeon, as opposed to instructing them precisely where to cut. The surgeon's own skills take over at that point, but at least they know precisely where to look.

Ziya Gokaslan, M.D., a professor of neurosurgery at the Johns Hopkins University School of Medicine, is leading the translational research team. “We are already seeing how intraoperative imaging can be used to enhance work flow and improve patient safety,” he says. “Extending those methods to the task of surgical navigation is very promising, and it could open the availability of high-precision guidance to a broader spectrum of surgeries than previously available.”

References

Local

The Da Vinci surgical robot

Da Vinci gains Gaze Assist (2008)

Dictionary Term: Augmented Reality

Dictionary Term: Spatially Adaptive Augmented Reality

Dictionary Term: Registration Problem

Elsewhere

New Guidance System Could Improve Minimally Invasive Surgery

Getting It Straight (The vertebrae alignment algorithm)

3D–2D registration for surgical guidance: effect of projection view angles on registration accuracy (Paper, Paywalled)
