“I used to sneak away to the arcade at seven years old to play Pac-Man and Asteroids; our goal with augmented reality really is to prove my mother wrong when she told me not to play video games as they would never be useful,” Brad Wood, National Institutes of Health, Bethesda, USA, jests.
He speaks to Interventional News about how technologies from gaming and graphics processing have been transformed into a vision for the future of interventional oncology, with augmented reality-assisted procedures aimed at meeting clinical needs and potentially improving patient outcomes.
Augmented reality (AR) is the superimposition of digital features onto the real world – unlike virtual reality, which involves a completely simulated environment. In interventional oncology (IO), AR can be used to display deep internal anatomy on top of the patient’s skin. Here, Wood speaks of the pitfalls and promise of this technology, emphasising that an AR revolution is imminent.
What is the current status of augmented reality in the IO space?
Augmented reality is in the developmental phase, which is an exciting time because it has a lot of potential. The hard part now is figuring out exactly how it will be useful; proving clinical utility is the next step. Several options are possible: from pre-procedural planning to intra-procedural navigation and monitoring, to post-procedural feedback and verification. I envision AR making its first inroads in pre-procedural planning, as it is easy to do and does not change the procedure at all. However, intra- and post-procedural uses would be more impactful, in my opinion. Intra-procedurally, the interventionist could use AR technology to gain information about the procedure in real time while the patient is on the table. Information is power, and knowing when you have a needle or catheter in the correct location in a patient is the most valuable clinical knowledge for me. After the procedure, AR could be used to monitor and verify outcomes. For example, AR could be used to visualise pre-treatment and post-treatment tumours fused side by side, enabling a three-dimensional comparison and the identification and display of tumour tissue at risk of under-treatment.
With all the excitement surrounding AR today, do you think any aspects of the technology are over-hyped?
All sexy ideas have the potential to be over-hyped, and AR is no exception. The development of surgical robotics was partially driven by billboards and hype, and not strictly by science. The perception that it is definitely better to be operated on by a robot than by a human allowed that technology to proliferate and penetrate practices, perhaps before all of the major questions had been answered concerning when to use it, and when and how it helps.
New technology always exists on a dynamic spectrum: a continuum between innovation on one side and robust, evidence-based medicine on the other. To quote Aldous Huxley: “At their first appearance innovators have generally been persecuted, and always derided as fools and madmen.” The introduction of new technologies is a dynamic process, and there will always be a tension between experimental technological change and market acceptance. The best window of opportunity to study AR is now, right at the beginning of its clinical use. We want to define when to use it, how it is going to add value, and when it is going to be cost-effective. We are conducting a clinical trial on the use of AR in biopsy and ablation, focusing on the monitoring and navigation of needles. We are also looking into software options, trying to figure out the practicalities of the technology: what it will look like in a clinical setting, when to use it during a procedure, where the information gaps are in current practice, and how to make it user-friendly and cost-effective.
What hardware would you need for AR?
It all depends on what you are doing. Most work is focused on goggles, a number of which are currently commercially available. The goggles are tracked, so the system knows where the physician is looking at all times, and images can be superimposed over the physician’s field of view. However, based on our initial experience, we think it is going to be more useful to use a smartphone or tablet that interfaces directly with the CT console or workstation, so that it becomes part of the workflow. We can then hijack the smartphone’s gyroscope and camera to visualise the anatomy below the skin without the need for goggles. This approach seems more user-friendly and ergonomic, and less disruptive to the workflow.
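To make the smartphone idea concrete, the geometry involved can be sketched in a few lines: once a lesion position from the CT data set has been registered to the same coordinate frame in which the phone is tracked, the device’s orientation and a simple pinhole camera model determine where on the live camera image the overlay should be drawn. The sketch below is a minimal, hypothetical illustration of that projection step; the function name, the assumption of a pre-registered coordinate frame, and the calibration values are illustrative only, not part of any system Wood describes.

```python
import numpy as np

def project_lesion_to_screen(lesion_xyz_mm, phone_position_mm, phone_rotation,
                             focal_px, image_size_px):
    """Project a CT-derived lesion centroid into the phone camera image.

    lesion_xyz_mm     : 3-vector, lesion centroid in patient/room coordinates (mm),
                        assumed already registered to the tracking frame.
    phone_position_mm : 3-vector, camera position in the same frame (mm).
    phone_rotation    : 3x3 rotation matrix from room frame to camera frame,
                        e.g. derived from the phone's gyroscope/IMU.
    focal_px          : camera focal length in pixels (from calibration).
    image_size_px     : (width, height) of the camera image in pixels.
    """
    # Express the lesion in the camera's coordinate frame
    p_cam = phone_rotation @ (np.asarray(lesion_xyz_mm) - np.asarray(phone_position_mm))

    if p_cam[2] <= 0:
        return None  # lesion is behind the camera; nothing to draw

    # Pinhole projection onto the image plane, origin at the image centre
    u = focal_px * p_cam[0] / p_cam[2] + image_size_px[0] / 2
    v = focal_px * p_cam[1] / p_cam[2] + image_size_px[1] / 2
    return (u, v)

# Hypothetical example: a lesion 80 mm deep, phone held ~300 mm above the skin
overlay = project_lesion_to_screen(
    lesion_xyz_mm=[10.0, -25.0, 80.0],
    phone_position_mm=[0.0, 0.0, -300.0],
    phone_rotation=np.eye(3),        # camera looking straight down the +z axis
    focal_px=1500.0,
    image_size_px=(1920, 1080),
)
print(overlay)  # pixel coordinates at which to draw the lesion marker
```

In practice, a clinical system would also need continuous registration between the patient, the imaging data set, and the device, which is where most of the engineering difficulty lies; the projection itself is the simple part.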
Outside of medicine, what can interventionists learn from other disciplines currently using AR?
Presently, we are taking technology developed for gaming and trying to apply it to problems in interventional radiology (IR) and IO. There is a precedent for this: GPS and multi-modality navigation were both originally developed for purposes outside of the medical field. Nvidia hosts a computer science technology meeting that was once a purely graphical congress focusing solely on gaming graphics and processing. Recently, though, it has expanded to include all sorts of exciting new imaging science technologies: artificial intelligence systems, autonomous driving vehicles, and a host of medical applications, including augmented reality offerings. There is nothing wrong with borrowing technology from other disciplines; we just need to ensure that we have a plan for the use of AR in IR to maximise its potential within the clinical space in a cost-effective fashion. The window of opportunity is wide open now.