“Artificial intelligence [AI] is changing the face of healthcare for all stakeholders, from consumers to providers,” Stephen Hunt (University of Pennsylvania, Philadelphia, USA; co-founder of the Penn Image-guided Interventions Laboratory) told attendees of the virtual 2020 annual scientific meeting of the British Society of Interventional Radiology (BSIR; 1–3 December, online). Speaking during a “State of the Art” session on the future of interventional oncology (IO), Hunt described how “the era of AI” will expand the specialty’s horizons.
According to Hunt, the growth of US Food and Drug Administration-approved AI tools in the coming decade “will be exponential”. He urged interventional radiologists to stay abreast of these developments by integrating AI concepts into medical education: “Users must be educated not only on the proper application of these AI tools and on their limitations,” he said, “but on the mechanisms for validating their efficacy. Foundational concepts in statistics and bioinformatics are critical to responsible adoption and use of AI. Collaboration with developers of these tools will improve their clinical relevance and usability for interventional radiology. IO educators and faculty should incorporate AI topics into their curricula and research.”
He defined AI simply: “Really, it is just using computers and software to perform tasks that normally require human intelligence to accomplish. That seems like a very broad definition, and it is—it is somewhat vague on purpose, because it encompasses such a large field.”
Machine learning is a concept integral to AI: it describes the process of a computer learning from experience as it works through data sets. When fed data, a machine learning algorithm learns to make connections between data points or to perform a classification task, such as image recognition, and it gets better at its designated job the more input it receives.
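As a purely illustrative sketch of this idea (not drawn from Hunt's talk), a nearest-centroid classifier, one of the simplest machine learning methods, shows how a model's internal state is refined by every example it is fed:

```python
# Toy nearest-centroid classifier (illustration only). It "learns" by keeping
# a running average of the feature vectors seen for each label; each new
# example sharpens the class centroids, so more data means better predictions.
from collections import defaultdict
import math

class NearestCentroid:
    def __init__(self):
        self.sums = defaultdict(lambda: [0.0, 0.0])
        self.counts = defaultdict(int)

    def learn(self, point, label):
        # Each new example nudges the running average for its class.
        self.sums[label][0] += point[0]
        self.sums[label][1] += point[1]
        self.counts[label] += 1

    def classify(self, point):
        # Predict the class whose centroid lies closest to the point.
        def dist(label):
            cx = self.sums[label][0] / self.counts[label]
            cy = self.sums[label][1] / self.counts[label]
            return math.hypot(point[0] - cx, point[1] - cy)
        return min(self.counts, key=dist)

clf = NearestCentroid()
for p in [(1, 1), (2, 1), (1, 2)]:
    clf.learn(p, "normal")
for p in [(8, 9), (9, 8), (9, 9)]:
    clf.learn(p, "bleed")
print(clf.classify((8.5, 8.5)))  # → bleed
```

Real image classifiers replace these two-number "features" with millions of learned parameters, but the principle is the one Hunt describes: the more curated input the model sees, the better it performs its designated job.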
“The real breakthroughs for interventional oncology will require assembling and curating larger datasets,” Hunt therefore explained to delegates.
In addition to a large amount of data and a machine learning algorithm (or another tool designed to perform sophisticated calculations, such as a neural network), AI also requires some technical expertise, he continued. The end user has to be able to interpret the output generated by the AI software in order to apply the information to clinical practice.
Hunt provided some examples of AI use in the interventional oncology space.
Intraprocedural spatial localisation
Firstly, he talked the BSIR audience through an example in which an interventionalist brings in a patient with a gastrointestinal (GI) bleed. Whilst the imagined patient lies on the angiography table, the interventional radiologist needs to identify the location of the bleed seen on a CT scan in order to stop the blood flow. Hunt suggested that an algorithm could perform the image registration and segmentation, enabling the fluoroscopy unit to automatically highlight the area where it predicts the bleed will appear.
“This kind of technology already exists; it has just not been put to use in clinical practice,” Hunt said. “It has not really been deployed that way.
“In addition, you could take those images [from prior GI bleed angiography studies] and train an AI with them—we have done this in our lab, and other labs have done this too—so automatically you have a classifier that will show you where the bleed is on the image. It is able to learn from some prior database that we made of GI bleed images, and then we can try it on new cases.”
Clarifying this point to Interventional News, Hunt explained: “The point is for your angio suite to automatically detect the bleed during angiography and ‘circle it’, or otherwise provide a target sign to point it out to your eye. That way, the interventionalist can go to that spot to drop a coil.
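As a toy illustration of the "target sign" output Hunt describes (hypothetical code, not any vendor's detector), a frame of pixel intensities can be scanned for its peak-contrast location, which the angio suite's display would then circle for the operator:

```python
# Illustration only: real systems use trained detectors, not a simple maximum.
# Here we scan a small grid of simulated pixel intensities for the brightest
# region and return its coordinates as the point to circle on the display.
def find_target(image):
    best, best_val = None, float("-inf")
    for r, row in enumerate(image):
        for c, val in enumerate(row):
            if val > best_val:
                best, best_val = (r, c), val
    return best  # (row, col) of the candidate bleed

frame = [
    [0, 1, 0, 0],
    [1, 2, 9, 1],   # simulated contrast extravasation at row 1, column 2
    [0, 1, 1, 0],
]
print(find_target(frame))  # → (1, 2)
```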
“I think within the next five years you are going to see a lot of these tools from the larger vendors, like Siemens and Philips.”
CT perfusion imaging during TACE
Giving another example, Hunt next discussed AI in CT perfusion imaging during transarterial chemoembolization (TACE).
“We always have this problem of, during a TACE, how do you know you have full treatment of the tumour? One of the ways would be to do CT perfusion on that patient at the time of TACE. We have one of these image hybrid rooms, we have all the technology there, and we have CT perfusion on that CT scanner, which sits in the hybrid room with the fluoroscopy unit. But it has not been integrated into the workflow, and the technicians do not know how to use it; it is these kinds of problems that we have not yet worked out. This is an example where you can have that real-time critical feedback, [as the AI] tells you ‘hey, there is an area of tumour that you have missed’. The CT perfusion would give you that information.”
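The feedback step Hunt outlines can be sketched in a few lines (a hypothetical illustration with made-up numbers, not a clinical algorithm): compare perfusion values before and after embolisation and flag any tumour region that is still being supplied.

```python
# Illustration only: voxel-wise comparison of simulated perfusion values.
# A voxel whose perfusion has barely dropped after embolisation is still
# being supplied, i.e. "an area of tumour that you have missed".
def residual_perfusion(pre, post, threshold=0.5):
    flagged = []
    for i, (before, after) in enumerate(zip(pre, post)):
        if before > 0 and after / before > threshold:
            flagged.append(i)
    return flagged

pre_tace  = [0.9, 0.8, 0.85, 0.9]   # simulated baseline tumour perfusion
post_tace = [0.1, 0.7, 0.05, 0.1]   # after TACE: region 1 is still perfused
print(residual_perfusion(pre_tace, post_tace))  # → [1]
```

In the workflow Hunt envisages, an output like this would be surfaced to the operator during the procedure rather than discovered on follow-up imaging weeks later.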
Intraprocedural augmented reality
Turning to an example he said is more in the research and development phase, Hunt then discussed the use of augmented reality intraprocedurally. The HoloLens, from Microsoft, is a headset that the interventionalist could wear, allowing them to see virtual projections superimposed on the real world.
Correctly superimposing the projected image onto the patient requires registration, and there are various AI-assisted methods for doing so. Hunt described how the physician could select a point on the patient’s skin and a point in the tumour on the projection; the HoloLens then lets the user visualise a path giving the angle at which to advance their ablation probes. This can be combined with the breathing cycle and cardiac gating, so that probe advancement takes place only at the correct point in the cycle. Hunt highlighted work that IMACTIS and XACT are doing in this space, and encouraged listeners to look into these companies.
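The path-planning step Hunt describes can be sketched with simple geometry (a hypothetical illustration; real systems track headset, patient, and probe in a shared coordinate frame): given the skin entry point and the tumour target, compute the insertion depth and angles the headset could project for the operator.

```python
# Illustration only: probe trajectory from a skin entry point to a tumour
# target, in an assumed patient coordinate frame with z pointing upward.
import math

def probe_trajectory(skin_point, tumour_point):
    """Return insertion depth and angles (degrees) from entry to target."""
    dx = tumour_point[0] - skin_point[0]
    dy = tumour_point[1] - skin_point[1]
    dz = tumour_point[2] - skin_point[2]
    depth = math.sqrt(dx * dx + dy * dy + dz * dz)
    tilt = math.degrees(math.acos(dz / depth))      # angle from the z axis
    rotation = math.degrees(math.atan2(dy, dx))     # rotation about the z axis
    return depth, tilt, rotation

# Entry at the origin, target 30 units lateral and 40 units deep:
print(probe_trajectory((0, 0, 0), (30, 0, 40)))
```

Combined with respiratory and cardiac gating, a projected trajectory like this would be treated as valid only at the matching point in the patient’s breathing cycle, as Hunt notes.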