Are we there yet? Computational advancements in interventional radiology

Judy Wawira Gichoya

What computational advancements are set to transform interventional radiology (IR)? Interventional radiologist Judy Wawira Gichoya (Emory University, Atlanta, USA) weighs in, providing a realistic outlook on developments so far.

Interventional radiologists’ enthusiasm for new devices that have enhanced our clinical work reveals a notable gap when it comes to artificial intelligence (AI) as a significant component of our toolkit. During the 2024 Western Angiographic Society meeting (21–26 September, Kauai, USA), I surveyed hundreds of practising interventional radiologists about their current or planned use of robotic technology. The prevailing sentiment can be summarised in one word: indifference. In my reflection during the ‘25 years of Computational Advances’ session at the 2025 Society of Interventional Oncology (SIO) annual scientific meeting (30 January–3 February, Las Vegas, USA), I contended that action should be the focus for every interventional radiologist moving forward.

Most AI tools available today are designed primarily for diagnostic radiology and are used by interventional radiologists only incidentally. Some of us who participate in pulmonary embolism response teams (PERTs) have indirectly interacted with the triage algorithms that detect incidental pulmonary embolism. Conference exhibitions often mention AI to inform us about the latest advancements, but these systems are not commonly used in our daily work. In fact, after our IR department at Emory University in Atlanta, USA, had used a percutaneous biopsy robot for one year, the robotics company went bankrupt, and our practice has continued without “the robot”. Our residents are happier with their education without the robot.

While IR continues to be a lesser-known branch of radiology, primary care clinics and electronic medical record systems are advancing with AI integration. One such application is ambient listening, in which AI transcribes encounters and drafts clinical notes, and which has become essential for primary care providers in hospitals. The dream of every interventional radiologist who wants a working clinic is being achieved outside radiology, supported by AI-generated draft replies to patient messages, AI-assisted discharge summaries and differential diagnoses, and even end-to-end AI-assisted radiology interpretation. Yet new data reflect the reality of IR: the impact of workplace injuries from many years of wearing lead in rooms not designed for ergonomic flexibility, a lack of evidence to show the value of IR in multidisciplinary care, a continued workforce shortage, and persistent IR deserts where tele-procedural interventions remain a dream.

The call to action begins with understanding the advancement of AI architectures through the increased development of foundation models. These models are trained by compressing data such as text, videos and pictures into a representation space and then generating content in response to a prompt. The self-supervised nature of the learning, which can be thought of as ‘fill in the blanks’, allows these models to be trained with minimal human input. Several such models have been developed for radiology, and the same technology developed for IR would overcome prior limits of small datasets for AI development, capture multimodal data, and reduce annotation cost and effort. This model base, if properly designed, could then be used to power agents that participate in tumour board discussions, assist with clinic and procedural documentation, and provide the infrastructure to develop digital twins for patient trajectory mapping and robot-environment simulation.
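For readers curious what ‘fill in the blanks’ means in practice, the following is a toy Python sketch (my own illustration, not taken from any cited work) of how masked training pairs can be generated from raw report text: a word is hidden, and the hidden word itself becomes the label, so no human annotation is required.

```python
import random

def make_fill_in_the_blank_pairs(sentences, mask_token="[MASK]", seed=0):
    """Turn raw sentences into (masked input, target) training pairs.

    This is self-supervised: the 'answer' for each example is simply the
    word that was hidden, so the labels come for free from the data itself.
    """
    rng = random.Random(seed)  # fixed seed so the masking is reproducible
    pairs = []
    for sentence in sentences:
        words = sentence.split()
        if len(words) < 2:
            continue  # nothing meaningful to mask in a one-word sentence
        i = rng.randrange(len(words))              # choose one word to hide
        target = words[i]
        masked = words[:i] + [mask_token] + words[i + 1:]
        pairs.append((" ".join(masked), target))
    return pairs

# Hypothetical report snippets, used purely as example input
reports = [
    "Catheter tip positioned in the main pulmonary artery",
    "No residual thrombus after mechanical thrombectomy",
]
for masked, target in make_fill_in_the_blank_pairs(reports):
    print(masked, "->", target)
```

A real foundation model applies this idea at enormous scale, masking tokens across billions of sentences (or patches of images and video) and learning to predict them, which is why it can be trained with minimal human input.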

In 2024, I teamed up with co-authors on a two-part series on AI for IR.1,2 The articles introduce a patient case of pulmonary embolism thrombectomy that is automatically flagged from the overnight list by an AI triage system. They subsequently touch on all possible points of digital transformation, including patient consent and chatbot interaction, multisensory monitoring of physiologic and pain status, 3D printing of a personalised angiographic catheter, and AI-generated orders and procedural documentation.

While futuristic, I believe the technological advances are getting there, but they need the input of interventional radiologists so that we do not continue to develop monolithic applications in isolation, and instead envision an enabled future that allows us to provide excellent care to all patients.

References

  1. Warren, B.E., Bilbily, A., Gichoya, J.W., et al. An Introductory Guide to Artificial Intelligence in Interventional Radiology: Part 1 Foundational Knowledge. Canadian Association of Radiologists Journal, 75(3), pp.558–567. doi: https://doi.org/10.1177/08465371241236376.
  2. Warren, B.E., Bilbily, A., Gichoya, J.W., et al. An Introductory Guide to Artificial Intelligence in Interventional Radiology: Part 2: Implementation Considerations and Harms. Canadian Association of Radiologists Journal, 75(3), pp.568–574. doi: https://doi.org/10.1177/08465371241236377.

Judy Wawira Gichoya is an associate professor of interventional radiology and informatics at Emory University, Atlanta, USA.

Disclosures: The author declared no relevant disclosures.

 

