Artificial intelligence will cause “paradigm shift” in IR practice

Julius Chapiro, speaking at Spectrum

Clinicians are calling for closer collaboration between computer scientists, biomedical engineers and interventional radiologists as machine learning is expected to play a more prominent role in interventional radiology (IR) procedures, from informing the initial diagnosis through to patient selection and intraprocedural guidance. In a recent primer published in The Journal of Vascular and Interventional Radiology (JVIR), Brian Letzen, Clinton Wang and Julius Chapiro, all of the Yale School of Medicine, New Haven, USA, outline the clinical applications of machine learning for IRs and envision a future in which artificial intelligence (AI) elevates the discipline to become, in Chapiro’s words, “the epitome of personalised medicine”.

Letzen and colleagues outline their vision of AI in IR: “By integrating machine learning into diagnosis, treatment, and management, AI can empower physicians to provide the highest-quality personalised care in an efficient manner that meets the demands of modern clinical practice. Whereas medicine has traditionally focused on incremental hypothesis-based research, AI allows for a new paradigm where ‘big data’ can be rapidly analysed, uncovering new insights that may otherwise have required decades of prospective trials.”

The annual meeting of the Society of Interventional Radiology (SIR) this year features its first session on machine learning in interventional oncology, and the BOLD-AIR summit initiated by Stanford and NYU (24 April, New York, USA) will tackle regulatory and ethical issues pertaining to data use in AI research.

In an editorial for this newspaper last year, Chapiro and Aaron Abajian, also at the Yale School of Medicine, New Haven, USA, detailed the results of an early experiment in applying AI and machine learning as a decision support system in interventional oncology to illustrate its potential to overcome rigid staging and scoring systems in the locoregional treatment of liver cancer.

When presenting his argument in favour of the imminent adoption of machine learning in a session dedicated to the topic at Spectrum (1–4 November, Miami, USA), Chapiro told conference attendees: “The age of AI is already here, and with it, IR has the chance to become the epitome of personalised medicine, and to be everything we want it to be as the fourth pillar of cancer care.”

Structuring large datasets for use in AI

AI makes use of large datasets, and the Society of Interventional Oncology (SIO) is currently working on a “high-quality, well annotated, multi-institutional” database for liver interventions, the first data repository to serve such research. According to Chapiro, such data is “difficult to obtain, especially for industry partners that are developing most such tools and instruments.” He adds: “It will be up to academic institutions and professional societies to collect such data and make it available for academic and industry-sponsored research.” He told Interventional News that other high-quality databases are likely to follow the one being built by the SIO.

Having high-quality, annotated data is an imperative for incorporating machine learning methods into any sort of research, Raul Uppot, of Massachusetts General Hospital in Boston, USA, explained to this newspaper. According to Uppot, this is the biggest challenge facing those advocating the implementation of AI in clinical practice. He comments: “AI requires a large dataset. If you are going to use AI and apply it to tumour board recommendations, or intraprocedural imaging guidance, or physician workflow, you need large quantities of data to feed into the machine in order for the software to analyse the information provided, to identify a pattern, and to then apply it to practice.

“Right now,” he continues, “we are collecting and structuring the data to feed into the deep learning software. The data we have now is mostly in an unstructured format, which is not useable for deep learning algorithms.”

Researchers at Massachusetts General Hospital are exploring the potential use of natural language processing tools to help with this essential data structuring. Explaining this technology, Uppot says: “There are software tools that can read a report and identify key words to be pulled out that can then be used to create a structured report.” Ultimately, Uppot believes this structuring of data should be automated. Hypothesising to Interventional News, he comments: “I think the best application of deep learning is to use it to reach the point where the structured data you are collecting is constantly being fed into the machine, and the software is constantly evaluating the variables that are being put in to generate outcomes. This enables constant learning. Ultimately, that means that as the machine gets smarter, it could learn from and identify patterns retrospectively across, say, 30 years’ worth of data, and recognise trends impossible for humans to uncover.”
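As a rough illustration of the kind of keyword extraction Uppot describes, the sketch below pulls a few structured fields out of a hypothetical free-text report using simple pattern matching in Python. The report wording, field names and patterns are assumptions made for illustration; production natural language processing tools are considerably more sophisticated, and this code does not represent the software in use at Massachusetts General Hospital.

```python
import re

# Hypothetical free-text excerpt; the wording and field names are illustrative only.
report = (
    "FINDINGS: 3.2 cm hypervascular lesion in hepatic segment VI, "
    "consistent with HCC. No portal vein thrombosis. Child-Pugh class A."
)

# Minimal rule-based extraction: each pattern maps a report phrase to a structured field.
patterns = {
    "lesion_size_cm": r"(\d+(?:\.\d+)?)\s*cm",
    "hepatic_segment": r"segment\s+([IVX]+)",
    "child_pugh_class": r"Child-Pugh class\s+([ABC])",
}

structured = {}
for field, pattern in patterns.items():
    match = re.search(pattern, report)
    structured[field] = match.group(1) if match else None

# Flag-style findings can be handled with a simple presence/negation check.
structured["portal_vein_thrombosis"] = bool(
    re.search(r"(?<!No )portal vein thrombosis", report)
)

print(structured)
# {'lesion_size_cm': '3.2', 'hepatic_segment': 'VI',
#  'child_pugh_class': 'A', 'portal_vein_thrombosis': False}
```

In practice, the point of such extraction is that the resulting key-value records, unlike free prose, can be pooled across thousands of reports and fed directly into learning algorithms.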

However, Uppot also stresses that he believes “a human dynamic” will always be essential. Using the example of tumour board recommendations, he describes how, if a specific patient’s data is fed into a machine learning algorithm and the software recommends an ablation, for example, other factors requiring human judgement may still come into play. In this hypothetical case, he says, “there may be a situation where the person cannot lay flat on the table, and therefore an ablation is not the best option. I think machine learning will be a supplementary add-on, which I think will be very useful, but will not replace radiologists.”

In fact, one positive use of AI Uppot foresees is the addition of an “objective voice” in the room when tumour boards convene to discuss the optimal treatment strategy for a particular patient. He sees the potential for a machine learning-generated recommendation to reduce “turf tensions” between IRs, radiation oncologists, and surgeons.

Using machine learning for HCC recommendations

Uppot went into greater detail on this use of machine learning at Spectrum, where he said that hepatocellular carcinoma (HCC) tumour board recommendations could benefit from the complex data processing AI enables. He commented: “Tumour board decisions carry great responsibility, and it is very important to be objective in these decisions. Machine learning is one way in which we can make all these decisions objectively amongst a group of physicians. Multidisciplinary team decisions could then be made by machine learning.” The idea is not new, but the concept of tumour board recommendations being predicated on algorithms rather than expert opinion and discussion remains in the realm of prospective testing at single institutions.

Back in 2017, Uppot and colleagues fed a machine learning system data from 76 HCC patients, ran a random forest algorithm, and then carried out prospective testing to determine whether the system’s output, an ideal procedural choice, could rival radiologists’ assessment of the best treatment modality. The input data included a range of patient, lesion and study characteristics, and after running the programme multiple times to create a decision tree analysis, the study investigators reported that the size of the tumour, its location, and the age of the patient were the most important determinants of the final procedural recommendation. Armed with this knowledge, the investigators then gave the algorithm new cases to assess its ability to recommend the optimal treatment option. Its recommendations largely tallied with those of the tumour board: in the first case Uppot described, for example, the predictive model recommended a bridge TACE 44% of the time for that particular patient, matching the recommendation of the multidisciplinary tumour board.

However, Uppot also warned of limitations to the machine learning-based approach. In one case, the tumour board and the algorithm disagreed, with the board favouring stereotactic body radiation therapy (SBRT). The algorithm did not suggest any treatment involving radiation, but in this particular instance the institution’s radiation programme was very strong, a factor not taken into account by the AI model.
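For readers unfamiliar with the method, the sketch below shows, under stated assumptions, what such a random forest workflow can look like: a model is trained on patient-level features, its feature importances are inspected, and a new case is queried for class probabilities over candidate treatments. The data are synthetic and the feature columns and treatment labels are invented for illustration; this is not a reproduction of the Massachusetts General Hospital study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a ~76-patient training set. The feature columns
# (tumour size, segment, age, Child-Pugh score) and the treatment labels
# are assumptions, not the study's actual schema.
rng = np.random.default_rng(0)
n_patients = 76
X = np.column_stack([
    rng.uniform(1.0, 6.0, n_patients),   # tumour size (cm)
    rng.integers(1, 9, n_patients),      # hepatic segment (encoded 1-8)
    rng.integers(45, 85, n_patients),    # patient age (years)
    rng.integers(5, 12, n_patients),     # Child-Pugh score
])
treatments = np.array(["ablation", "bridge_TACE", "SBRT", "resection"])
y = rng.choice(treatments, n_patients)

# Fit a random forest: an ensemble of decision trees over the patient features.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Feature importances play the role of the finding that tumour size, location
# and patient age drove the final recommendation.
for name, importance in zip(["size_cm", "segment", "age", "child_pugh"],
                            model.feature_importances_):
    print(f"{name}: {importance:.2f}")

# Query a new case: class probabilities are analogous to the "recommended a
# bridge TACE 44% of the time" output described above.
new_case = np.array([[3.2, 6, 67, 7]])
for treatment, prob in zip(model.classes_, model.predict_proba(new_case)[0]):
    print(f"{treatment}: {prob:.0%}")
```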

Currently, Uppot and colleagues are collecting and structuring their data to replicate this test with a greater number of HCC training datasets. However, he reiterates to Interventional News the importance of expanding the dataset: “If we want this to be clinically useful and clinically viable, it needs thousands of cases, and not just from Massachusetts General Hospital, but from all of the major hospitals in the USA and around the world. In order for the machine learning software to be smart enough to make recommendations, it needs to be able to compute data independent of one institution.”

In addition to potentially providing objective tumour board recommendations, Chapiro posits a wide range of future clinical uses for AI. These include new imaging and combined clinical-radiological biomarkers for decision support and improved therapy choice, intraprocedural navigation support and improvements to intraprocedural cone beam CT image quality, advanced image guidance for robotic-assisted ablations, and assessment tools for tumour response on imaging. He postulates that using supervised machine learning to predict treatment response to intra-arterial therapies for HCC will be “a first possible application” of this technology, saying, “I expect that some of it will penetrate preprocedural and intraprocedural imaging as soon as within two to three years.”
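As a minimal sketch of what supervised response prediction might look like, the example below trains a simple classifier to separate responders from non-responders using synthetic pre-procedural features. The features, labels and model choice are assumptions for illustration and do not reflect the Yale group’s actual pipeline, which Chapiro describes as relying largely on machine learning and deep learning methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic pre-procedural features (tumour size, AFP level, lesion count) and
# a binary response label standing in for imaging-based response after TACE.
rng = np.random.default_rng(1)
n = 200
size_cm = rng.uniform(1.0, 8.0, n)
afp = rng.lognormal(mean=3.0, sigma=1.0, size=n)   # alpha-fetoprotein, ng/mL
n_lesions = rng.integers(1, 5, n)

# Synthetic ground truth: smaller, solitary tumours respond more often.
logit = 2.0 - 0.5 * size_cm - 0.3 * n_lesions + rng.normal(0, 1, n)
responder = (logit > 0).astype(int)

X = np.column_stack([size_cm, np.log(afp), n_lesions])

# A simple supervised baseline; cross-validation gives an honest accuracy estimate.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, responder, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```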

Telling of his own institution’s involvement in this field, Chapiro comments: “The Yale Interventional Oncology research group is part of a multidisciplinary network at Yale in close collaboration with [the] Biomedical Engineering, the Yale Smilow Cancer Center and the Liver Center. We tackle the full breadth of clinical problems and apply AI-based algorithms, mostly machine learning and deep learning methodologies, to find solutions for automated lesion detection, characterisation, diagnosis and outcome prediction after locoregional therapies such as TACE [transarterial chemoembolisation]. A lot of credit goes to our biomedical engineers, who are truly leading the effort here and collaborate with us closely on this vision.” Most of this work is funded by the National Institutes of Health.

Marketing AI

“Ultimately, we must strongly advocate for AI from a patient’s perspective”, Chapiro explains to this newspaper. “In most of medicine, AI is being marketed and frankly overhyped for its capability to improve workflows and productivity. This is a one-sided narrative and tarnishes good intentions. Data ‘scandals’ and misunderstood knee-jerk reflexes from patient advocacy groups will be the logical consequence and it is our fault. We must explain to the patients that AI will first and foremost improve patient care, make it more affordable and certainly also improve therapy outcomes, help us avoid ill-informed clinical decisions and streamline medicine towards more personalised healthcare.”

When pitching ideas for translating AI research into clinical practice to General Electric, prior to the company’s recent sale of its healthcare division, Uppot noted that the company’s interest had been piqued largely by the prospect of outsourcing the “brain of an academic centre” to a community hospital with fewer resources or fewer specialised IRs.

At Spectrum, it was mentioned that a large proportion of machine learning companies working in the interventional oncology space were based in China or Israel. Chapiro notes that the geography and local culture “matter a lot” in terms of the development and use of AI in healthcare. He says: “The USA is behind in this respect and we may lose the global contest for the most effective and rapid implementation of those cutting-edge technologies for the benefit of patient care. China benefits from a strong centrally regulated agenda and a more ‘flexible’ approach to protected health information and access to it. In my opinion, this should not be a role model for the western world.

“Israel, however, punches above and beyond its demographic weight and has truly become the world power of AI. This is mostly due to a long-standing educational policy which facilitates the creation of the necessary manpower and the strong reliance on high-tech as part of the regional geopolitical needs. This results in a highly developed academic and military-industrial complex geared towards rapid development and implementation of cutting-edge technology, which in turn trickles down into the healthcare sector. Those circumstances meet a socialised healthcare system and flexible, but still ethically defensible, data policies. In order to compete and win, we [the USA] need to cut regulations and bureaucracy, and possibly even fund a centralised national data registry that would facilitate collection of high-quality data for AI research. The current administration already gave the right impulse by supporting AI research but words must be followed by investments. With the Israeli model by our side, America can still become the global leader in AI.”

