Artificial intelligence will support clinical decision making in interventional oncology

Aaron Abajian and Julius Chapiro

Aaron Abajian and Julius Chapiro, New Haven, USA, write about results from an early experiment in applying artificial intelligence (AI) and machine learning as a decision support system in interventional oncology to illustrate its potential to overcome rigid staging and scoring systems in the locoregional treatment of liver cancer.

Most patients with hepatocellular carcinoma (HCC) are diagnosed at intermediate to advanced stages of the disease and are no longer amenable to curative surgical or image-guided ablative therapies. In such cases, transarterial chemoembolization (TACE) is the only locoregional therapy that is fully endorsed by guidelines and therapeutic recommendations, such as the most recently updated Barcelona Clinic Liver Cancer (BCLC) staging system by the European Association for the Study of the Liver (EASL).1

Prognostication for therapeutic outcomes in HCC is mostly based on established clinical, laboratory and imaging data points. For practical reasons, such features are mostly organised into scoring systems to help guide therapeutic decisions. As an example, the model for end-stage liver disease (MELD) score and the Child-Pugh classification are commonly used to guide therapeutic decisions and transplant listing, as well as to predict long-term outcomes following any intervention.2 Most such scoring systems are based on a regression of basic laboratory or clinical parameters (eg, MELD=dialysis status, creatinine, bilirubin, INR, and sodium; Child-Pugh=bilirubin, albumin, INR, ascites, encephalopathy). In order to arrive at a therapeutic recommendation, individual scoring systems are often combined with clinical performance scores (such as the ECOG performance status score) and imaging data on tumour size to build straightforward decision support algorithms, such as the BCLC staging system.

While such prognostication algorithms have introduced a certain level of standardisation for data collection, they are mostly based on statistically highly variable and cohort-dependent interpretations of retrospectively collected data. As such, there are currently more than 10 different, mostly regionally variable staging systems for HCC, each with slightly different therapeutic recommendations. This circumstance, in and of itself, indicates that none of them is sufficient to provide universally applicable answers for therapeutic decisions. Instead, most such systems merely group patients into rough categories, in some cases mixing highly heterogeneous patient groups into one class with similar treatment recommendations.
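To illustrate just how simple these regressions are, the following sketch implements the classic three-laboratory MELD formula with the commonly published UNOS coefficients and capping rules (the sodium adjustment used in the current allocation formula is omitted for brevity). This is an illustrative example only, not a clinical tool:

```python
import math

def meld(bilirubin_mg_dl, inr, creatinine_mg_dl, on_dialysis=False):
    """Classic (pre-sodium) MELD score from three laboratory values."""
    # Laboratory values below 1.0 are set to 1.0 to avoid negative logarithms.
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    # Dialysis in the prior week sets creatinine to 4.0 mg/dL;
    # creatinine is otherwise capped at 4.0.
    cr = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(cr)
             + 6.43)
    # The reported score is rounded and bounded to the range 6-40.
    return min(max(round(score), 6), 40)
```

A handful of logarithms and fixed coefficients is the entire model, which makes plain why such scores cannot be expected to capture the full complexity of an individual patient.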
Interestingly, the recently published BRIDGE study, a global survey on patterns of management and treatment of liver cancer, demonstrated a rampant non-adherence to guidelines across the globe.3 In addition, modern clinical patient workup generates an overwhelming, practically indigestible amount of disparate data such as laboratory and imaging parameters, which adds to the institutional and individual variabilities of multidisciplinary decision-making.

With growing needs for more patient-centred and individualised care, artificial intelligence solutions, specifically machine learning algorithms, may help make sense of the potentially nonsensical. In line with our National Institutes of Health (NIH)-funded effort to introduce more quantitative and data-driven solutions for locoregional therapy of liver cancer, we experimented with the idea of a specific statistical application in machine learning to predict outcomes after TACE before the actual procedure.4 In essence, we used AI/machine learning techniques to predict an outcome from observed baseline features. Our study included 36 patients with HCC treated with TACE. We used 25 individual features, including laboratory parameters, clinical performance scores and 3D quantitative imaging biomarkers for tumour enhancement (qEASL values), to train logistic regression (LR) and random forest (RF) models to classify patients as treatment responders or non-responders. The performance of each model was validated using so-called “leave-one-out” cross-validation, in which the model is repeatedly trained on every patient but one and then tested on the held-out patient, so that each patient serves once as an independent test case. As a result, our models successfully predicted tumour response with an overall accuracy of 78% (sensitivity 62.5%, specificity 82.1%, positive predictive value 50%, negative predictive value 88.5%). The presence of cirrhosis and high volumes of contrast-enhancing, presumably viable tumour tissue on baseline MR imaging were among the strongest individual predictors. In addition, therapy with Lipiodol rather than drug-eluting beads was associated with a higher response rate.
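The general workflow of such an experiment can be sketched with scikit-learn. The synthetic cohort below (36 patients, 25 features) mirrors only the shape of our dataset; the feature values, labels and model settings are illustrative assumptions, not the study's actual data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, confusion_matrix

# Synthetic stand-in for the cohort: 36 patients, 25 baseline features.
rng = np.random.default_rng(0)
n_patients, n_features = 36, 25
X = rng.normal(size=(n_patients, n_features))
# Synthetic binary label: 1 = treatment responder, 0 = non-responder.
y = (X[:, 0] + 0.5 * rng.normal(size=n_patients) > 0).astype(int)

results = {}
for name, model in [
    ("LR", LogisticRegression(max_iter=1000)),
    ("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
]:
    # Leave-one-out: train on 35 patients, predict the held-out patient,
    # repeated so every patient is tested exactly once.
    preds = cross_val_predict(model, X, y, cv=LeaveOneOut())
    tn, fp, fn, tp = confusion_matrix(y, preds).ravel()
    results[name] = accuracy_score(y, preds)
    print(f"{name}: accuracy={results[name]:.2f} "
          f"sensitivity={tp / (tp + fn):.2f} "
          f"specificity={tn / (tn + fp):.2f}")
```

Leave-one-out cross-validation is a natural choice for small cohorts such as ours, since it uses nearly all of the data for training in each round while still yielding an out-of-sample prediction for every patient.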

Our model had several limitations, among others the small cohort size and only a limited set of features that were initially considered for the model. However, this early experiment in applying machine learning as a decision support system in interventional oncology illustrates the potential of such methodologies to overcome rigid staging and scoring systems. Ultimately, if trained on larger datasets, such systems may introduce an element of personalised care where current approaches show gaps.

AI could help overcome disadvantages of staging systems

The obvious disadvantage of currently available prognostication and staging systems is that simple models are limited in the amount of information they can capture. For instance, it is unreasonable to expect a five-component model like the Child-Pugh classification to predict a complex and multifactorial outcome, such as tumour response to TACE, with high accuracy. Tumour board participants routinely consider hundreds of pieces of information before arriving at a therapeutic decision, which is ultimately based on their learned professional experience, having seen thousands of cases as a reference. An ideal model would therefore take all patient-centred data as input to predict response to TACE or any other therapy, after being trained on a vast number of previously treated patients with a similar condition. In the future, neural networks will be capable of incorporating much larger numbers of features and extracting predictive patterns from data that were previously invisible. However, we are not quite there yet. In order to train an algorithm to “think” like a tumour board member, it must be exposed to thousands of datasets to learn from.

Interventional oncology has been practised for almost a generation, and while individual clinical trials continue to be mostly underpowered, vast amounts of training data are already available. Our next step as a community of interventional oncologists should therefore be to organise, collect and store a well-characterised multi-institutional database, which would enable us to introduce and study advanced machine learning-based solutions to improve clinical decision making. Similar examples, such as the United Network for Organ Sharing (UNOS) database, exist and may inspire us to pool the resources necessary for success. Only then will artificial intelligence truly be given a chance to add value to the way we practise interventional oncology.

References

1 European Association for the Study of the Liver: EASL clinical practice guidelines: Management of hepatocellular carcinoma. J Hepatol 2018.

2 Asrani SK, Kamath PS: Model for end-stage liver disease score and MELD exceptions: 15 years later. Hepatol Int 2015;9:346–54.

3 Park JW, Chen M, Colombo M, Roberts LR, Schwartz M, Chen PJ, Kudo M, Johnson P, Wagner S, Orsini LS, Sherman M: Global patterns of hepatocellular carcinoma management from diagnosis to death: The BRIDGE study. Liver Int 2015;35:2155–66.

4 Abajian A, Murali N, Savic LJ, et al: Predicting treatment response to intra-arterial therapies for hepatocellular carcinoma with the use of supervised machine learning-an artificial intelligence concept. J Vasc Interv Radiol 2018;29:850–857.e1.

Aaron Abajian is a diagnostic radiology resident at the Department of Radiology, University of Washington. He has no disclosures pertaining to this article.

Julius Chapiro is research faculty and interventional radiology resident, Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, USA. He has received research grant support from Philips Healthcare, Boston Scientific, Guerbet, Rolf W Günther Foundation, German-Israeli Foundation for Scientific Research and is a consultant to Guerbet, Eisai and Philips.

