AI ethics session suggests hybrid approach is the future for IO


A keynote session on the future of interventional oncology (IO) brought artificial intelligence (AI) to the forefront of discussion, highlighting both its potential and the ethical, legal, and regulatory challenges it presents. During his talk at the European Conference on Interventional Oncology (ECIO; 12–16 April, Rotterdam, The Netherlands), Gianpaolo Carrafiello (University of Milan, Milan, Italy) discussed the concept of “hybrid intelligence” as the likely future model, in which AI augments, rather than replaces, clinical expertise.

Beginning with a historical definition from AI pioneer John McCarthy, who described AI as “the science and engineering of making intelligent machines,” the speaker framed the discussion around AI’s emerging applications in IO. These include image analysis, segmentation, predictive modelling, decision support systems, and training for interventional radiologists. Particularly in education, AI has shown potential to help standardise and accelerate the training of new interventionists.

The session touched on the responsibility associated with deploying AI in clinical environments. “The decision to accept or override AI-generated recommendations requires careful justification,” Carrafiello explained, noting that regulation may demand well-documented reasoning from physicians who diverge from algorithmic suggestions.

Responsibility in AI use was broken down into three core elements: accountability, culpability, and liability. Carrafiello said that, while AI systems can be held accountable to the extent of explaining their processes and outputs, they cannot be held culpable, “as they lack consciousness, intent, or free will”. Legal liability, however, remains a more complex issue, often extending to developers, institutions, or clinicians depending on the case and regulatory framework.

The speaker identified transparency as a key principle in bridging these gaps. Carrafiello advocated for moving from a “black box” model, where internal AI decision-making is “opaque”, to a “glass box” approach, where processes are “visible and understandable”.

“This is essential for building explainable AI systems that foster trust and accountability,” he said.

Yet, Carrafiello believes that transparency may come at a ‘cost’: “Simplifying models to enhance explainability can sometimes reduce predictive accuracy.” In healthcare, where outcomes may depend on precise treatment planning, this trade-off presents a practical challenge.

The legal frameworks guiding AI use in Europe were another focal point. The European Union’s Medical Device Regulation (MDR) currently classifies certain AI tools as medical devices, requiring CE marking unless developed exclusively for in-house use. Additionally, the 2021 EU Artificial Intelligence Act introduced a risk-based regulatory system, categorising AI in healthcare as “high risk” due to its potential impact on patient safety.

However, the speaker argued that these policies still fall short: “Current EU legislation does not sufficiently address the specific regulatory needs for AI in clinical specialities like interventional oncology,” Carrafiello noted. These legislative shortfalls may hinder the development and deployment of AI tools in IO, and leave unresolved questions around post-market surveillance and algorithm bias, he added.

Expanding on bias in AI algorithms, Carrafiello pointed out that “AI algorithms may be less accurate for underrepresented or marginalised populations, reinforcing systemic inequities in access and outcomes,” adding that this is particularly “troubling in oncology, where timely interventions can dramatically alter prognosis”.

The speaker then urged that interventional radiologists retain ultimate responsibility, explaining that “AI should be a support tool, not a decision-maker.” Carrafiello advocated for ongoing development in AI systems in IO, regular audits of AI-driven decisions, and transparent communication with patients about the use of AI in their care.

Carrafiello then suggested that hybrid intelligence, defined as the combination of human expertise with AI augmentation, will define the future of IO. However, he warned, interventionists who fail to adapt risk being left behind. “AI will not replace interventional radiologists, but those who use AI will replace those who don’t,” he said.
