
Explainable AI Emerges as a Trust Issue in Lung Cancer Diagnostics

Artificial intelligence used in lung cancer diagnostics is now under critical review, with the focus shifting from accuracy alone to explainability and reliability. In a recent article, an international research group dissects how explainable AI is becoming part of cancer image analysis, and what problems it reveals along the way.

Deep neural networks used to interpret cancer images can already rival specialists in diagnostic accuracy. However, they are so complex that it is difficult for doctors to understand what the models are actually basing their decisions on. This opacity blurs the division of responsibility and raises ethical questions about the use of AI in patient care.

The review brings together developments from recent years, in which explainable methods have been applied to three tasks in particular. The first is weakly supervised lesion localization: the model attempts to locate tumors in lung images even though it has been given only a patient-level diagnosis, not an exact delineation of the tumor boundary. The second is predictive modeling, which investigates how AI-derived image features relate to disease progression. The third is survival analysis, which aims to link patterns learned from image data to patient survival time.
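To give a flavor of the first task, one widely used technique for weakly supervised localization is class activation mapping (CAM), which recovers a spatial heatmap from a network trained only on image-level labels. The sketch below is illustrative and not drawn from the review itself: the feature maps and classifier weights are random stand-ins for a real model's last convolutional layer and its global-average-pooling classifier.

```python
import numpy as np

# Illustrative sketch of class activation mapping (CAM), a common
# approach to weakly supervised lesion localization: a spatial heatmap
# is recovered even though training used only image-level diagnoses.
# All values here are random placeholders, not a trained model.
rng = np.random.default_rng(0)

# Feature maps from the last convolutional layer: (channels, height, width)
features = rng.random((8, 16, 16))

# Classifier weights over globally average-pooled features: (classes, channels)
weights = rng.random((2, 8))

def class_activation_map(features, class_weights):
    """Weight each feature channel by its contribution to one class score."""
    # Sum over the channel axis: (channels,) x (channels, H, W) -> (H, W)
    cam = np.tensordot(class_weights, features, axes=([0], [0]))
    cam -= cam.min()
    return cam / cam.max()  # normalize to [0, 1] for display as a heatmap

heatmap = class_activation_map(features, weights[1])  # map for class 1
print(heatmap.shape)  # (16, 16): same spatial grid as the feature maps
```

The resulting heatmap can be upsampled and overlaid on the original scan, which is exactly the kind of visual explanation the review discusses as a bridge to clinical assessment.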

According to the researchers, explainability makes it easier for doctors to assess whether a model's decisions are medically credible and ethically acceptable. Explainable AI can thus serve as a bridge between effective but opaque models and clinical practice.

Source: A critical review of explainable deep learning in lung cancer diagnosis, Artificial Intelligence Review.

This text was generated with AI assistance and may contain errors. Please verify details from the original source.

Original research: A critical review of explainable deep learning in lung cancer diagnosis
Publisher: Artificial Intelligence Review
Authors: Emmanouil Koutoulakis, Eleftherios Trivizakis, ... Kostas Marias
December 26, 2025