Evaluating the quality of visual explanations on chest X-ray images for thorax diseases classification
Date: 2024-01-01
Authors: Rahimiaghdam, Shakiba; Alemdar, Hande
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Deep learning models are extensively used but often lack transparency due to their complex internal mechanics. To bridge this gap, the field of explainable AI (XAI) strives to make these models more interpretable. However, a significant obstacle in XAI is the absence of quantifiable metrics for evaluating explanation quality. Existing techniques, reliant on manual assessment or inadequate metrics, face limitations in scalability, reproducibility, and trustworthiness. Recognizing these issues, this study specifically addresses the quality assessment of visual explanations in medical imaging, where interpretability profoundly influences diagnostic accuracy and trust in AI-assisted decisions. Introducing novel criteria such as informativeness, localization, coverage, multi-target capturing, and proportionality, this work presents a comprehensive method for the objective assessment of various explainability algorithms. These newly introduced criteria aid in identifying optimal evaluation metrics. The study expands the domain’s analytical toolkit by examining existing metrics prevalent in recent work on similar applications and by proposing new ones. Rigorous analysis led to selecting the Jensen–Shannon divergence (JS_DIV) as the most effective metric for visual explanation quality. The method is applied to the multi-label, multi-class diagnosis of thoracic diseases: a classifier trained on the CheXpert dataset is interpreted with local interpretable model-agnostic explanations (LIME) under diverse segmentation strategies. A qualitative analysis on an unseen subset of the VinDr-CXR dataset evaluates these metrics, confirming the superiority of JS_DIV. A subsequent quantitative analysis optimizes LIME’s hyper-parameters and benchmarks its performance across various segmentation algorithms, underscoring the utility of an objective assessment metric in practical applications.
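For readers who want to experiment with the scoring step described above, the sketch below shows one common formulation of the Jensen–Shannon divergence between an explanation heatmap and an annotated region: normalize both maps into probability distributions, then compare them. This is a minimal, hypothetical rendering, not the paper’s exact JS_DIV definition; the function name js_div, the array shapes, and the toy data are all assumptions for illustration.

```python
# Minimal sketch (assumed formulation): score a saliency map against a
# ground-truth annotation mask with the Jensen-Shannon divergence.
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_div(saliency: np.ndarray, mask: np.ndarray, eps: float = 1e-12) -> float:
    """JS divergence between an explanation heatmap and a ground-truth
    mask, both flattened and normalized into probability distributions.
    Lower values mean the explanation mass falls on the annotated region."""
    p = saliency.ravel().astype(np.float64) + eps
    q = mask.ravel().astype(np.float64) + eps
    p /= p.sum()
    q /= q.sum()
    # SciPy returns the JS *distance* (the square root of the divergence),
    # so square it; base 2 bounds the result to [0, 1].
    return jensenshannon(p, q, base=2) ** 2

# Toy check: an explanation concentrated on the annotated quadrant should
# score much lower (better) than a uniform one.
rng = np.random.default_rng(0)
mask = np.zeros((224, 224))
mask[:112, :112] = 1.0                # bounding-box style annotation
focused = rng.random((224, 224)) * 0.1
focused[:112, :112] += 1.0            # saliency mass on the right region
uniform = np.ones((224, 224))
print(js_div(focused, mask), js_div(uniform, mask))
```

In the paper’s pipeline the heatmaps come from LIME; in the lime package, LimeImageExplainer.explain_instance accepts a segmentation_fn argument, which is one way to swap in the different segmentation algorithms the study benchmarks.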
Subject Keywords: Deep neural networks, Explainable artificial intelligence, Machine learning, Medical image classification
URI:
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85187109541&origin=inward
https://hdl.handle.net/11511/109008
Journal: Neural Computing and Applications
DOI: https://doi.org/10.1007/s00521-024-09587-0
Collections: Department of Computer Engineering, Article
Citation Formats
IEEE:
S. Rahimiaghdam and H. Alemdar, “Evaluating the quality of visual explanations on chest X-ray images for thorax diseases classification,” Neural Computing and Applications, pp. 0–0, 2024, Accessed: 00, 2024. [Online]. Available: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85187109541&origin=inward.