A concept-aware explainability method for convolutional neural networks
Date
2025-03-01
Author
Gurkan, Mustafa Kagan
Arica, Nafiz
Yarman Vural, Fatoş Tunay
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Although Convolutional Neural Networks (CNN) outperform classical models in a wide range of Machine Vision applications, their restricted interpretability and lack of comprehensibility in reasoning raise concerns about security, reliability, and safety. Consequently, there is a growing need for research to improve explainability and address these limitations. In this paper, we propose a concept-based method, called Concept-Aware Explainability (CAE), to provide a verbal explanation for the predictions of pre-trained CNN models. A new measure, called detection score mean, is introduced to quantify the relationship between the filters of the model and a set of pre-defined concepts. Based on the detection score mean values, we define sorted lists of Concept-Aware Filters (CAF) and Filter-Activating Concepts (FAC). These lists are used to generate explainability reports, where we can explain, analyze, and compare models in terms of the concepts embedded in the image. The proposed explainability method is compared to the state-of-the-art methods to explain ResNet18 and VGG16 models, pre-trained on ImageNet and Places365-Standard datasets. Two popular metrics, namely, the number of unique detectors and the number of detecting filters, are used to make a quantitative comparison. Superior performances are observed for the suggested CAE, when compared to Network Dissection (NetDis) (Bau et al., in: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2017), Net2Vec (Fong and Vedaldi, in: Paper presented at IEEE conference on computer vision and pattern recognition (CVPR), 2018), and CLIP-Dissect (CLIP-Dis) (Oikarinen and Weng, in: The 11th international conference on learning representations (ICLR), 2023) methods.
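The record does not give the exact definition of the detection score mean or of the CAF/FAC lists, so the following is only a minimal illustrative sketch: it assumes an IoU-style score between a thresholded filter activation map and a binary concept mask, averaged over images, and then sorts filters per concept (CAF) and concepts per filter (FAC). All function and variable names here are hypothetical and may differ from the paper's formulation.

```python
import numpy as np

# Hypothetical sketch of a filter-concept association score and CAF/FAC lists.
# The paper's actual "detection score mean" may be defined differently.

def detection_score(activation, concept_mask, threshold=0.5):
    """IoU between a thresholded activation map and a binary concept mask."""
    act_mask = activation >= threshold * activation.max()
    intersection = np.logical_and(act_mask, concept_mask).sum()
    union = np.logical_or(act_mask, concept_mask).sum()
    return intersection / union if union > 0 else 0.0

def detection_score_mean(activations, concept_masks, threshold=0.5):
    """Mean detection score of one filter for one concept over a set of images."""
    scores = [detection_score(a, m, threshold)
              for a, m in zip(activations, concept_masks)]
    return float(np.mean(scores)) if scores else 0.0

def rank_filters_and_concepts(score_matrix, filter_ids, concept_names):
    """From a (filters x concepts) matrix of mean detection scores, build
    sorted Concept-Aware Filter (CAF) and Filter-Activating Concept (FAC) lists."""
    caf = {c: sorted(zip(filter_ids, score_matrix[:, j]),
                     key=lambda t: t[1], reverse=True)
           for j, c in enumerate(concept_names)}
    fac = {f: sorted(zip(concept_names, score_matrix[i, :]),
                     key=lambda t: t[1], reverse=True)
           for i, f in enumerate(filter_ids)}
    return caf, fac

# Toy usage with random data standing in for CNN activations and concept masks.
rng = np.random.default_rng(0)
n_filters, n_images, h, w = 4, 5, 8, 8
filter_ids = [f"filter_{i}" for i in range(n_filters)]
concept_names = ["sky", "grass", "wheel"]

scores = np.zeros((n_filters, len(concept_names)))
for i in range(n_filters):
    for j in range(len(concept_names)):
        acts = [rng.random((h, w)) for _ in range(n_images)]
        masks = [rng.random((h, w)) > 0.5 for _ in range(n_images)]
        scores[i, j] = detection_score_mean(acts, masks)

caf, fac = rank_filters_and_concepts(scores, filter_ids, concept_names)
print("Top filter for 'sky':", caf["sky"][0])
print("Top concept for filter_0:", fac["filter_0"][0])
```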
Subject Keywords
Concept-based explanation, Convolutional neural networks, Filter-concept association, Model comparison via explanations
URI
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85217534232&origin=inward
https://hdl.handle.net/11511/113603
Journal
Machine Vision and Applications
DOI
https://doi.org/10.1007/s00138-024-01653-w
Collections
Department of Computer Engineering, Article
Citation Formats
IEEE
M. K. Gurkan, N. Arica, and F. T. Yarman Vural, “A concept-aware explainability method for convolutional neural networks,” Machine Vision and Applications, vol. 36, no. 2, pp. 0–0, 2025, Accessed: 00, 2025. [Online]. Available: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85217534232&origin=inward.