Cross-Modal Learning via Adversarial Loss and Covariate Shift for Enhanced Liver Segmentation
Date
2024-01-01
Author
Ozkan, Savas
Selver, Mustafa Alper
Baydar, Bora
Kavur, Ali Emre
Candemir, Cemre
Akar, Gözde
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Despite the widespread use of deep learning methods for semantic segmentation from single imaging modalities, their ability to exploit multi-domain data remains limited. However, the decision-making process in radiology is often guided by data from multiple sources, such as the pre-operative evaluation of donors for living donor liver transplantation. In such cases, the cross-modality performance of deep models becomes more important. Unfortunately, the domain dependency of existing techniques limits their clinical acceptability, primarily confining their performance to individual domains. This issue can be formulated as a multi-source domain adaptation problem, an emerging field driven mainly by the diverse pattern characteristics exhibited by cross-modality data. This paper presents a novel method that learns robust representations from unpaired cross-modal (CT-MR) data by encapsulating distinct and shared patterns from multiple modalities. In our solution, the covariate shift property is maintained through structural modifications to our architecture, and an adversarial loss is adopted to boost the representation capacity. As a result, sparse and rich representations are obtained. A further advantage of our model is that no information about modalities is needed during training or inference. Tests on unpaired CT and MR liver data from the cross-modality task of the CHAOS grand challenge demonstrate that our approach achieves state-of-the-art results by a large margin in both individual metrics and overall scores.
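The abstract describes the approach only at a high level. As a rough illustration of its core idea, an adversarial loss that pushes a shared encoder toward modality-invariant features for segmentation, the sketch below shows one way such a training step could look. It is a minimal sketch under assumed choices (PyTorch, a toy encoder/decoder, a flipped-label adversarial objective, an arbitrary 0.1 loss weight), not the authors' implementation, which additionally preserves the covariate shift property through architectural modifications not reproduced here.

```python
# Minimal, illustrative sketch only -- NOT the paper's implementation.
# Assumptions: PyTorch; 2D single-channel slices; binary liver masks.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared feature extractor for both CT and MR (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Predicts liver-mask logits from the shared features."""
    def __init__(self):
        super().__init__()
        self.head = nn.Conv2d(32, 1, 1)
    def forward(self, f):
        return self.head(f)

class ModalityDiscriminator(nn.Module):
    """Tries to tell CT (label 0) from MR (label 1) using the features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )
    def forward(self, f):
        return self.net(f)

enc, dec, disc = Encoder(), Decoder(), ModalityDiscriminator()
seg_loss = nn.BCEWithLogitsLoss()
adv_loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

def train_step(x, mask, modality):
    """One unpaired batch: x is CT or MR, modality is 0 (CT) or 1 (MR)."""
    # 1) Discriminator learns to identify the modality from frozen features.
    with torch.no_grad():
        f = enc(x)
    d_out = disc(f)
    loss_d = adv_loss(d_out, torch.full_like(d_out, float(modality)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Encoder/decoder segment the liver while fooling the discriminator
    #    (flipped label), pushing the features toward modality invariance.
    f = enc(x)
    d_out = disc(f)
    loss_g = seg_loss(dec(f), mask) \
        + 0.1 * adv_loss(d_out, torch.full_like(d_out, 1.0 - modality))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()  # disc is not stepped here
    return loss_g.item(), loss_d.item()

# Toy usage with random data:
x = torch.randn(2, 1, 64, 64)
mask = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(train_step(x, mask, modality=0))
```

Because neither the encoder nor the decoder in this sketch receives an explicit modality flag, inference is identical for CT and MR inputs, which mirrors the abstract's claim that no modality information is needed during training or inference.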
Subject Keywords
Chaos, Computed tomography, Convolution, Cross-modal learning, CT, Feature extraction, Liver, MR, Semantic segmentation, Task analysis, Training
URI
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85187323058&origin=inward
https://hdl.handle.net/11511/109359
Journal
IEEE Transactions on Emerging Topics in Computational Intelligence
DOI
https://doi.org/10.1109/tetci.2024.3369868
Collections
Department of Electrical and Electronics Engineering, Article
Citation Formats
IEEE
S. Ozkan, M. A. Selver, B. Baydar, A. E. Kavur, C. Candemir, and G. Akar, "Cross-Modal Learning via Adversarial Loss and Covariate Shift for Enhanced Liver Segmentation," IEEE Transactions on Emerging Topics in Computational Intelligence, pp. 0–0, 2024, Accessed: 00, 2024. [Online]. Available: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85187323058&origin=inward.