OpenMETU
DEEP LEARNING FOR BRAIN TUMOR CLASSIFICATION: ROBUSTNESS AGAINST ADVERSARIAL ATTACKS AND DEFENSIVE STRATEGIES
Download: Hafsa Khalid_Thesis.pdf
Date: 2025-12-30
Author: Khalid, Hafsa
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 179 views, 0 downloads
Abstract
In this study, a Customized Convolutional Neural Network (C-CNN) and two transfer learning–based models, ResNet-50 and EfficientNetV2S, are proposed for classifying three brain tumor types (meningioma, glioma, and pituitary) using the publicly available Figshare brain tumor MRI dataset. The robustness of the models is assessed under six adversarial attacks that reflect real-world degradations in clinical MRI data: the Fast Gradient Sign Method (FGSM), motion blur, partial occlusion, JPEG compression artifacts, Gaussian noise, and adversarial boundary noise. To reduce the vulnerability of deep learning models to adversarial perturbations, two defense strategies are implemented. The first, adversarial attack-driven data augmentation, incorporates adversarially attacked images into the clean training dataset to enhance robustness. The second, defensive distillation, transfers knowledge from a teacher model to a student model by learning softened probability outputs. On the clean dataset, C-CNN, ResNet-50, and EfficientNetV2S achieved test accuracies of 94.42%, 98.03%, and 85%, respectively, using a 70% training, 15% validation, and 15% testing split. However, all models exhibited substantial performance degradation under adversarial attacks, demonstrating their limited robustness and the need for effective defense mechanisms. After applying the defense strategies, model performance was evaluated using two approaches. In the first, clean and attacked images were randomly distributed across all datasets; under this setting, adversarial data augmentation achieved test accuracies of 97.15%, 98.30%, and 88.23% for C-CNN, ResNet-50, and EfficientNetV2S, respectively, while defensive distillation achieved 95.09%, 97.63%, and 84.95%. In the second approach, a fixed 16:84 ratio of attacked to clean images was maintained; here, data augmentation accuracies decreased to 95.82% for C-CNN and 97.63% for ResNet-50, while EfficientNetV2S improved to 85.94%. Defensive distillation resulted in accuracies of 94.36%, 97.51%, and 84.18%, respectively.
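To make the two core techniques named in the abstract concrete, the following is a minimal, self-contained sketch (not the thesis code) of an FGSM input perturbation and the temperature-softened softmax behind defensive distillation. A toy logistic-regression "model" stands in for the CNNs, and the weights, input, and epsilon are invented for illustration; the thesis applies these ideas to MRI images and deep networks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(np.dot(w, x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, w, y, eps):
    # Gradient of the loss w.r.t. the INPUT (not the weights);
    # for logistic regression: dL/dx = (sigmoid(w.x) - y) * w.
    grad_x = (sigmoid(np.dot(w, x)) - y) * w
    # FGSM step: nudge each input feature by eps in the sign
    # direction that increases the loss.
    return x + eps * np.sign(grad_x)

def soft_softmax(logits, T):
    # Defensive distillation softens the teacher's outputs by
    # dividing logits by a temperature T > 1 before the softmax;
    # the student is trained on these softened probabilities.
    z = np.exp((logits - np.max(logits)) / T)
    return z / z.sum()

# Toy values, assumed purely for demonstration.
w = np.array([1.5, -2.0, 0.5])   # "model" weights
x = np.array([0.2, 0.4, 0.9])    # "image" features
y = 1.0                          # true label

x_adv = fgsm(x, w, y, eps=0.1)
print(loss(x, w, y), loss(x_adv, w, y))  # adversarial loss exceeds the clean loss

logits = np.array([2.0, 1.0, 0.1])
print(soft_softmax(logits, T=1.0))   # sharp distribution
print(soft_softmax(logits, T=10.0))  # flatter, "softened" distribution
```

At T=1 the softmax reduces to the standard one; raising T flattens the distribution, so the student sees the teacher's relative class similarities rather than near-one-hot targets, which is what makes gradients less exploitable by attacks such as FGSM.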
Subject Keywords: Adversarial Attacks, Artificial Intelligence (AI), Brain tumor (BT), Customized Convolutional Neural Network (C-CNN), Deep Learning (DL)
URI: https://hdl.handle.net/11511/118756
Collections: Northern Cyprus Campus, Thesis
Citation (IEEE)
H. Khalid, “DEEP LEARNING FOR BRAIN TUMOR CLASSIFICATION: ROBUSTNESS AGAINST ADVERSARIAL ATTACKS AND DEFENSIVE STRATEGIES,” M.S. - Master of Science, Middle East Technical University, 2025.