Attack Independent Perceptual Improvement of Adversarial Examples
Date: 2022-12-23
Author: Karlı, Berat Tuna
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 209 views, 152 downloads
Deep neural networks (DNNs) are used in a variety of domains with great success; however, these networks have been shown to be vulnerable to additive, non-arbitrary perturbations. In response, several attack and defense mechanisms have been developed; nevertheless, adding crafted perturbations degrades the perceptual quality of images. This study aims to improve the perceptual quality of adversarial examples independently of the attack type, and the integration of two attack-agnostic techniques is proposed for this purpose. The primary technique, Normalized Variance Weighting, improves the perceptual quality of adversarial attacks by applying a variance map that intensifies the perturbations in high-variance zones. This method can be applied to existing adversarial attacks with no additional overhead beyond a matrix multiplication. The secondary technique, the Minimization Method, minimizes the perceptual distance of a successful adversarial example to improve its perceptual quality. This technique can be applied to adversarial samples generated by any type of adversarial attack. Since the primary method is applied during the attack and the secondary method after it, the two methods can be used together in an integrated adversarial attack setting. Adversarial examples generated by the integration of these methods are shown to exhibit the best perceptual quality as measured by the LPIPS metric.
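To make the two techniques concrete, here is a minimal sketch, assuming a PyTorch image classifier `model` and inputs `x` in [0, 1] with labels `y`. FGSM stands in for the base attack, and the post-hoc step shrinks the perturbation toward the benign image as a simple proxy for the LPIPS minimization, since the abstract does not specify the optimization procedure; the function names (`local_variance`, `nvw_fgsm`, `minimize_perturbation`) and hyperparameters are illustrative, not the thesis implementation.

```python
# Hedged sketch, not the thesis code: names and hyperparameters below
# (local_variance, nvw_fgsm, minimize_perturbation, k=5, eps=8/255)
# are illustrative assumptions.
import torch
import torch.nn.functional as F

def local_variance(x, k=5):
    """Per-pixel variance over a k x k neighborhood via depthwise box filters."""
    pad, c = k // 2, x.size(1)
    w = torch.ones(c, 1, k, k, device=x.device) / (k * k)
    mean = F.conv2d(x, w, padding=pad, groups=c)
    mean_sq = F.conv2d(x * x, w, padding=pad, groups=c)
    return (mean_sq - mean * mean).clamp(min=0)

def nvw_fgsm(model, x, y, eps=8 / 255):
    """FGSM with variance weighting: the perturbation is scaled element-wise
    by a normalized variance map, concentrating the attack budget in
    high-variance (textured) regions where changes are less perceptible."""
    var = local_variance(x)
    weight = var / (var.amax(dim=(1, 2, 3), keepdim=True) + 1e-12)
    x_req = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad, = torch.autograd.grad(loss, x_req)
    # The only overhead on top of the base attack is this element-wise product.
    return (x + eps * weight * grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def minimize_perturbation(model, x, x_adv, y, steps=20):
    """Post-hoc minimization: shrink a successful perturbation toward the
    benign image (a proxy for reducing perceptual distance such as LPIPS)
    while the example still fools the classifier."""
    delta, best = x_adv - x, x_adv
    for alpha in torch.linspace(1.0, 0.0, steps):
        cand = (x + alpha * delta).clamp(0, 1)
        if (model(cand).argmax(dim=1) != y).all():
            best = cand  # smaller perturbation, still adversarial
        else:
            break
    return best
```

Under this sketch the two steps compose as the abstract describes: `minimize_perturbation(model, x, nvw_fgsm(model, x, y), y)` applies the weighting during the attack and the minimization after it.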
Subject Keywords: Deep learning, Adversarial attacks, Perceptual quality, Image classification
URI: https://hdl.handle.net/11511/101257
Collections: Graduate School of Informatics, Thesis
Suggestions (OpenMETU Core)
Improving Perceptual Quality of Spatially Transformed Adversarial Examples
Aydın, Ayberk; Temizel, Alptekin; Department of Modeling and Simulation (2022-8)
Deep neural networks are known to be vulnerable to additive adversarial perturbations. The amount of these additive perturbations is generally quantified using Lp metrics over the difference between adversarial and benign examples. However, even when the measured perturbations are small, they tend to be noticeable to human observers, since Lp distance metrics are not representative of human perception. Spatially transformed examples work by distorting pixel locations instead of applying an additive perturba...
MetaLabelNet: Learning to Generate Soft-Labels From Noisy-Labels
Algan, Gorkem; Ulusoy, İlkay (2022-01-01)
Real-world datasets commonly have noisy labels, which negatively affects the performance of deep neural networks (DNNs). In order to address this problem, we propose a label-noise-robust learning algorithm in which the base classifier is trained on soft-labels that are produced according to a meta-objective. In each iteration, before conventional training, the meta-training loop updates the soft-labels so that the resulting gradient updates on the base classifier would yield minimum loss on the meta-data. Soft-labels...
HYPERSPECTRAL CLASSIFICATION USING STACKED AUTOENCODERS WITH DEEP LEARNING
Özdemir, Ataman; Cetin, C. Yasemin Yardimci (2014-06-27)
In this study, stacked autoencoders, which are widely utilized in deep learning research, are applied to the remote sensing domain for hyperspectral classification. High-dimensional hyperspectral data is an excellent candidate for deep learning methods. However, there are no works in the literature that focus on such deep learning approaches for hyperspectral imagery. This study aims to fill this gap by utilizing stacked autoencoders. Experiments are conducted on the Pavia University scene. Using stacked autoencode...
Improved Knowledge Distillation with Dynamic Network Pruning
Şener, Eren; Akbaş, Emre (2022-9-30)
Deploying convolutional neural networks to mobile or embedded devices is often prohibited by limited memory and computational resources. This is particularly problematic for the most successful networks, which tend to be very large and require long inference times. Many alternative approaches have been developed for compressing neural networks based on pruning, regularization, quantization or distillation. In this paper, we propose the “Knowledge Distillation with Dynamic Pruning” (KDDP), which trains a dyn...
Deep Learning-Based Hybrid Approach for Phase Retrieval
IŞIL, ÇAĞATAY; Öktem, Sevinç Figen; KOÇ, AYKUT (2019-06-24)
We develop a phase retrieval algorithm that utilizes the hybrid-input-output (HIO) algorithm with a deep neural network (DNN). The DNN architecture, which is trained to remove the artifacts of HIO, is used iteratively with HIO to improve the reconstructions. The results demonstrate the effectiveness of the approach with little additional cost.
Citation (IEEE)
B. T. Karlı, “Attack Independent Perceptual Improvement of Adversarial Examples,” M.S. - Master of Science, Middle East Technical University, 2022.