Attack Independent Perceptual Improvement of Adversarial Examples

2022-12-23
Karlı, Berat Tuna
Deep Neural Networks (DNNs) are used with great success in a variety of domains; however, these networks have been shown to be vulnerable to additive, non-arbitrary perturbations. In response, several attack and defense mechanisms have been developed; nevertheless, adding crafted perturbations degrades the perceptual quality of images. This study aims to improve the perceptual quality of adversarial examples independently of the attack type, and the integration of two attack-agnostic techniques is proposed for this purpose. The first technique, Normalized Variance Weighting, improves the perceptual quality of adversarial attacks by applying a variance map that intensifies the perturbations in high-variance regions. This method can be applied to existing adversarial attacks with no additional overhead beyond a matrix multiplication. The second technique, the Minimization Method, minimizes the perceptual distance of a successful adversarial example to improve its perceptual quality. This technique can be applied to adversarial samples generated by any type of adversarial attack. Since the first method is applied during the attack and the second after it, the two methods can be used together in an integrated adversarial attack setting. It is shown that adversarial examples generated by integrating these methods exhibit the best perceptual quality as measured by the LPIPS metric.
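As an illustration of how the two techniques combine, the sketch below (not the thesis implementation) shows a variance-weighted FGSM-style step followed by a post-attack search that shrinks the perturbation toward the benign image while tracking the LPIPS distance. It assumes a PyTorch setting and the lpips package; the function names (normalized_variance_map, variance_weighted_fgsm, minimize_perceptual_distance), the kernel size, and the interpolation-based minimization are illustrative assumptions.

# Illustrative sketch only (not the thesis code), assuming a PyTorch setting
# and the `lpips` package. All names below are hypothetical.
import torch
import torch.nn.functional as F
import lpips  # assumed package providing the LPIPS perceptual metric

def normalized_variance_map(x, kernel_size=7):
    # Per-pixel local variance of the image, normalized to [0, 1].
    pad = kernel_size // 2
    mean = F.avg_pool2d(x, kernel_size, stride=1, padding=pad)
    mean_sq = F.avg_pool2d(x * x, kernel_size, stride=1, padding=pad)
    var = (mean_sq - mean * mean).clamp(min=0).mean(dim=1, keepdim=True)
    return var / (var.amax(dim=(2, 3), keepdim=True) + 1e-12)

def variance_weighted_fgsm(model, x, y, eps=8 / 255):
    # FGSM-style step whose perturbation is scaled element-wise by the
    # variance map, concentrating noise in high-variance (textured) regions.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    weight = normalized_variance_map(x.detach())
    return (x.detach() + eps * weight * grad.sign()).clamp(0, 1)

def minimize_perceptual_distance(model, x, x_adv, y, steps=20):
    # Post-attack step: move the adversarial example back toward the benign
    # image and keep the candidate with the lowest LPIPS distance that is
    # still misclassified.
    dist = lpips.LPIPS(net='alex').to(x.device)
    scale = lambda t: t * 2 - 1  # LPIPS expects inputs in [-1, 1]
    with torch.no_grad():
        best, best_d = x_adv, dist(scale(x_adv), scale(x)).mean()
        for alpha in torch.linspace(1.0, 0.0, steps):
            cand = (x + alpha * (x_adv - x)).clamp(0, 1)
            if model(cand).argmax(dim=1).ne(y).all():
                d = dist(scale(cand), scale(x)).mean()
                if d < best_d:
                    best, best_d = cand, d
    return best

In an integrated setting, the single FGSM step would be replaced by the attack of choice (e.g. an iterative PGD loop) and minimize_perceptual_distance would be applied to its output; the variance map itself only adds one element-wise multiplication per step, consistent with the negligible overhead noted above.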

Suggestions

Improving Perceptual Quality of Spatially Transformed Adversarial Examples
Aydın, Ayberk; Temizel, Alptekin; Department of Modeling and Simulation (2022-8)
Deep neural networks are known to be vulnerable to additive adversarial perturbations. The amount of these additive perturbations is generally quantified using Lp metrics over the difference between adversarial and benign examples. However, even when the measured perturbations are small, they tend to be noticeable by human observers since Lp distance metrics are not representative of human perception. Spatially transformed examples work by distorting pixel locations instead of applying an additive perturba...
MetaLabelNet: Learning to Generate Soft-Labels From Noisy-Labels
Algan, Gorkem; Ulusoy, İlkay (2022-01-01)
Real-world datasets commonly have noisy labels, which negatively affect the performance of deep neural networks (DNNs). In order to address this problem, we propose a label-noise-robust learning algorithm, in which the base classifier is trained on soft-labels that are produced according to a meta-objective. In each iteration, before conventional training, the meta-training loop updates the soft-labels so that the resulting gradient updates on the base classifier would yield minimum loss on the meta-data. Soft-labels...
HYPERSPECTRAL CLASSIFICATION USING STACKED AUTOENCODERS WITH DEEP LEARNING
Özdemir, Ataman; Cetin, C. Yasemin Yardimci (2014-06-27)
In this study, stacked autoencoders, which are widely utilized in deep learning research, are applied to the remote sensing domain for hyperspectral classification. High-dimensional hyperspectral data is an excellent candidate for deep learning methods. However, there are no works in the literature that focus on such deep learning approaches for hyperspectral imagery. This study aims to fill this gap by utilizing stacked autoencoders. Experiments are conducted on the Pavia University scene. Using stacked autoencode...
Deep Learning-Based Hybrid Approach for Phase Retrieval
IŞIL, ÇAĞATAY; Öktem, Sevinç Figen; KOÇ, AYKUT (2019-06-24)
We develop a phase retrieval algorithm that utilizes the hybrid-input-output (HIO) algorithm with a deep neural network (DNN). The DNN architecture, which is trained to remove the artifacts of HIO, is used iteratively with HIO to improve the reconstructions. The results demonstrate the effectiveness of the approach with little additional cost.
Imperceptible Adversarial Examples by Spatial Chroma-Shift
Aydın, Ayberk; Sen, Deniz; Karli, Berat Tuna; Hanoglu, Oguz; Temizel, Alptekin (2021-10-20)
Deep Neural Networks have been shown to be vulnerable to various kinds of adversarial perturbations. In addition to widely studied additive noise-based perturbations, adversarial examples can also be created by applying a per-pixel spatial drift on input images. While spatial transformation based adversarial examples look more natural to human observers due to the absence of additive noise, they still possess visible distortions caused by spatial transformations. Since the human vision is more sensitive to the ...
Citation Formats
B. T. Karlı, “Attack Independent Perceptual Improvement of Adversarial Examples,” M.S. - Master of Science, Middle East Technical University, 2022.