Improving Perceptual Quality of Spatially Transformed Adversarial Examples

2022-8
Aydın, Ayberk
Deep neural networks are known to be vulnerable to additive adversarial perturbations. The magnitude of these additive perturbations is generally quantified using Lp metrics over the difference between adversarial and benign examples. However, even when the measured perturbations are small, they tend to be noticeable to human observers, since Lp distance metrics are not representative of human perception. Spatially transformed examples work by distorting pixel locations instead of applying an additive perturbation or altering the pixel values directly, which produces adversarial examples with improved visual quality. However, spatial transformations produce visible non-smooth distortions on the luminance channel and require a smoothness regularization over the applied flow field to improve visual quality. On the other hand, humans are less sensitive to changes in the chrominance components of visual media, such as resolution loss or pixel shifts within a constrained neighborhood. Motivated by these observations, we propose a novel variation of spatially transformed adversarial examples that applies spatial transformations to the chrominance channels of perceptual colorspaces, such as YCbCr and CIELAB, to generate adversarial examples with high perceptual quality. Moreover, we find that the visual quality of these examples can be further improved by limiting the magnitude of the applied spatial transformations. In a targeted white-box attack setting, the proposed method obtains competitive fooling rates, and experimental evaluations show that it has favorable results in terms of approximate perceptual distance between benign and adversarial images.
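The core idea of the abstract can be sketched in a few lines: convert the image to a perceptual colorspace, warp only the chrominance channels with a bounded flow field, and leave luminance untouched. The sketch below is a minimal illustration under stated assumptions, not the thesis implementation: it uses the BT.601 full-range YCbCr conversion, a plain bilinear warp, and a hypothetical `eps` bound standing in for the magnitude limit on the flow field; in the actual attack, the flow would be optimized against the target network rather than supplied by hand.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """RGB in [0, 1] -> YCbCr (BT.601, full range)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(img):
    """Inverse of rgb_to_ycbcr, clipped back to the valid RGB range."""
    y, cb, cr = img[..., 0], img[..., 1], img[..., 2]
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def warp_bilinear(channel, flow):
    """Resample a 2-D channel at positions displaced by flow (H, W, 2)."""
    h, w = channel.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    src_y = np.clip(ys + flow[..., 0], 0, h - 1)
    src_x = np.clip(xs + flow[..., 1], 0, w - 1)
    y0, x0 = np.floor(src_y).astype(int), np.floor(src_x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = src_y - y0, src_x - x0
    top = channel[y0, x0] * (1 - wx) + channel[y0, x1] * wx
    bot = channel[y1, x0] * (1 - wx) + channel[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def chroma_shift(img_rgb, flow, eps=1.0):
    """Apply a flow field (clipped to +/- eps pixels) to Cb and Cr only;
    the luminance channel Y is left untouched."""
    flow = np.clip(flow, -eps, eps)   # bound the spatial distortion magnitude
    ycc = rgb_to_ycbcr(img_rgb)
    for c in (1, 2):                  # chrominance channels only
        ycc[..., c] = warp_bilinear(ycc[..., c], flow)
    return ycbcr_to_rgb(ycc)
```

Because only Cb and Cr are resampled, the luminance structure that human vision is most sensitive to is preserved exactly (up to colorspace round-trip error), while the chrominance shift carries the adversarial distortion.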

Suggestions

Imperceptible Adversarial Examples by Spatial Chroma-Shift
Aydın, Ayberk; Sen, Deniz; Karli, Berat Tuna; Hanoglu, Oguz; Temizel, Alptekin (2021-10-20)
Deep Neural Networks have been shown to be vulnerable to various kinds of adversarial perturbations. In addition to widely studied additive noise based perturbations, adversarial examples can also be created by applying a per pixel spatial drift on input images. While spatial transformation based adversarial examples look more natural to human observers due to absence of additive noise, they still possess visible distortions caused by spatial transformations. Since the human vision is more sensitive to the ...
Attack Independent Perceptual Improvement of Adversarial Examples
Karlı, Berat Tuna; Temizel, Alptekin; Department of Information Systems (2022-12-23)
Deep neural networks (DNNs) are used in a variety of domains with great success; however, it has been proven that these networks are vulnerable to additive non-arbitrary perturbations. Regarding this fact, several attack and defense mechanisms have been developed; nevertheless, adding crafted perturbations has a negative effect on the perceptual quality of images. This study aims to improve the perceptual quality of adversarial examples independent of attack type and the integration of two attack agnostic t...
Deep Learning-Based Hybrid Approach for Phase Retrieval
IŞIL, ÇAĞATAY; Öktem, Sevinç Figen; KOÇ, AYKUT (2019-06-24)
We develop a phase retrieval algorithm that utilizes the hybrid-input-output (HIO) algorithm with a deep neural network (DNN). The DNN architecture, which is trained to remove the artifacts of HIO, is used iteratively with HIO to improve the reconstructions. The results demonstrate the effectiveness of the approach with little additional cost.
MetaLabelNet: Learning to Generate Soft-Labels From Noisy-Labels
Algan, Gorkem; Ulusoy, İlkay (2022-01-01)
Real-world datasets commonly have noisy labels, which negatively affects the performance of deep neural networks (DNNs). In order to address this problem, we propose a label noise robust learning algorithm, in which the base classifier is trained on soft-labels that are produced according to a meta-objective. In each iteration, before conventional training, the meta-training loop updates soft-labels so that resulting gradients updates on the base classifier would yield minimum loss on meta-data. Soft-labels...
Improved Knowledge Distillation with Dynamic Network Pruning
Şener, Eren; Akbaş, Emre (2022-9-30)
Deploying convolutional neural networks to mobile or embedded devices is often prohibited by limited memory and computational resources. This is particularly problematic for the most successful networks, which tend to be very large and require long inference times. Many alternative approaches have been developed for compressing neural networks based on pruning, regularization, quantization or distillation. In this paper, we propose the “Knowledge Distillation with Dynamic Pruning” (KDDP), which trains a dyn...
Citation Formats
A. Aydın, “Improving Perceptual Quality of Spatially Transformed Adversarial Examples,” M.S. - Master of Science, Middle East Technical University, 2022.