Universal adversarial perturbations using alternating loss functions

Şen, Deniz
Deep learning models have been the main choice for image classification; however, it has recently been shown that even the most successful models are vulnerable to adversarial attacks. Unlike image-dependent attacks, universal adversarial perturbations can generate an adversarial example when added to any image. These perturbations are usually generated to fool an entire dataset, and the most successful attacks can reach a 100% fooling rate; however, they cannot be controlled to stabilize around a desired fooling rate. This thesis proposes three algorithms (Batch Alternating Loss, Epoch-Batch Alternating Loss, and Progressive Alternating Loss) that utilize an alternating loss scheme, in which the loss function at each iteration is selected to be either an adversarial loss or a norm loss based on a condition. Progressive Alternating Loss was the best-performing attack in terms of fooling-rate stabilization and Lp norm. Furthermore, training-time spatial filtering was applied to each of the proposed attacks to reduce the artefact-like perturbations that naturally form around the center; this was shown to be effective for L2 attacks.
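The alternating scheme described above can be sketched in a few lines. This is a minimal illustration only: the threshold condition, the target fooling rate, and the update rule below are assumptions for the sake of the example, not the thesis's actual per-algorithm conditions, which differ between the three proposed variants.

```python
def select_loss(fooling_rate, target_rate):
    """Hypothetical selection condition: drive the adversarial loss while
    the fooling rate is below the desired target, otherwise switch to the
    norm loss to shrink the perturbation."""
    return "adversarial" if fooling_rate < target_rate else "norm"

def alternating_update(delta, grad_adv, grad_norm, fooling_rate,
                       target_rate=0.8, lr=0.01):
    """One gradient-descent step on the universal perturbation `delta`,
    using whichever loss the condition selects this iteration."""
    grad = grad_adv if select_loss(fooling_rate, target_rate) == "adversarial" else grad_norm
    return [d - lr * g for d, g in zip(delta, grad)]

# Below target: the adversarial-loss gradient is applied.
print(select_loss(0.5, 0.8))                      # adversarial
# At or above target: the norm-loss gradient is applied instead.
print(select_loss(0.9, 0.8))                      # norm
print(alternating_update([0.0], [1.0], [2.0], 0.5))
```

Iterating this update lets the fooling rate oscillate around, and hence stabilize near, the chosen target rather than climbing toward 100% at the cost of a larger perturbation norm.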


Improving Perceptual Quality of Spatially Transformed Adversarial Examples
Aydın, Ayberk; Temizel, Alptekin; Department of Modeling and Simulation (2022-8)
Deep neural networks are known to be vulnerable to additive adversarial perturbations. The magnitude of these additive perturbations is generally quantified using Lp metrics over the difference between the adversarial and benign examples. However, even when the measured perturbations are small, they tend to be noticeable to human observers, since Lp distance metrics are not representative of human perception. Spatially transformed examples work by distorting pixel locations instead of applying an additive perturba...
Generation and modification of 3D models with deep neural networks
Öngün, Cihan; Temizel, Alptekin; Department of Information Systems (2021-9)
Artificial intelligence (AI), and particularly deep neural networks (DNN), have become very popular topics in recent years and have been shown to be successful in problems such as detection, recognition, and segmentation. More recently, DNNs have become popular in data generation problems with the invention of Generative Adversarial Networks (GAN). Using GANs, various types of data such as audio, images, or 3D models can be generated. In this thesis, we aim to propose a system that creates artificial...
Improving classification performance of endoscopic images with generative data augmentation
Çağlar, Ümit Mert; Temizel, Alptekin; Department of Modeling and Simulation (2022-2-8)
The performance of a supervised deep learning model is highly dependent on the quality and variety of the images in the training dataset. In some applications, it may be impossible to obtain more images. Data augmentation methods have been proven successful in increasing the performance of deep learning models with limited data. Recent improvements in Generative Adversarial Network (GAN) algorithms and architectures have resulted in improved image quality and diversity and made GAN training possible with lim...
Vehicle detection on small scale data by generative data augmentation
Kumdakcı, Hilmi; Temizel, Alptekin; Department of Modeling and Simulation (2021-2-03)
Scarcity of training data is one of the prominent problems for deep neural networks, which commonly require high amounts of data to display their potential. Data augmentation techniques are frequently applied during the pre-training and training phases of deep neural networks to overcome the problem of having insufficient data for training. These techniques aim to increase a neural network's generalization performance on unseen data by increasing the number of training samples and provide a more representa...
Enforcing Causality and Passivity of Neural Network Models of Broadband S-Parameters
Torun, Hakki M.; Durgun, Ahmet Cemal; Aygun, Kemal; Swaminathan, Madhavan (2019-10-01)
© 2019 IEEE. This paper proposes a method to ensure that S-Parameters generated using neural network (NN) models are physically consistent and can be safely used in subsequent time-domain simulations. This is achieved by introducing causality and passivity enforcement layers as the last two layers of the NN, while minimizing their computational overhead during the training and inference of the NN model. The proposed technique is demonstrated on learning the mapping from 13-dimensional geometrical parameters of a dif...
Citation Formats
D. Şen, “Universal adversarial perturbations using alternating loss functions,” M.S. - Master of Science, Middle East Technical University, 2022.