Perceptual quality preserving adversarial attacks

2019
Aksoy, Bilgin
Deep learning is used in various successful computer vision applications such as image classification. Deep neural networks (DNNs), especially convolutional neural networks, have reached above-human accuracy rates on image classification tasks. While DNNs have solved the image classification task and enabled its use in many practical applications, recent research has unveiled some properties that can degrade their performance. Adversarial images are samples that are intentionally modified by adding non-random noise to deceive deep learning systems. Even state-of-the-art networks fail to classify these adversarial images into the correct class. They are widely used in applications such as CAPTCHAs to help distinguish legitimate human users from bots. However, the noise introduced during the adversarial image generation process degrades perceptual quality and introduces artificial colors, making it difficult for humans as well to classify the images and recognize objects. This thesis proposes a method that enables the generation of adversarial images while preserving their perceptual quality. The proposed method is attack-type agnostic and can be used in combination with the existing attacks in the literature. Experiments show that the generated adversarial images have lower Euclidean distances to their originals while maintaining the same adversarial attack performance. Distances are reduced by 0.0315% to 29.6%, with an average reduction of 17.8% over the different attack and network types.
Citation Formats
B. Aksoy, “Perceptual quality preserving adversarial attacks,” Thesis (M.S.) -- Graduate School of Informatics, Modeling and Simulation, Middle East Technical University, 2019.