Universal adversarial perturbations using alternating loss functions
Download
deniz_sen_tez.pdf
Date
2022-8-23
Author
Şen, Deniz
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats
150 views, 182 downloads
Abstract
Deep learning models have been the main choice for image classification; however, it has recently been shown that even the most successful models are vulnerable to adversarial attacks. Unlike image-dependent attacks, universal adversarial perturbations generate an adversarial example when added to any image. These perturbations are usually generated to fool the whole dataset, and the most successful attacks can reach a 100% fooling rate; however, they cannot be controlled to stabilize around a desired fooling rate. This thesis proposes three algorithms (Batch Alternating Loss, Epoch-Batch Alternating Loss, and Progressive Alternating Loss) that use an alternating loss scheme, in which the loss function is selected at each iteration to be either an adversarial loss or a norm loss based on some condition. Progressive Alternating Loss was the best-performing attack in terms of fooling-rate stabilization and Lp norm. Furthermore, training-time spatial filtering was applied to each of the proposed attacks to reduce the artefact-like perturbations that naturally form around the center, and this was shown to be effective for L2 attacks.
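The alternating-loss idea in the abstract can be illustrated with a short sketch. This is not the thesis' exact algorithm: the classifier model, data loader, target fooling rate, switching condition, and the blur used for spatial filtering are all assumptions made for exposition. The loop optimizes a single universal perturbation, using an adversarial loss while the batch fooling rate is below the target and an L2 norm loss otherwise.

```python
# Minimal sketch of an alternating-loss universal perturbation loop
# (illustrative; hyperparameters and the switching condition are assumed).
import torch
import torch.nn.functional as F

def train_uap(model, loader, target_fr=0.9, eps=10.0, lr=0.01, epochs=5,
              device="cpu"):
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # only the perturbation is optimized
    # single universal perturbation, broadcast over every batch
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            logits = model(x + delta)
            fooling_rate = (logits.argmax(1) != y).float().mean().item()
            # Alternate the objective: raise the fooling rate while it is
            # below the target, otherwise shrink the perturbation norm.
            if fooling_rate < target_fr:
                loss = -F.cross_entropy(logits, y)  # adversarial loss
            else:
                loss = delta.norm(p=2)              # norm loss
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                # training-time spatial filtering (assumed: a mild average
                # blur) to discourage artefact-like structure in delta
                delta.data = F.avg_pool2d(delta.data, kernel_size=3,
                                          stride=1, padding=1)
                # project back into the L2 ball of radius eps
                n = delta.norm(p=2)
                if n > eps:
                    delta.mul_(eps / n)
    return delta.detach()
```

The batch fooling rate is used here as the switching condition; judging from their names, the thesis' three variants presumably differ in when and how this alternation is scheduled (per batch, per epoch and batch, or progressively).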
Subject Keywords
Adversarial attack, Universal adversarial perturbations, Alternating loss
URI
https://hdl.handle.net/11511/98790
Collections
Graduate School of Informatics, Thesis
Suggestions
How robust are discriminatively trained zero-shot learning models?
Yucel, Mehmet Kerim; Cinbiş, Ramazan Gökberk; Duygulu Şahin, Pınar (2022-3-01)
Data shift robustness has been primarily investigated from a fully supervised perspective, and the robustness of zero-shot learning (ZSL) models has been largely neglected. In this paper, we present novel analyses on the robustness of discriminative ZSL to image corruptions. We subject several ZSL models to a large set of common corruptions and defenses. In order to realize the corruption analysis, we curate and release the first ZSL corruption robustness datasets SUN-C, CUB-C and AWA2-C. We analyse our result...
Improving Perceptual Quality of Spatially Transformed Adversarial Examples
Aydın, Ayberk; Temizel, Alptekin; Department of Modeling and Simulation (2022-8)
Deep neural networks are known to be vulnerable to additive adversarial perturbations. The amount of these additive perturbations is generally quantified using Lp metrics over the difference between adversarial and benign examples. However, even when the measured perturbations are small, they tend to be noticeable to human observers, since Lp distance metrics are not representative of human perception. Spatially transformed examples work by distorting pixel locations instead of applying an additive perturba...
Generation and modification of 3D models with deep neural networks
Öngün, Cihan; Temizel, Alptekin; Department of Information Systems (2021-9)
Artificial intelligence (AI), and particularly deep neural networks (DNN), have become very popular topics in recent years, and they have been shown to be successful in problems such as detection, recognition and segmentation. More recently, DNNs have become popular in data generation problems with the invention of Generative Adversarial Networks (GAN). Using GANs, various types of data such as audio, images or 3D models can be generated. In this thesis, we aim to propose a system that creates artificial...
Improving classification performance of endoscopic images with generative data augmentation
Çağlar, Ümit Mert; Temizel, Alptekin; Department of Modeling and Simulation (2022-2-8)
The performance of a supervised deep learning model is highly dependent on the quality and variety of the images in the training dataset. In some applications, it may be impossible to obtain more images. Data augmentation methods have proven successful in increasing the performance of deep learning models with limited data. Recent improvements in Generative Adversarial Network (GAN) algorithms and structures have resulted in improved image quality and diversity and made GAN training possible with lim...
Imperceptible Adversarial Examples by Spatial Chroma-Shift
Aydın, Ayberk; Sen, Deniz; Karli, Berat Tuna; Hanoglu, Oguz; Temizel, Alptekin (2021-10-20)
Deep Neural Networks have been shown to be vulnerable to various kinds of adversarial perturbations. In addition to the widely studied additive-noise-based perturbations, adversarial examples can also be created by applying a per-pixel spatial drift to input images. While spatial-transformation-based adversarial examples look more natural to human observers due to the absence of additive noise, they still possess visible distortions caused by spatial transformations. Since human vision is more sensitive to the ...
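Two of the suggested items above describe spatially transformed adversarial examples, where pixel locations are drifted rather than additive noise applied. A minimal sketch of such a per-pixel spatial drift, assuming PyTorch's grid_sample and a random (unoptimized) flow field; an actual attack would optimize the flow instead:

```python
# Illustrative per-pixel spatial drift via bilinear resampling
# (the flow field here is random, not adversarially optimized).
import torch
import torch.nn.functional as F

def spatial_drift(x, flow):
    """x: (N, C, H, W); flow: (N, H, W, 2) offsets in normalized coords."""
    n, _, h, w = x.shape
    ys = torch.linspace(-1.0, 1.0, h)
    xs = torch.linspace(-1.0, 1.0, w)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    # grid_sample expects (x, y) order in the last dimension
    base = torch.stack((gx, gy), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    return F.grid_sample(x, base + flow, align_corners=True)

x = torch.rand(1, 3, 32, 32)
flow = 0.02 * torch.randn(1, 32, 32, 2)  # small random per-pixel drift
x_drifted = spatial_drift(x, flow)
```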
Citation Formats
D. Şen, “Universal adversarial perturbations using alternating loss functions,” M.S. - Master of Science, Middle East Technical University, 2022.