GANILLA: Generative adversarial networks for image to illustration translation
Date: 2020-03-01
Authors: Hicsonmez, Samet; Samet, Nermin; Akbaş, Emre; Duygulu Şahin, Pınar
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 269 views, 613 downloads
In this paper, we explore illustrations in children's books as a new domain in unpaired image-to-image translation. We show that although the current state-of-the-art image-to-image translation models successfully transfer either the style or the content, they fail to transfer both at the same time. We propose a new generator network to address this issue and show that the resulting network strikes a better balance between style and content. There are no well-defined or agreed-upon evaluation metrics for unpaired image-to-image translation. So far, the success of image translation models has been based on subjective, qualitative visual comparison on a limited number of images. To address this problem, we propose a new framework for the quantitative evaluation of image-to-illustration models, where both content and style are taken into account using separate classifiers. In this new evaluation framework, our proposed model performs better than the current state-of-the-art models on the illustrations dataset. Our code and pretrained models can be found at https://github.com/giddyyupp/ganilla.
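The two-classifier evaluation idea in the abstract can be sketched in a few lines. The snippet below is a hypothetical illustration, not the authors' released code (that lives at https://github.com/giddyyupp/ganilla): it assumes a pretrained content classifier over the source images' categories and a pretrained style classifier over illustrator identities, then scores a batch of translated images on both axes. All function and variable names are placeholders.

```python
# Minimal sketch of the two-classifier evaluation described in the abstract.
# NOT the authors' code (see https://github.com/giddyyupp/ganilla); all names
# here are hypothetical placeholders.
import torch


@torch.no_grad()
def evaluate_translation(content_clf, style_clf, translated,
                         content_labels, style_label):
    """Score translated images on content preservation and target style.

    content_clf    -- pretrained classifier over source-image categories
    style_clf      -- pretrained classifier over illustrator identities
    translated     -- (N, 3, H, W) batch of generator outputs
    content_labels -- (N,) ground-truth categories of the source images
    style_label    -- integer id of the target illustrator
    """
    content_clf.eval()
    style_clf.eval()

    # Content score: how often the translation keeps the source content.
    content_acc = (content_clf(translated).argmax(dim=1)
                   == content_labels).float().mean().item()

    # Style score: how often the output is recognized as the target style.
    style_acc = (style_clf(translated).argmax(dim=1)
                 == style_label).float().mean().item()

    # A good translator scores high on both; per the abstract, prior models
    # tend to trade one off against the other.
    return content_acc, style_acc
```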
Subject Keywords: Computer Vision and Pattern Recognition, Generative Adversarial Networks, Image to Image Translation, Illustrations, Style Transfer
URI: https://hdl.handle.net/11511/34821
Journal: Image and Vision Computing
DOI: https://doi.org/10.1016/j.imavis.2020.103886
Collections: Department of Computer Engineering, Article
Suggestions
Dealing with in-class challenges: Pre-service teacher cognitions and influences
Çimen, Şeyda Selen; Daloğlu, Ayşegül (2019-01-01)
© 2019 JLLS and the Authors - Published by JLLS. This study explores cognitions of pre-service English language teachers in relation to dealing with the most commonly experienced in-class challenges in foreign language teaching and the influences that shape their cognitions. Adopting a qualitative research design, a case study was conducted to provide an account of pre-service English language teachers' cognitions. Data for this study were collected in two main stages. The first stage involved collection of the ba...
Data-driven image captioning via salient region discovery
Kilickaya, Mert; Akkuş, Burak Kerim; Çakıcı, Ruket; Erdem, Aykut; Erdem, Erkut; İKİZLER CİNBİŞ, NAZLI (Institution of Engineering and Technology (IET), 2017-09-01)
In the past few years, automatically generating descriptions for images has attracted a lot of attention in computer vision and natural language processing research. Among the existing approaches, data-driven methods have been proven to be highly effective. These methods compare the given image against a large set of training images to determine a set of relevant images, then generate a description using the associated captions. In this study, the authors propose to integrate an object-based semantic image r...
Shape : representation, description, similarity and recognition
Arıca, Nafiz; Yarman Vural, Fatoş Tunay; Department of Computer Engineering (2003)
In this thesis, we study the shape analysis problem and propose new methods for shape description, similarity and recognition. Firstly, we introduce a new shape descriptor in a two-step method. In the first step, the 2-D shape information is mapped into a set of 1-D functions. The mapping is based on the beams, which originate from a boundary point, connecting that point with the rest of the points on the boundary. At each point, the angle between a pair of beams is taken as a random variable to define...
Multi-way, multilingual neural machine translation
Firat, Orhan; Cho, Kyunghyun; Sankaran, Baskaran; Yarman Vural, Fatoş Tunay; Bengio, Yoshua (2017-09-01)
We propose multi-way, multilingual neural machine translation. The proposed approach enables a single neural translation model to translate between multiple languages, with a number of parameters that grows only linearly with the number of languages. This is made possible by having a single attention mechanism that is shared across all language pairs. We train the proposed multi-way, multilingual model on ten language pairs from WMT'15 simultaneously and observe clear performance improvements over models tr...
KINSHIPGAN: SYNTHESIZING OF KINSHIP FACES FROM FAMILY PHOTOS BY REGULARIZING A DEEP FACE NETWORK
Ozkan, Savas; Ozkan, Akin (2018-10-10)
In this paper, we propose a kinship generator network that can synthesize a possible child face by analyzing his/her parent's photo. For this purpose, we focus on handling the scarcity of kinship datasets throughout the paper by proposing novel solutions in particular. To extract robust features, we integrate a pre-trained face model into the kinship face generator. Moreover, the generator network is regularized with an additional face dataset and adversarial loss to decrease the overfitting of the limited s...
Citation Formats
S. Hicsonmez, N. Samet, E. Akbaş, and P. Duygulu Şahin, "GANILLA: Generative adversarial networks for image to illustration translation," Image and Vision Computing, pp. 0–0, 2020, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/34821.