GANILLA: Generative adversarial networks for image to illustration translation

2020-03-01
Hicsonmez, Samet
Samet, Nermin
Akbaş, Emre
Duygulu Şahin, Pınar
In this paper, we explore illustrations in children's books as a new domain in unpaired image-to-image translation. We show that although the current state-of-the-art image-to-image translation models successfully transfer either the style or the content, they fail to transfer both at the same time. We propose a new generator network to address this issue and show that the resulting network strikes a better balance between style and content. There are no well-defined or agreed-upon evaluation metrics for unpaired image-to-image translation. So far, the success of image translation models has been based on subjective, qualitative visual comparisons of a limited number of images. To address this problem, we propose a new framework for the quantitative evaluation of image-to-illustration models, where both content and style are taken into account using separate classifiers. In this new evaluation framework, our proposed model performs better than the current state-of-the-art models on the illustrations dataset. Our code and pretrained models can be found at https://github.com/giddyyupp/ganilla.
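The abstract's evaluation idea, scoring translated illustrations with two separate classifiers, one for content preservation and one for style transfer, can be illustrated with a minimal sketch. The classifier predictions below are toy stand-ins, and averaging the two accuracies into one score is an assumption for illustration, not necessarily the paper's exact aggregation:

```python
# Hedged sketch of a two-classifier evaluation for image-to-illustration models.
# Content predictions come from a (hypothetical) classifier trained on natural
# image categories; style predictions come from a (hypothetical) classifier
# trained to recognize illustrator styles.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(predictions) == len(labels) and labels
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

def evaluate_translation(content_preds, content_labels, style_preds, style_labels):
    """Return (content accuracy, style accuracy, combined score).

    A model that transfers only style scores high on style but low on content,
    and vice versa; averaging the two rewards a balance between them.
    """
    c_acc = accuracy(content_preds, content_labels)
    s_acc = accuracy(style_preds, style_labels)
    return c_acc, s_acc, (c_acc + s_acc) / 2

# Toy example: four translated images.
c_acc, s_acc, score = evaluate_translation(
    content_preds=["cat", "dog", "tree", "house"],
    content_labels=["cat", "dog", "tree", "car"],                # 3/4 content kept
    style_preds=["artistA", "artistA", "artistB", "artistB"],
    style_labels=["artistA", "artistB", "artistB", "artistB"],   # 3/4 style matched
)
print(c_acc, s_acc, score)  # 0.75 0.75 0.75
```

The point of keeping the two accuracies separate before combining them is that it exposes the style-vs-content trade-off the abstract describes, instead of hiding it in a single subjective judgment.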
IMAGE AND VISION COMPUTING

Suggestions

Dealing with in-class challenges: Pre-service teacher cognitions and influences
Çimen, Şeyda Selen; Daloğlu, Ayşegül (2019-01-01)
© 2019 JLLS and the Authors - Published by JLLS. This study explores cognitions of pre-service English language teachers in relation to dealing with the most commonly experienced in-class challenges in foreign language teaching and the influences that shape their cognitions. Adopting a qualitative research design, a case study was conducted to provide an account of pre-service English language teachers’ cognitions. Data for this study were collected in two main stages. The first stage involved collection of the ba...
Data-driven image captioning via salient region discovery
Kilickaya, Mert; Akkuş, Burak Kerim; Çakıcı, Ruket; Erdem, Aykut; Erdem, Erkut; İkizler Cinbiş, Nazlı (Institution of Engineering and Technology (IET), 2017-09-01)
In the past few years, automatically generating descriptions for images has attracted a lot of attention in computer vision and natural language processing research. Among the existing approaches, data-driven methods have been proven to be highly effective. These methods compare the given image against a large set of training images to determine a set of relevant images, then generate a description using the associated captions. In this study, the authors propose to integrate an object-based semantic image r...
Shape : representation, description, similarity and recognition
Arıca, Nafiz; Yarman Vural, Fatoş Tunay; Department of Computer Engineering (2003)
In this thesis, we study the shape analysis problem and propose new methods for shape description, similarity and recognition. Firstly, we introduce a new shape descriptor in a two-step method. In the first step, the 2-D shape information is mapped into a set of 1-D functions. The mapping is based on the beams, which are originated from a boundary point, connecting that point with the rest of the points on the boundary. At each point, the angle between a pair of beams is taken as a random variable to define...
Multi-way, multilingual neural machine translation
Firat, Orhan; Cho, Kyunghyun; Sankaran, Baskaran; Yarman Vural, Fatoş Tunay; Bengio, Yoshua (2017-09-01)
We propose multi-way, multilingual neural machine translation. The proposed approach enables a single neural translation model to translate between multiple languages, with a number of parameters that grows only linearly with the number of languages. This is made possible by having a single attention mechanism that is shared across all language pairs. We train the proposed multi-way, multilingual model on ten language pairs from WMT'15 simultaneously and observe clear performance improvements over models tr...
KinshipGAN: Synthesizing of kinship faces from family photos by regularizing a deep face network
Ozkan, Savas; Ozkan, Akin (2018-10-10)
In this paper, we propose a kinship generator network that can synthesize a possible child face by analyzing his/her parent's photo. For this purpose, we focus on to handle the scarcity of kinship datasets throughout the paper by proposing novel solutions in particular. To extract robust features, we integrate a pre-trained face model to the kinship face generator. Moreover, the generator network is regularized with an additional face dataset and adversarial loss to decrease the overfitting of the limited s...
Citation Formats
S. Hicsonmez, N. Samet, E. Akbaş, and P. Duygulu Şahin, “GANILLA: Generative adversarial networks for image to illustration translation,” IMAGE AND VISION COMPUTING, pp. 0–0, 2020, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/34821.