WAIT: Feature warping for animation to illustration video translation using GANs
Date
2025-07-07
Author
Hicsonmez, Samet
Samet, Nermin
Samet, Fidan
Bakir, Oguz
Akbaş, Emre
Duygulu Şahin, Pınar
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
In this paper, we explore a new domain for video-to-video translation. Motivated by the availability of animation movies that are adopted from illustrated books for children, we aim to stylize these videos with the style of the original illustrations. Current state-of-the-art video-to-video translation models rely on having a video sequence or a single style image to stylize an input video. We introduce a new problem for video stylizing where an unordered set of images are used. This is a challenging task for two reasons: (i) we do not have the advantage of temporal consistency as in video sequences; (ii) it is more difficult to obtain consistent styles for video frames from a set of unordered images compared to using a single image. Most of the video-to-video translation methods are built on an image-to-image translation model, and integrate additional networks such as optical flow, or temporal predictors to capture temporal relations. These additional networks make the model training and inference complicated and slow down the process. To ensure temporal coherency in video-to-video style transfer, we propose a new generator network with feature warping layers which overcomes the limitations of the previous methods. We show the effectiveness of our method on three datasets both qualitatively and quantitatively. Code and pretrained models are available at https://github.com/giddyyupp/wait.
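The abstract describes a generator with feature warping layers that align the previous frame's features to the current frame, removing the need for an external optical-flow or temporal-predictor network. The PyTorch-style sketch below only illustrates that general idea; the module name, the learned offset predictor, and all sizes are assumptions, not the authors' implementation, which is available at the linked repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureWarp(nn.Module):
    # Hypothetical feature warping layer: aligns the previous frame's feature
    # map to the current frame via a small learned offset field, so no
    # external optical-flow network is required (names and sizes are assumed).
    def __init__(self, channels):
        super().__init__()
        # Predict a per-pixel (dx, dy) displacement from both feature maps.
        self.offset = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)

    def forward(self, feat_prev, feat_curr):
        b, _, h, w = feat_curr.shape
        offsets = self.offset(torch.cat([feat_prev, feat_curr], dim=1))
        # Base sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, h, device=feat_curr.device),
            torch.linspace(-1.0, 1.0, w, device=feat_curr.device),
            indexing="ij",
        )
        base_grid = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
        grid = base_grid + offsets.permute(0, 2, 3, 1)
        # Resample the previous frame's features at the displaced positions.
        return F.grid_sample(feat_prev, grid, align_corners=True)

# Example: warp 64-channel features of a 128x128 frame pair.
warp = FeatureWarp(channels=64)
prev_feat = torch.randn(1, 64, 128, 128)
curr_feat = torch.randn(1, 64, 128, 128)
warped = warp(prev_feat, curr_feat)  # shape (1, 64, 128, 128)

Predicting offsets and resampling with grid_sample keeps the warping differentiable end to end, which is one plausible way to obtain temporal coherency without running a separate flow estimator at inference time.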
Subject Keywords
GANs, Illustrations, Video stylization, Video to video translation, Vision for art
URI
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=105001490764&origin=inward
https://hdl.handle.net/11511/114300
Journal
Neurocomputing
DOI
https://doi.org/10.1016/j.neucom.2025.130108
Collections
Department of Computer Engineering, Article
Citation Formats
IEEE
S. Hicsonmez, N. Samet, F. Samet, O. Bakir, E. Akbaş, and P. Duygulu Şahin, “WAIT: Feature warping for animation to illustration video translation using GANs,” Neurocomputing, vol. 637, pp. 0–0, 2025, Accessed: 00, 2025. [Online]. Available: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=105001490764&origin=inward.