Character generation through self supervised vectorization
Date: 2022-02-11
Author: Gökçen, Gökçeoğlu
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 206 views, 228 downloads
Humans learn visual concepts rapidly and flexibly from few samples. However, this kind of learning is a challenge for the prevalent machine learning methodologies. Currently, high-performing deep learning models and algorithms depend on large amounts of data, and they are task-specific. In this study, we focus on the generative aspects of visual concept learning in the domain of handwritten characters. We develop an unsupervised approach that can be generalized to multiple tasks using a small number of samples. We present a drawing agent that operates on a stroke-level representation of images. At each time step, the agent first assesses the current canvas and decides whether to stop or keep drawing. When a 'draw' decision is made, the agent outputs a program indicating the stroke to be drawn. As a result, it produces a final raster image by drawing the strokes on a canvas, using a minimal number of strokes and dynamically deciding when to stop. We train our agent through reinforcement learning on handwritten character datasets for unconditional generation and parsing (reconstruction) tasks. We utilize our parsing agent for exemplar generation and type-conditioned concept generation without any further training. We present successful results on all three generation tasks and the parsing task. Crucially, we do not need any stroke-level or vector supervision; we only use raster images for training.
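The abstract describes a stop-or-draw episode loop: at each step the agent inspects the canvas, decides whether to halt, and otherwise emits a stroke program that is rasterized onto the canvas. The following is a minimal sketch of that loop only; every name here (the stopping criterion, the stroke-program format, the renderer) is a hypothetical stand-in, not the thesis's actual learned policy or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def assess_canvas(canvas):
    """Toy stand-in for the learned stop/draw decision:
    stop once the canvas holds 'enough' ink (hypothetical criterion)."""
    return "stop" if canvas.sum() > 40 else "draw"

def sample_stroke_program(canvas):
    """Toy stroke 'program': a start point plus a unit direction.
    A learned agent would condition this on the canvas state."""
    x, y = rng.integers(0, 28, size=2)
    dx, dy = rng.integers(-1, 2, size=2)
    return (int(x), int(y), int(dx), int(dy))

def render_stroke(canvas, program, length=8):
    """Rasterize the stroke program onto the canvas in place."""
    x, y, dx, dy = program
    for _ in range(length):
        if 0 <= x < 28 and 0 <= y < 28:
            canvas[y, x] = 1
        x, y = x + dx, y + dy
    return canvas

canvas = np.zeros((28, 28), dtype=np.int64)
strokes = []
for _ in range(50):  # hard cap on episode length
    if assess_canvas(canvas) == "stop":
        break
    program = sample_stroke_program(canvas)
    strokes.append(program)
    render_stroke(canvas, program)
```

The point of the structure is that episode length is not fixed in advance: the stroke count emerges from the agent's own stop decision, which is what lets the method use a minimal number of strokes per character.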
Subject Keywords: Human concept learning, Reinforcement learning, Self-supervised learning, Vectorization
URI: https://hdl.handle.net/11511/96298
Collections: Graduate School of Informatics, Thesis
Suggestions
Effect of human prior knowledge on game success and comparison with reinforcement learning
Hasanoğlu, Mert.; Çakır, Murat Perit; Department of Cognitive Sciences (2019)
This study aims to find out the effect of prior knowledge on the success of humans in a non-rewarding game environment, and then to compare human performance with a reinforcement learning method in an effort to observe to what extent this method can be brought closer to human behavior and performance with the data obtained. For this purpose, different versions of a simple 2D game were used, and data were collected from 32 participants. At the end of the experiment, it is concluded that prior knowledge, such...
Language learning from the perspective of nonlinear dynamic systems
Hohenberger, Annette Edeltraud; Peltzer-Karpf, Annemarie (Walter de Gruyter GmbH, 2009-01-01)
This article outlines a nonlinear dynamic systems approach to language learning on the basis of developmental cognitive neuroscience. Language learning, on this view, is a process of experience-dependent shaping and selection of broadly defined domain-general and domain-specific genetic predispositions. The central concept of development is (neuro)cognitive growth in terms of self-organization. Linguistic structure-building is synergetic and emergent insofar as the acquisition of a critical mass of eleme...
Learning semi-supervised nonlinear embeddings for domain-adaptive pattern recognition
Vural, Elif (null; 2019-05-20)
We study the problem of learning nonlinear data embeddings in order to obtain representations for efficient and domain-invariant recognition of visual patterns. Given observations of a training set of patterns from different classes in two different domains, we propose a method to learn a nonlinear mapping of the data samples from different domains into a common domain. The nonlinear mapping is learnt such that the class means of different domains are mapped to nearby points in the common domain in order to...
Compact Frequency Memory for Reinforcement Learning with Hidden States.
Polat, Faruk; Cilden, Erkin (2019-10-28)
Memory-based reinforcement learning approaches keep track of past experiences of the agent in environments with hidden states. This may require extensive use of memory that limits the practice of these methods in a real-life problem. The motivation behind this study is the observation that less frequent transitions provide more reliable information about the current state of the agent in ambiguous environments. In this work, a selective memory approach based on the frequencies of transitions is proposed to ...
Recursive Compositional Reinforcement Learning for Continuous Control
Tanik, Guven Orkun; Ertekin Bolelli, Şeyda (2022-01-01)
Compositional and temporal abstraction is the key to improving learning and planning in reinforcement learning. Modern real-world control problems call for continuous control domains and robust, sample efficient and explainable control frameworks. We are presenting a framework for recursively composing control skills to solve compositional and progressively complex tasks. The framework promotes reuse of skills, and as a result quickly adaptable to new tasks. The decision-tree can be observed, providing insi...
Citation Formats
IEEE
G. Gökçen, “Character generation through self supervised vectorization,” M.S. - Master of Science, Middle East Technical University, 2022.