Closed-form sample probing for training generative models in zero-shot learning
Date
2022-2-10
Author
Çetin, Samet
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats
580 views, 316 downloads
Abstract
Generative modeling-based approaches have led to significant advances in generalized zero-shot learning over the past few years. These approaches typically aim to learn a conditional generator that synthesizes training samples of classes conditioned on class embeddings, such as attribute-based class definitions. The final zero-shot learning model can then be obtained by training a supervised classification model over the real and/or synthesized training samples of seen and unseen classes, combined. Therefore, naturally, the generative model ideally needs to produce not only relevant samples, but also those that are sufficiently informative for classifier training purposes. However, existing approaches rely on approximations or heuristics to enforce the generator to produce class-specific samples. In this thesis, we propose a principled approach that shows how to directly maximize the value of training examples for zero-shot model training purposes, by inferring and evaluating closed-form ZSL models at each generative model training step, which we call sample probing. This approach provides a way to validate the quality of generated samples in an end-to-end manner, where the generator receives feedback directly based on the predictions made on the real samples of unseen classes. Our experimental results show that sample probing improves the recognition results when integrated into state-of-the-art baselines.
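The sample probing idea in the abstract lends itself to a compact sketch. The following is a minimal, hypothetical PyTorch illustration (not the thesis implementation): a ridge-regression classifier is fit in closed form on features produced by the conditional generator and then scored on real probe samples, so the probing loss can be backpropagated through the closed-form solution into the generator. All names, dimensions, and the ridge regularizer `lam` below are illustrative assumptions.

```python
import torch
import torch.nn as nn


def fit_closed_form_classifier(feats, labels, num_classes, lam=1e-3):
    """Fit a ridge-regression classifier W in closed form on (generated) features.

    Because W is a differentiable function of `feats`, any loss computed with W
    can be backpropagated into the generator that produced `feats`.
    """
    onehot = torch.nn.functional.one_hot(labels, num_classes).float()
    d = feats.shape[1]
    gram = feats.T @ feats + lam * torch.eye(d, device=feats.device)
    W = torch.linalg.solve(gram, feats.T @ onehot)  # shape: (d, num_classes)
    return W


class Generator(nn.Module):
    """Toy conditional generator: class embedding + noise -> visual feature."""

    def __init__(self, emb_dim, noise_dim, feat_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim + noise_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim),
        )

    def forward(self, class_emb, noise):
        return self.net(torch.cat([class_emb, noise], dim=1))


def sample_probing_loss(generator, class_embs, probe_feats, probe_labels,
                        n_per_class=32, noise_dim=64):
    """Train-time probe: synthesize samples, fit a closed-form classifier on
    them, and score it on held-out real probe samples; the resulting
    cross-entropy is the feedback signal for the generator."""
    num_classes, _ = class_embs.shape
    embs = class_embs.repeat_interleave(n_per_class, dim=0)
    labels = torch.arange(num_classes).repeat_interleave(n_per_class)
    noise = torch.randn(embs.shape[0], noise_dim)
    synth = generator(embs, noise)
    W = fit_closed_form_classifier(synth, labels, num_classes)
    logits = probe_feats @ W
    return torch.nn.functional.cross_entropy(logits, probe_labels)
```

In a training loop, calling `loss.backward()` on this probing loss propagates gradients through the closed-form solve into the generator's parameters, which is the end-to-end feedback mechanism the abstract describes; the sketch omits the usual generative (e.g., adversarial or VAE) loss terms it would be combined with.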
Subject Keywords
Generalized zero-shot learning, Meta learning, Generative models, Sample probing
URI
https://hdl.handle.net/11511/96237
Collections
Graduate School of Natural and Applied Sciences, Thesis
Suggestions
Closed-form sample probing for learning generative models in Zero-shot Learning
Çetin, Samet; Baran, Orhun Buğra; Cinbiş, Ramazan Gökberk (2022-04-25)
Generative model based approaches have led to significant advances in zero-shot learning (ZSL) over the past few years. These approaches typically aim to learn a conditional generator that synthesizes training samples of classes conditioned on class definitions. The final zero-shot learning model is then obtained by training a supervised classification model over the real and/or synthesized training samples of seen and unseen classes, combined. Therefore, naturally, the generative model needs to produce not...
A Recurrent and Meta-learned Model of Weakly Supervised Object Localization
Sariyildiz, Mert Bulent; Sumbul, Gencer; Cinbiş, Ramazan Gökberk (2022-01-01)
Object localization and detection have improved greatly over the past decade, thanks to developments in deep learning based representations and localization models. However, a major bottleneck remains in the reliance on fully-supervised datasets, which can be difficult to gather in many real-world scenarios. In this work, we focus on the problem of weakly-supervised localization, where the goal is to localize instances of objects based on simple image-level class annotations. In particular, instead of en...
Out-of-Sample Generalizations for Supervised Manifold Learning for Classification
Vural, Elif (2016-03-01)
Supervised manifold learning methods for data classification map high-dimensional data samples to a lower dimensional domain in a structure-preserving way while increasing the separation between different classes. Most manifold learning methods compute the embedding only of the initially available data; however, the generalization of the embedding to novel points, i.e., the out-of-sample extension problem, becomes especially important in classification applications. In this paper, we propose a semi-supervis...
Attention mechanisms for semantic few-shot learning
Baran, Orhun Buğra; Cinbiş, Ramazan Gökberk; İkizler-Cinbiş, Nazlı; Department of Computer Engineering (2021-9-1)
One of the fundamental difficulties in contemporary supervised learning approaches is the dependency on labelled examples. Most state-of-the-art deep architectures, in particular, tend to perform poorly in the absence of large-scale annotated training sets. In many practical problems, however, it is not feasible to construct sufficiently large training sets, especially in problems involving sensitive information or consisting of a large set of fine-grained classes. One of the main topics in machine learning...
Generalized zero-shot object recognition without class-attribute relations
Er, Müslüm; Cinbiş, Ramazan Gökberk; Department of Computer Engineering (2021-2-11)
Over the last decade, great improvements have been achieved in image classification performances following the advances in supervised deep learning approaches. These supervised approaches, however, typically require substantial amounts of labeled training examples. Collecting and annotating such examples is a cumbersome and error-prone task, especially when a large number of classes needs to be spanned. One of the promising approaches towards overcoming this limitation of supervised recognition techniques i...
Citation Formats
IEEE
S. Çetin, “Closed-form sample probing for training generative models in zero-shot learning,” M.S. - Master of Science, Middle East Technical University, 2022.