Closed-form sample probing for learning generative models in Zero-shot Learning
Date: 2022-04-25
Authors: Çetin, Samet; Baran, Orhun Buğra; Cinbiş, Ramazan Gökberk
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Generative model-based approaches have led to significant advances in zero-shot learning (ZSL) over the past few years. These approaches typically aim to learn a conditional generator that synthesizes training samples of classes conditioned on class definitions. The final zero-shot learning model is then obtained by training a supervised classification model over the real and/or synthesized training samples of seen and unseen classes, combined. Naturally, therefore, the generative model needs to produce not only relevant samples, but also samples that are sufficiently rich for classifier-training purposes, which is handled by various heuristics in existing works. In this paper, we introduce a principled approach for training generative models directly for training-data generation purposes. Our main observation is that the use of closed-form models opens doors to end-to-end training thanks to the differentiability of the solvers. In our approach, at each generative model update step, we fit a task-specific closed-form ZSL model from generated samples, and measure its loss on novel samples all within the compute graph, a procedure that we refer to as sample probing. In this manner, the generator receives feedback directly based on the value of its samples for model-training purposes. Our experimental results show that the proposed sample probing approach improves the ZSL results even when integrated into state-of-the-art generative models.
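The probing step described in the abstract can be illustrated with a small NumPy sketch. Here a ridge regression from features back to class embeddings stands in for the task-specific closed-form ZSL model; the stand-in generator, all dimensions, and variable names are illustrative assumptions, not the paper's actual architecture. In the real method the solve happens inside an autodiff compute graph, so the probe loss can be backpropagated to the generator's parameters; this sketch only evaluates the forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: class-embedding dim d_a, visual-feature dim d_x.
d_a, d_x, n_gen, n_probe = 5, 8, 40, 10

# Stand-in "generator": synthesizes features from class embeddings plus noise.
W_true = rng.normal(size=(d_a, d_x))

def generate(attrs, noise=0.1):
    return attrs @ W_true + noise * rng.normal(size=(attrs.shape[0], d_x))

# Synthesized training samples, conditioned on (random) class embeddings.
A_gen = rng.normal(size=(n_gen, d_a))
X_gen = generate(A_gen)

# Closed-form ZSL probe model: ridge regression mapping features to class
# embeddings. The solution V is a differentiable function of X_gen, which is
# what enables end-to-end training in the actual method.
lam = 1e-2
V = np.linalg.solve(X_gen.T @ X_gen + lam * np.eye(d_x), X_gen.T @ A_gen)

# Sample probing: measure the fitted model's loss on held-out novel samples.
A_probe = rng.normal(size=(n_probe, d_a))
X_probe = generate(A_probe, noise=0.0)
probe_loss = float(np.mean((X_probe @ V - A_probe) ** 2))
print(f"probe loss: {probe_loss:.4f}")
```

Because the ridge solution has an explicit formula, no iterative inner optimization is needed: the probe loss gives the generator a direct, differentiable signal about how useful its samples are for classifier training.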
URI: https://hdl.handle.net/11511/98077
Conference Name: International Conference on Learning Representations
Collections: Department of Computer Engineering, Conference / Seminar
Suggestions
Closed-form sample probing for training generative models in zero-shot learning
Çetin, Samet; Cinbiş, Ramazan Gökberk; Department of Computer Engineering (2022-2-10)
Generative modeling based approaches have led to significant advances in generalized zero-shot learning over the past few-years. These approaches typically aim to learn a conditional generator that synthesizes training samples of classes conditioned on class embeddings, such as attribute based class definitions. The final zero-shot learning model can then be obtained by training a supervised classification model over the real and/or synthesized training samples of seen and unseen classes, combined. Therefor...
Competing labels: a heuristic approach to pseudo-labeling in deep semi-supervised learning
Bayrak, Hamdi Burak; Ertekin Bolelli, Şeyda; Yücel, Hamdullah; Department of Scientific Computing (2022-2-10)
Semi-supervised learning is one of the dominantly utilized approaches to reduce the reliance of deep learning models on large-scale labeled data. One mostly used method of this approach is pseudo-labeling. However, pseudo-labeling, especially its originally proposed form tends to remarkably suffer from noisy training when the assigned labels are false. In order to mitigate this problem, in our work, we investigate the gradient sent to the neural network and propose a heuristic method, called competing label...
Novel multiobjective TLBO algorithms for the feature subset selection problem
Kiziloz, Hakan Ezgi; Deniz, Ayca; Dokeroglu, Tansel; Coşar, Ahmet (2018-09-06)
Teaching Learning Based Optimization (TLBO) is a new metaheuristic that has been successfully applied to several intractable optimization problems in recent years. In this study, we propose a set of novel multiobjective TLBO algorithms combined with supervised machine learning techniques for the solution of Feature Subset Selection (FSS) in Binary Classification Problems (FSS-BCP). Selecting the minimum number of features while not compromising the accuracy of the results in FSS-BCP is a multiobjective opti...
Timed Automata Robustness Analysis via Model Checking
Bendik, Jaroslav; Sencan, Ahmet; Aydın Göl, Ebru; Cerna, Ivana (2022-01-01)
Timed automata (TA) have been widely adopted as a suitable formalism to model time-critical systems. Furthermore, contemporary model-checking tools allow the designer to check whether a TA complies with a system specification. However, the exact timing constants are often uncertain during the design phase. Consequently, the designer is often able to build a TA with a correct structure, however, the timing constants need to be tuned to satisfy the specification. Moreover, even if the TA initially satisfies t...
Learning with infinitely many kernels via semi-infinite programming
Oezoeguer-Akyuez, Suereyya; Weber, Gerhard Wilhelm (2008-05-23)
In recent years, learning methods are desirable because of their reliability and efficiency in real-world problems. We propose a novel method to find infinitely many kernel combinations for learning problems with the help of infinite and semi-infinite optimization regarding all elements in kernel space. This will provide to study variations of combinations of kernels when considering heterogeneous data in real-world applications. Looking at all infinitesimally fine convex combinations of the kernels from th...
Citation Formats
IEEE
S. Çetin, O. B. Baran, and R. G. Cinbiş, “Closed-form sample probing for learning generative models in Zero-shot Learning,” presented at the International Conference on Learning Representations, 2022, Accessed: 00, 2022. [Online]. Available: https://hdl.handle.net/11511/98077.