Feature enhancement with deep generative models in deep Bayesian active learning
Date
2022-09
Author
Duymuş, Pınar Ezgi
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats
310 views, 115 downloads
As Deep Learning advances, increasingly data-intensive models emerge. However, large annotated datasets are not always available, which emphasizes the need for Active Learning: selecting as little data as possible without compromising the accuracy of classifier models. Recent advances have occurred in Deep Bayesian Active Learning (DBAL), which incorporates uncertainty over model parameters into a deep network. In this work, we present an algorithm that improves the accuracy of a DBAL model on an image classification task. We exploit the representation power of Deep Generative Models by employing their feature extraction capabilities: by training a generative model, we obtain an improved feature-space representation of the input data, referred to as a latent vector. Instead of using the entire image space in the active learning setting, we demonstrate that the latent space yields better data point selection for the active learning problem, and hence higher accuracy. Furthermore, this study compares different generative models in terms of their ability to capture better feature representations. How well an active learning algorithm performs is determined by the informativeness of the data points it selects; capturing a latent representation that extracts the highest possible information value from each data point is therefore a significant contribution. We provide comparisons and experiments on different kinds of generative models, namely vanilla Variational Autoencoders (VAEs), Maximum Mean Discrepancy Variational Autoencoders (MMD-VAEs), and Bidirectional Generative Adversarial Networks (BiGANs). Additionally, Bayesian Active Learning suffers from the mode-collapse problem. To mitigate it, we propose a diversity-based query algorithm that enhances the diversity of the actively selected points and improves the accuracy of the algorithm.
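As an illustration of the kind of pipeline the abstract describes, here is a minimal sketch in PyTorch. It assumes MC dropout as the Bayesian approximation, BALD as the acquisition function, and a greedy distance filter as the diversity heuristic; `Encoder`, `BayesianClassifier`, and the `min_dist` threshold are illustrative placeholders, not the thesis implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy stand-in for a trained generative model's encoder:
    maps images to latent vectors (the improved feature space)."""
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)

    def forward(self, x):
        return self.mu(self.body(x))

class BayesianClassifier(nn.Module):
    """Classifier over latent vectors; dropout stays active at
    inference time so repeated passes approximate posterior samples."""
    def __init__(self, latent_dim=32, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(latent_dim, 128)
        self.fc2 = nn.Linear(128, n_classes)

    def forward(self, z):
        h = F.dropout(F.relu(self.fc1(z)), p=0.5, training=True)
        return F.softmax(self.fc2(h), dim=-1)

def bald_scores(model, z, T=20):
    """BALD acquisition: mutual information between the prediction and
    the model parameters, estimated from T stochastic forward passes."""
    probs = torch.stack([model(z) for _ in range(T)])            # (T, N, C)
    mean = probs.mean(dim=0)
    pred_entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)
    exp_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)
    return pred_entropy - exp_entropy                            # (N,)

def diverse_top_k(scores, z, k, min_dist=1.0):
    """Greedy diversity filter: walk candidates by score, skipping any
    point closer than min_dist (in latent space) to one already chosen."""
    chosen = []
    for i in scores.argsort(descending=True):
        if all(torch.dist(z[i], z[j]) > min_dist for j in chosen):
            chosen.append(i)
        if len(chosen) == k:
            break
    return torch.stack(chosen)

# Usage sketch: score the unlabeled pool in latent space, then query.
enc, clf = Encoder(), BayesianClassifier()
pool = torch.rand(1000, 784)                  # toy unlabeled images
with torch.no_grad():
    z = enc(pool)
    query_idx = diverse_top_k(bald_scores(clf, z), z, k=10)
```

Querying in latent rather than pixel space is the abstract's central point: both the uncertainty estimates and the diversity distances then operate on learned features instead of raw pixels.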
Subject Keywords
Bayesian active learning, Deep generative models, Feature learning, Latent space representation, Mode-collapse problem
URI
https://hdl.handle.net/11511/99446
Collections
Graduate School of Natural and Applied Sciences, Thesis
Suggestions
Closed-form sample probing for training generative models in zero-shot learning
Çetin, Samet; Cinbiş, Ramazan Gökberk; Department of Computer Engineering (2022-2-10)
Generative modeling based approaches have led to significant advances in generalized zero-shot learning over the past few-years. These approaches typically aim to learn a conditional generator that synthesizes training samples of classes conditioned on class embeddings, such as attribute based class definitions. The final zero-shot learning model can then be obtained by training a supervised classification model over the real and/or synthesized training samples of seen and unseen classes, combined. Therefor...
Enhanced Deep Learning with Improved Feature Subspace Separation
Parlaktuna, Mustafa; Sekmen, Ali; Koku, Ahmet Buğra; Abdul Malek, Ayad (2018-09-30)
This research proposes a new deep convolutional network architecture that improves feature subspace separation. In training, the system considers $M$ classes of input sets $\{C_i\}_{i=1}^{M}$ and $M$ deep convolutional networks $\{DN_i\}_{i=1}^{M}$ whose filter and other parameters are randomly initialized. For each input class $C_i$, the Convolutional Neural Network generates a set of features $F_i$. Then, a local subspace $S_i$ is matched to each set $F_i$. This is followed by a full training of the deep convolutional network ...
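As a toy illustration of matching a local subspace $S_i$ to a feature set $F_i$, one common route is an SVD of the centered features (an assumption here; the paper's exact matching procedure is not given in this excerpt).

```python
import numpy as np

def fit_subspace(features, dim):
    """Fit a local linear subspace to a class's feature set: the top
    `dim` left singular vectors of the centered feature matrix."""
    centered = features - features.mean(axis=0)
    U, _, _ = np.linalg.svd(centered.T, full_matrices=False)
    return U[:, :dim]                  # (feat_dim, dim) orthonormal basis

def subspace_separation(basis_a, basis_b):
    """Separation between two subspaces: 1 minus the largest cosine of
    their principal angles (0 when they share a common direction)."""
    s = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    return 1.0 - s.max()

# Per the abstract's setup: features F_i per class, one subspace S_i each.
rng = np.random.default_rng(0)
F_1 = rng.normal(size=(200, 64))       # toy feature sets for two classes
F_2 = rng.normal(size=(200, 64))
S_1, S_2 = fit_subspace(F_1, 8), fit_subspace(F_2, 8)
print(subspace_separation(S_1, S_2))
```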
Deep Learning-Based Hybrid Approach for Phase Retrieval
IŞIL, ÇAĞATAY; Öktem, Sevinç Figen; KOÇ, AYKUT (2019-06-24)
We develop a phase retrieval algorithm that utilizes the hybrid-input-output (HIO) algorithm with a deep neural network (DNN). The DNN architecture, which is trained to remove the artifacts of HIO, is used iteratively with HIO to improve the reconstructions. The results demonstrate the effectiveness of the approach with little additional cost.
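A schematic of the hybrid loop this abstract describes, assuming NumPy, Fourier-magnitude measurements, a known support constraint, and a placeholder `denoiser` standing in for the trained DNN.

```python
import numpy as np

def hio_step(x, magnitudes, support, beta=0.9):
    """One hybrid input-output iteration: enforce the measured Fourier
    magnitudes, then relax the estimate outside the support constraint."""
    X = np.fft.fft2(x)
    X = magnitudes * np.exp(1j * np.angle(X))    # keep phase, fix magnitude
    x_new = np.real(np.fft.ifft2(X))
    return np.where(support, x_new, x - beta * x_new)

def hybrid_phase_retrieval(magnitudes, support, denoiser,
                           n_outer=10, n_hio=50):
    """The hybrid scheme sketched in the abstract: blocks of HIO
    iterations interleaved with a learned artifact-removal network."""
    x = np.random.rand(*magnitudes.shape)
    for _ in range(n_outer):
        for _ in range(n_hio):
            x = hio_step(x, magnitudes, support)
        x = denoiser(x)   # DNN trained to remove HIO artifacts (assumed)
    return x
```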
Feature Dimensionality Reduction with Variational Autoencoders in Deep Bayesian Active Learning
Ertekin Bolelli, Şeyda (2021-06-09)
Data annotation for training supervised learning algorithms is a very costly procedure. The aim of deep active learning methodologies is to achieve the highest performance in supervised deep learning models while annotating as few data points as possible. As the feature space of data grows, the application of linear models in active learning settings has become insufficient. Therefore, Deep Bayesian Active Learning methodology, which represents model uncertainty, has been widely studied. In this paper, ...
Constraints on a noncommutative physics scale with neutrino-electron scattering
Bilmiş, Selçuk; Li, H. B.; Li, J.; Liao, H. Y.; Lin, S. T.; Singh, V.; Wong, H. T.; Yildirim, I. O.; Yue, Q.; Zeyrek, Mehmet Tevfik (2012-04-19)
Neutrino-electron scatterings ($\nu$-$e$) are purely leptonic processes with robust standard model predictions. Their measurements can therefore provide constraints on physics beyond the standard model. Noncommutative (NC) field theories modify space-time commutation relations and allow neutrino electromagnetic couplings at the tree level. Their contribution to the neutrino-electron scattering cross section was derived. Constraints were placed on the NC scale parameter $\Lambda_{NC}$ from $\nu$-$e$ experiments with ...
Citation Formats
P. E. Duymuş, “Feature enhancement with deep generative models in deep Bayesian active learning,” M.S. - Master of Science, Middle East Technical University, 2022.