Generalized variational autoencoders for learning disentangled representation
Date: 2025-10-14
Authors: Moğultay, Hazal; KALKAN, SİNAN; Vural, Fatos T. Yarman
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 13376 views, 0 downloads
Abstract
The major goal of disentangled representation learning is to form a representation space that independently captures the underlying sources of variation responsible for generating the data. A pioneering family of approaches is based on Autoencoders (AE), such as Variational Autoencoders (VAE), β-Variational Autoencoders (β-VAE), σ-VAE, Control-VAE, Dynamic-VAE, and Learnable-VAE (L-VAE). These methods incorporate a disentanglement term, mostly expressed as a Kullback–Leibler divergence, along with several hyperparameters and regularization terms in the loss function. They assume an equal degree of disentanglement for the sources in different dimensions of the representation by using an empirically adjusted, fixed β parameter (β = 0 or ≥ 1) across all dimensions. However, given the unobservable nature of the data-generating process and the potential entanglement among different sources, we expect distinct dimensions of the learned representation to exhibit varying degrees of disentanglement. In this study, we generalize variational autoencoders and their variants by introducing a set of flexible weight functions and regularization terms for different dimensions. This generalization enables us to disentangle each latent dimension by learning the weight function of each dimension independently. We also propose a special case of the generalized VAE, called the Multidimensional Learnable Variational Autoencoder (mdL-VAE), which provides a better disentanglement–reconstruction trade-off without empirically tuning the hyperparameters of the loss function. The learned weight functions of mdL-VAE provide useful insights into the degree of entanglement among the underlying factors of variation.
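To make the idea in the abstract concrete, here is a minimal sketch of a β-VAE-style objective with a separate weight per latent dimension, as opposed to the single scalar β of standard β-VAE. This is an illustrative assumption based only on the abstract, not the paper's actual mdL-VAE formulation (which learns the weight functions rather than fixing them); the function and variable names are hypothetical.

```python
import numpy as np

def kl_per_dim(mu, logvar):
    # Closed-form KL divergence between N(mu, sigma^2) and N(0, 1),
    # computed separately for each latent dimension and averaged
    # over the batch axis. Shapes: (batch, latent_dim) -> (latent_dim,)
    return 0.5 * np.mean(mu**2 + np.exp(logvar) - logvar - 1.0, axis=0)

def generalized_vae_loss(recon_error, mu, logvar, betas):
    # Standard VAE uses beta = 1 and beta-VAE a single scalar beta;
    # the generalized form weights each dimension's KL term independently.
    kl = kl_per_dim(mu, logvar)                 # shape: (latent_dim,)
    return recon_error + float(np.sum(np.asarray(betas) * kl))

# A scalar beta repeated across dimensions recovers the beta-VAE objective,
# while unequal betas allow different disentanglement pressure per dimension.
mu = np.zeros((8, 3))
logvar = np.zeros((8, 3))
print(generalized_vae_loss(1.5, mu, logvar, [1.0, 0.5, 2.0]))  # KL is 0 here, so loss == 1.5
```

In the paper's setting these per-dimension weights are learned during training instead of being hand-tuned, which is what removes the empirical hyperparameter search the abstract mentions.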
Subject Keywords
Disentangled representation learning, Disentanglement measure, Hyperparameter optimization, Variational autoencoders
URI
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=105009697655&origin=inward
https://hdl.handle.net/11511/115236
Journal
Neurocomputing
DOI
https://doi.org/10.1016/j.neucom.2025.130752
Collections
Department of Computer Engineering, Article
Citation Formats
IEEE
H. Moğultay, S. KALKAN, and F. T. Y. Vural, “Generalized variational autoencoders for learning disentangled representation,” Neurocomputing, vol. 650, pp. 0–0, 2025, Accessed: 00, 2025. [Online]. Available: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=105009697655&origin=inward.