Multi-Modal Learning With Generalizable Nonlinear Dimensionality Reduction
Date
2019-08-26
Author
KAYA, SEMİH
Vural, Elif
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 226 views, 0 downloads
In practical machine learning settings, there often exist relations or links between data from different modalities. The goal of multi-modal learning algorithms is to efficiently use the information available in the different modalities to solve multi-modal classification or retrieval problems. In this study, we propose a multi-modal supervised representation learning algorithm based on nonlinear dimensionality reduction. Nonlinear embeddings often yield more flexible representations than their linear counterparts, especially when the data geometries of the different modalities are highly dissimilar. Based on recent performance bounds on nonlinear dimensionality reduction, we propose an optimization objective aiming to improve the intra- and inter-modal within-class compactness and between-class separation, as well as the Lipschitz regularity of the interpolator that generalizes the embedding to the whole data space. Experiments in multi-view face recognition and image-text retrieval applications show that the proposed method yields promising performance in comparison with state-of-the-art multi-modal learning methods.
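To make the generalization step concrete, the sketch below shows how a precomputed nonlinear embedding of training samples can be extended to unseen samples with a Gaussian RBF interpolator, as the abstract describes. This is not the authors' code: the function names, the parameters sigma and lam, and the way the training embedding Y is obtained are illustrative assumptions; in the paper, Y would come from the supervised multi-modal objective.

```python
# Minimal sketch (assumed names and parameters): generalize a learned nonlinear
# embedding to new samples via a ridge-regularized Gaussian RBF interpolator.
import numpy as np

def fit_rbf_interpolator(X, Y, sigma=1.0, lam=1e-3):
    """X: (N, d) training samples of one modality; Y: (N, r) their learned embedding."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    K = np.exp(-d2 / (2.0 * sigma ** 2))                        # Gaussian RBF kernel matrix
    W = np.linalg.solve(K + lam * np.eye(len(X)), Y)            # ridge-regularized RBF weights
    return W

def embed_new(X_new, X, W, sigma=1.0):
    """Map unseen samples into the same low-dimensional space as the training embedding."""
    d2 = np.sum((X_new[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ W

# Toy usage with random data; in practice Y is produced by the supervised objective.
X = np.random.randn(50, 20)                    # training samples (one modality)
Y = np.random.randn(50, 5)                     # hypothetical learned embedding coordinates
W = fit_rbf_interpolator(X, Y)
Z = embed_new(np.random.randn(3, 20), X, W)    # embeddings of unseen samples
```

The kernel width sigma and the regularization weight lam control how smoothly the interpolator varies between training points, which is the Lipschitz-regularity aspect the objective in the paper accounts for.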
Subject Keywords
Cross-modal learning, Multi-view learning, Cross-modal retrieval, Nonlinear embeddings, RBF interpolators
URI
https://hdl.handle.net/11511/42920
DOI
https://doi.org/10.1109/icip.2019.8803196
Collections
Department of Electrical and Electronics Engineering, Conference / Seminar
Suggestions (OpenMETU Core)
Cross-modal Representation Learning with Nonlinear Dimensionality Reduction
KAYA, SEMİH; Vural, Elif (2019-08-22)
In many problems in machine learning there exist relations between data collections from different modalities. The purpose of multi-modal learning algorithms is to efficiently use the information present in different modalities when solving multi-modal retrieval problems. In this work, a multi-modal representation learning algorithm is proposed, which is based on nonlinear dimensionality reduction. Compared to linear dimensionality reduction methods, nonlinear methods provide more flexible representations e...
Domain adaptation on graphs by learning graph topologies: theoretical analysis and an algorithm
Vural, Elif (The Scientific and Technological Research Council of Turkey, 2019-01-01)
Traditional machine learning algorithms assume that the training and test data have the same distribution, while this assumption does not necessarily hold in real applications. Domain adaptation methods take into account the deviations in data distribution. In this work, we study the problem of domain adaptation on graphs. We consider a source graph and a target graph constructed with samples drawn from data manifolds. We study the problem of estimating the unknown class labels on the target graph using the...
On numerical optimization theory of infinite kernel learning
Ozogur-Akyuz, S.; Weber, Gerhard Wilhelm (2010-10-01)
In Machine Learning algorithms, one of the crucial issues is the representation of the data. As the given data sources become heterogeneous and the data are large-scale, multiple kernel methods help to classify "nonlinear data". Nevertheless, the finite combinations of kernels are limited up to a finite choice. In order to overcome this discrepancy, a novel method of "infinite" kernel combinations is proposed with the help of infinite and semi-infinite programming regarding all elements in kernel space. Look...
MODELLING OF KERNEL MACHINES BY INFINITE AND SEMI-INFINITE PROGRAMMING
Ozogur-Akyuz, S.; Weber, Gerhard Wilhelm (2009-06-03)
In Machine Learning (ML) algorithms, one of the crucial issues is the representation of the data. As the data become heterogeneous and large-scale, single kernel methods become insufficient to classify nonlinear data. The finite combinations of kernels are limited up to a finite choice. In order to overcome this discrepancy, we propose a novel method of "infinite" kernel combinations for learning problems with the help of infinite and semi-infinite programming regarding all elements in kernel space. Looking...
Multiobjective evolutionary feature subset selection algorithm for binary classification
Deniz Kızılöz, Firdevsi Ayça; Coşar, Ahmet; Dökeroğlu, Tansel; Department of Computer Engineering (2016)
This thesis investigates the performance of multiobjective feature subset selection (FSS) algorithms combined with the state-of-the-art machine learning techniques for binary classification problem. Recent studies try to improve the accuracy of classification by including all of the features in the dataset, neglecting to determine the best performing subset of features. However, for some problems, the number of features may reach thousands, which will cause too much computation power to be consumed during t...
Citation Formats
IEEE
S. KAYA and E. Vural, “Multi-Modal Learning With Generalizable Nonlinear Dimensionality Reduction,” 2019, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/42920.