Out-of-Sample Generalizations for Supervised Manifold Learning for Classification

2016-03-01
Supervised manifold learning methods for data classification map high-dimensional data samples to a lower dimensional domain in a structure-preserving way while increasing the separation between different classes. Most manifold learning methods compute the embedding only of the initially available data; however, the generalization of the embedding to novel points, i.e., the out-of-sample extension problem, becomes especially important in classification applications. In this paper, we propose a semi-supervised method for building an interpolation function that provides an out-of-sample extension for general supervised manifold learning algorithms studied in the context of classification. The proposed algorithm computes a radial basis function interpolator that minimizes an objective function consisting of the total embedding error of unlabeled test samples, defined as their distance to the embeddings of the manifolds of their own class, as well as a regularization term that controls the smoothness of the interpolation function in a direction-dependent way. The class labels of the test data and the interpolation function parameters are estimated jointly through an iterative procedure. Experimental results on face and object images demonstrate the potential of the proposed out-of-sample extension algorithm for the classification of manifold-modeled data sets.
IEEE TRANSACTIONS ON IMAGE PROCESSING
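For readers unfamiliar with the out-of-sample extension idea, the sketch below illustrates a plain radial basis function interpolator that maps high-dimensional training samples to precomputed embeddings and then evaluates the fitted function at novel points. It is only a minimal NumPy illustration under simplifying assumptions: it omits the paper's semi-supervised joint estimation of test labels and its direction-dependent smoothness regularizer, and the function names and parameters (fit_rbf_interpolator, sigma, lam) are illustrative, not taken from the paper.

import numpy as np

def fit_rbf_interpolator(X_train, Y_embed, sigma=1.0, lam=1e-3):
    """Fit Gaussian RBF coefficients mapping training samples X_train (N x d)
    to their precomputed low-dimensional embeddings Y_embed (N x m).
    lam is a simple ridge term standing in for a generic smoothness control."""
    sq_dists = np.sum((X_train[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * sigma ** 2))      # N x N Gaussian kernel matrix
    coeffs = np.linalg.solve(K + lam * np.eye(len(X_train)), Y_embed)
    return coeffs

def extend_to_new_samples(X_new, X_train, coeffs, sigma=1.0):
    """Out-of-sample extension: embed novel samples X_new (n x d) with the
    interpolator learned on the training set."""
    sq_dists = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    K_new = np.exp(-sq_dists / (2.0 * sigma ** 2))  # n x N kernel between new and training samples
    return K_new @ coeffs                           # n x m embeddings of the novel samples

# Toy usage: in practice, Y would come from a supervised manifold learning step.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))    # 50 high-dimensional training samples
Y = rng.normal(size=(50, 3))     # their 3-D embeddings (placeholder values)
C = fit_rbf_interpolator(X, Y, sigma=2.0)
X_test = rng.normal(size=(5, 20))
print(extend_to_new_samples(X_test, X, C, sigma=2.0).shape)   # -> (5, 3)

In the paper, the interpolator parameters are instead optimized jointly with the unknown class labels of the test samples; the ridge term above merely approximates the role of the smoothness regularizer.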

Suggestions

A Study of the Classification of Low-Dimensional Data with Supervised Manifold Learning
Vural, Elif (2018-01-01)
Supervised manifold learning methods learn data representations by preserving the geometric structure of data while enhancing the separation between data samples from different classes. In this work, we propose a theoretical study of supervised manifold learning for classification. We consider nonlinear dimensionality reduction algorithms that yield linearly separable embeddings of training data and present generalization bounds for this type of algorithms. A necessary condition for satisfactory generalizat...
Learning Smooth Pattern Transformation Manifolds
Vural, Elif (2013-04-01)
Manifold models provide low-dimensional representations that are useful for processing and analyzing data in a transformation-invariant way. In this paper, we study the problem of learning smooth pattern transformation manifolds from image sets that represent observations of geometrically transformed signals. To construct a manifold, we build a representative pattern whose transformations accurately fit various input images. We examine two objectives of the manifold-building problem, namely, approximation a...
Approximation of pattern transformation manifolds with parametric dictionaries
Vural, Elif (2011-07-12)
The construction of low-dimensional models explaining high-dimensional signal observations provides concise and efficient data representations. In this paper, we focus on pattern transformation manifold models generated by in-plane geometric transformations of 2D visual patterns. We propose a method for computing a manifold by building a representative pattern such that its transformation manifold accurately fits a set of given observations. We present a solution for the progressive construction of the repr...
Learning semi-supervised nonlinear embeddings for domain-adaptive pattern recognition
Vural, Elif (2019-05-20)
We study the problem of learning nonlinear data embeddings in order to obtain representations for efficient and domain-invariant recognition of visual patterns. Given observations of a training set of patterns from different classes in two different domains, we propose a method to learn a nonlinear mapping of the data samples from different domains into a common domain. The nonlinear mapping is learnt such that the class means of different domains are mapped to nearby points in the common domain in order to...
Distance-based discretization of parametric signal manifolds
Vural, Elif (2010-06-28)
The characterization of signals and images with manifolds often leads to efficient dimensionality reduction algorithms based on manifold distance computation for analysis or classification tasks. We propose in this paper a method for the discretization of signal manifolds given in a parametric form. We present an iterative algorithm for the selection of samples on the manifold that makes it possible to minimize the average error in the manifold distance computation. Experimental results with image appearance manifolds d...
Citation Formats
E. Vural, “Out-of-Sample Generalizations for Supervised Manifold Learning for Classification,” IEEE TRANSACTIONS ON IMAGE PROCESSING, pp. 1410–1424, 2016, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/39397.