Multi-modal learning with generalizable nonlinear dimensionality reduction

Kaya, Semih
Thanks to significant advances in information technology, people can acquire many types of data about the world. Such data often comprise multiple features from different domains. Widely used machine learning methods exploit distinctive features of the data to reach the desired outputs. Numerous studies demonstrate that machine learning algorithms that make use of multi-modal representations of data have more potential than methods with a single-modal structure. This potential stems from the mutual agreement between modalities and the availability of complementary information. In this thesis, we introduce a multi-modal supervised learning algorithm that represents the data in a lower-dimensional space. We aim to increase within-class similarity and between-class discrimination for intra- and inter-modal exemplars via a generalizable nonlinear interpolator that satisfies Lipschitz continuity. To measure the performance of the proposed supervised learning algorithm, we conducted several multi-modal face recognition and image-text retrieval experiments on multi-modal data sets frequently used in the literature, and achieved quite satisfactory classification and retrieval accuracy in comparison with existing multi-modal learning approaches. These experimental findings suggest that incorporating the generalizability of the embedding to the whole ambient space and to unseen test data into the learning objective yields promising performance gains in multi-modal representation learning.
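To illustrate the core idea of a generalizable nonlinear embedding, the sketch below uses a Gaussian RBF interpolator to extend a low-dimensional embedding from training samples to arbitrary points in the ambient space. This is a minimal assumption-laden sketch, not the thesis's actual algorithm: the supervised embedding coordinates `Y` are taken as given here (in the thesis they are learned jointly with the interpolator), the targets in the toy example are hypothetical, and the kernel width `sigma` stands in for the Lipschitz-continuity control.

```python
import numpy as np

def fit_rbf_interpolator(X, Y, sigma, reg=1e-8):
    """Fit Gaussian RBF coefficients C so that f(x_i) ~ y_i.

    X: (n, d) training samples in the ambient space
    Y: (n, k) their low-dimensional embeddings (assumed learned elsewhere)
    sigma: kernel width; a larger sigma yields a smoother interpolator
           with a smaller Lipschitz constant
    """
    # Pairwise squared distances between training samples.
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-D2 / sigma ** 2)
    # Small ridge term keeps the kernel system well conditioned.
    C = np.linalg.solve(K + reg * np.eye(len(X)), Y)
    return C

def embed(X_new, X_train, C, sigma):
    """Map (possibly unseen) points into the embedding space."""
    D2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-D2 / sigma ** 2) @ C

# Toy example: two well-separated classes embedded into one dimension.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (10, 5)),
               rng.normal(2.0, 0.3, (10, 5))])
Y = np.vstack([np.full((10, 1), -1.0),
               np.full((10, 1), 1.0)])  # hypothetical supervised targets
C = fit_rbf_interpolator(X, Y, sigma=2.0)
Z = embed(X, X, C, sigma=2.0)  # training points land near their targets
```

Because the interpolator is defined on the whole ambient space, the same `embed` call maps unseen test samples without retraining, which is the generalizability property the abstract refers to.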