Dictionary learning for efficient classification with 1-sparse representations

2018
Engin, Ege
Sparse representations aim to express a given signal as a linear combination of a small number of signals that capture its characteristics well. Dictionary models that allow sparse representations have proven quite useful for the processing and analysis of data in recent years. In particular, learning dictionaries adapted to the characteristics of each data class in a supervised learning problem, and representing the data with the learned dictionaries, significantly improves the accuracy of classifiers. However, large dictionary sizes and the complexity of computing sparse representations may limit the applicability of these methods, especially on platforms with limited storage and computational resources. In this thesis, we study the problem of supervised dictionary learning for fast and efficient classification of test samples. In order to achieve low computational complexity and efficient use of memory, our method learns analytically represented supervised dictionaries that allow accurate classification of test samples based on 1-sparse representations. We adopt a representation of dictionary atoms in a two-dimensional analytical basis, where the atoms are learned with respect to an objective involving their distance to samples from the same class and from different classes, as well as an incoherence term encouraging variability among dictionary atoms. The performance of the proposed method is evaluated with experiments on different image datasets. Comparison of the method to reference supervised and unsupervised dictionary learning methods suggests that it provides satisfactory classification performance under 1-sparse signal representations.
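The classification rule the abstract describes can be illustrated with a minimal sketch. The code below is a hypothetical toy example, not the thesis method: it assumes one dictionary of unit-norm atoms per class (here, random toy dictionaries rather than the learned analytical atoms of the thesis) and assigns a test sample to the class whose dictionary contains the single best-matching atom, which is exactly the 1-sparse representation setting.

```python
import numpy as np

# Hypothetical sketch: 1-sparse classification with per-class dictionaries.
# Each class c has a dictionary D_c whose columns are unit-norm atoms; a test
# sample is assigned to the class owning the atom with the largest absolute
# correlation, i.e., the best 1-sparse approximation of the sample.

rng = np.random.default_rng(0)

def normalize(D):
    """Scale each column (atom) to unit Euclidean norm."""
    return D / np.linalg.norm(D, axis=0, keepdims=True)

# Toy dictionaries: 2 classes, signal dimension 16, 8 atoms per class.
dicts = {c: normalize(rng.standard_normal((16, 8))) for c in (0, 1)}

def classify(x, dicts):
    """Return the label of the class whose best single atom matches x most."""
    best_class, best_corr = None, -np.inf
    for c, D in dicts.items():
        corr = np.max(np.abs(D.T @ x))  # best single-atom correlation
        if corr > best_corr:
            best_class, best_corr = c, corr
    return best_class

# A sample built from an atom of class 1 (plus slight noise) should be
# assigned to class 1 by the 1-sparse rule.
x = dicts[1][:, 3] + 0.01 * rng.standard_normal(16)
print(classify(x, dicts))
```

Because the representation is 1-sparse, classification reduces to a single matrix-vector product per class followed by a maximum, which is the source of the low computational cost that motivates the thesis.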
Citation Formats
E. Engin, “Dictionary learning for efficient classification with 1-sparse representations,” M.S. - Master of Science, 2018.