On numerical optimization theory of infinite kernel learning
Date: 2010-10-01
Author: Ozogur-Akyuz, S.; Weber, Gerhard Wilhelm
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 224 views, 0 downloads
Abstract
In machine learning algorithms, one of the crucial issues is the representation of the data. As the given data sources become heterogeneous and the data grow large-scale, multiple kernel methods help to classify "nonlinear data". Nevertheless, finite combinations of kernels are limited to a finite choice. To overcome this limitation, a novel method of "infinite" kernel combinations is proposed with the help of infinite and semi-infinite programming, regarding all elements in the kernel space. Looking at all infinitesimally fine convex combinations of the kernels from the infinite kernel set, the margin is maximized subject to an infinite number of constraints with a compact index set and an additional (Riemann-Stieltjes) integral constraint due to the combinations. After a parametrization in the space of probability measures, the problem becomes semi-infinite. We adapt well-known numerical methods to our infinite kernel learning model and analyze the existence of solutions and convergence of the given algorithms. We implement our new algorithm, called "infinite" kernel learning (IKL), on heterogeneous data sets by using the exchange method and the conceptual reduction method, which are well-known numerical techniques from semi-infinite programming. The results show that our IKL approach efficiently improves classification accuracy on heterogeneous data compared to classical single-kernel approaches.
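The abstract describes the IKL training loop only at a high level. As a rough illustration, the toy sketch below mimics one piece of it, the exchange method over a compact index set of Gaussian kernel widths: at each step an SVM is trained on the current finite kernel combination, then the candidate kernel whose constraint is most violated (scored here by the quadratic form alpha^T Y K_sigma Y alpha) is exchanged into the active set. This is not the authors' implementation: the convex combination is kept uniform rather than optimized as a probability measure, the search grid `candidate_sigmas` is an assumption, and scikit-learn's SVC stands in for the paper's semi-infinite solver.

```python
# Toy sketch of the exchange method for infinite kernel learning (IKL).
# Assumptions (not from the paper): Gaussian kernels indexed by sigma on a
# compact interval, uniform weights over the active set instead of an
# optimized probability measure, and scikit-learn's SVC as the inner solver.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def gaussian_kernel(X, Y, sigma):
    """k_sigma(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma**2))

X, y = make_classification(n_samples=120, n_features=5, random_state=0)

# Compact index set [0.1, 10.0] of kernel widths, discretized only for the
# candidate search; the active kernel set stays finite at every iteration.
candidate_sigmas = np.linspace(0.1, 10.0, 200)
active = [1.0]  # initial finite set of kernel parameters

for _ in range(5):
    # Uniform (toy) convex combination of the currently active kernels.
    K = sum(gaussian_kernel(X, X, s) for s in active) / len(active)
    clf = SVC(kernel="precomputed", C=1.0).fit(K, y)

    # Recover the signed dual variables alpha_i * y_i of the fitted SVM.
    a = np.zeros(len(y))
    a[clf.support_] = clf.dual_coef_.ravel()

    # Exchange step: among all candidate indices sigma, pick the kernel
    # whose constraint is most violated at the current dual solution.
    scores = [a @ gaussian_kernel(X, X, s) @ a for s in candidate_sigmas]
    best = float(candidate_sigmas[int(np.argmax(scores))])
    if min(abs(best - s) for s in active) < 1e-6:
        break  # no new index to exchange in: stop
    active.append(best)

print("active kernel parameters:", np.round(active, 3))
```

The sketch only conveys the alternation between solving a finite subproblem and searching the compact index set for a violated constraint; the paper analyzes existence of solutions and convergence for this exchange scheme and for the conceptual reduction method on the full semi-infinite program.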
Subject Keywords: Machine Learning, Infinite Kernel Learning, Semi-Infinite Optimization, Infinite Programming, Support Vector Machines, Continuous Optimization, Discretization, Exchange Method, Conceptual Reduction, Triangulation
URI: https://hdl.handle.net/11511/56360
Journal: JOURNAL OF GLOBAL OPTIMIZATION
DOI: https://doi.org/10.1007/s10898-009-9488-x
Collections: Graduate School of Applied Mathematics, Article
Suggestions
MODELLING OF KERNEL MACHINES BY INFINITE AND SEMI-INFINITE PROGRAMMING
Ozogur-Akyuz, S.; Weber, Gerhard Wilhelm (2009-06-03)
In Machine Learning (ML) algorithms, one of the crucial issues is the representation of the data. As the data become heterogeneous and large-scale, single kernel methods become insufficient to classify nonlinear data. The finite combinations of kernels are limited up to a finite choice. In order to overcome this discrepancy, we propose a novel method of "infinite" kernel combinations for learning problems with the help of infinite and semi-infinite programming regarding all elements in kernel space. Looking...
Machine Learning over Encrypted Data With Fully Homomorphic Encryption
Kahya, Ayşegül; Cenk, Murat; Department of Cryptography (2022-8-26)
When machine learning algorithms train on a large data set, the result will be more realistic. Big data, distribution of big data, and the study of learning algorithms on distributed data are popular research topics of today. Encryption is a basic need, especially when storing data with a high degree of confidentiality, such as medical data. Classical encryption methods cannot meet this need because when texts encrypted with classical encryption methods are distributed, and the distributed data set is decry...
Adapted Infinite Kernel Learning by Multi-Local Algorithm
Akyuz, Sureyya Ozogur; Ustunkar, Gurkan; Weber, Gerhard Wilhelm (2016-05-01)
The interplay of machine learning (ML) and optimization methods is an emerging field of artificial intelligence. Both ML and optimization are concerned with modeling of systems related to real-world problems. Parameter selection for classification models is an important task for ML algorithms. In statistical learning theory, cross-validation (CV) which is the most well-known model selection method can be very time consuming for large data sets. One of the recent model selection techniques developed for supp...
On Equivalence Relationships Between Classification and Ranking Algorithms
Ertekin Bolelli, Şeyda (2011-10-01)
We demonstrate that there are machine learning algorithms that can achieve success for two separate tasks simultaneously, namely the tasks of classification and bipartite ranking. This means that advantages gained from solving one task can be carried over to the other task, such as the ability to obtain conditional density estimates, and an order-of-magnitude reduction in computational time for training the algorithm. It also means that some algorithms are robust to the choice of evaluation metric used; the...
Cross-modal Representation Learning with Nonlinear Dimensionality Reduction
KAYA, SEMİH; Vural, Elif (2019-08-22)
In many problems in machine learning there exist relations between data collections from different modalities. The purpose of multi-modal learning algorithms is to efficiently use the information present in different modalities when solving multi-modal retrieval problems. In this work, a multi-modal representation learning algorithm is proposed, which is based on nonlinear dimensionality reduction. Compared to linear dimensionality reduction methods, nonlinear methods provide more flexible representations e...
Citation Formats
IEEE
S. Ozogur-Akyuz and G. W. Weber, “On numerical optimization theory of infinite kernel learning,” JOURNAL OF GLOBAL OPTIMIZATION, pp. 215–239, 2010, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/56360.