Robust feature space separation for deep convolutional neural network training
Date
2021-11-01
Author
Sekmen, Ali
Parlaktuna, Mustafa
Abdul-Malek, Ayad
Erdemir, Erdem
Koku, Ahmet Buğra
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 112 views, 0 downloads
Abstract
This paper introduces two deep convolutional neural network training techniques that lead to more robust feature subspace separation than traditional training. Assume the dataset has $M$ labels. The first method creates $M$ deep convolutional neural networks, $\{\mathrm{DCNN}_i\}_{i=1}^{M}$. Each network $\mathrm{DCNN}_i$ is composed of a convolutional neural network ($\mathrm{CNN}_i$) and a fully connected neural network ($\mathrm{FCNN}_i$). In training, a set of projection matrices $\{P_i\}_{i=1}^{M}$ is created and adaptively updated as representations of the feature subspaces $\{S_i\}_{i=1}^{M}$. A rejection value is computed for each training sample based on its projections onto the feature subspaces. Each $\mathrm{FCNN}_i$ acts as a binary classifier with a cost function whose main parameter is the rejection values. A threshold value $t_i$ is determined for the $i$th network $\mathrm{DCNN}_i$, and a testing strategy utilizing $\{t_i\}_{i=1}^{M}$ is also introduced. The second method creates a single DCNN and computes a cost function whose parameters depend on subspace separations, measured via the geodesic distance on the Grassmannian manifold between each subspace $S_i$ and the sum of all remaining subspaces $\{S_j\}_{j=1, j\neq i}^{M}$. The proposed methods are tested with multiple network topologies. It is shown that while the first method works better for smaller networks, the second method performs better for complex architectures.
URI
https://link.springer.com/article/10.1007/s44163-021-00013-1
https://hdl.handle.net/11511/100237
Journal
Discover Artificial Intelligence
DOI
https://doi.org/10.1007/s44163-021-00013-1
Collections
Department of Mechanical Engineering, Article
Suggestions
A temporal neural network model for constructing connectionist expert system knowledge bases
Alpaslan, Ferda Nur (Elsevier BV, 1996-04-01)
This paper introduces a temporal feedforward neural network model that can be applied to a number of neural network application areas, including connectionist expert systems. The neural network model has a multi-layer structure, i.e. the number of layers is not limited. Also, the model has the flexibility of defining output nodes in any layer. This is especially important for connectionist expert system applications.
Robust background normalization method for one-channel microarrays
Akal, Tülay; Purutçuoğlu Gazi, Vilda; Weber, Gerhard-Wilhelm (Walter de Gruyter GmbH, 2017-04-01)
Background: Microarray technology aims to measure the amount of change in transcribed messages for each gene's RNA by quantifying the colour intensity on the arrays. But due to differing experimental conditions, these measurements can include both systematic and random erroneous signals. For this reason, we present a novel gene expression index, called multi-RGX (Multiple-probe Robust Gene Expression Index), for one-channel microarrays.
Efficient User Grouping for Hybrid Beamforming in Single Carrier Wideband Massive MIMO Channels
Kilcioglu, Emre; Güvensen, Gökhan Muzaffer (2021-01-01)
In this paper, three types of user grouping algorithms utilizing our own performance metric are investigated for single carrier downlink wideband spatially correlated massive MIMO channels using a hybrid beamforming structure motivated by the joint spatial division and multiplexing (JSDM) framework. The user grouping procedure consists of two stages. Internally, our own metric, called the achievable information rate (AIR), is calculated given a user grouping input by considering both inter-grou...
Fast constraint graph generation algorithms for VLSI layout compaction
Torunoglu, I. H.; Askar, M. (1994-04-14)
Three new fast constraint graph generation algorithms, PPSS-1D, PPSS-1Dk and PPSS-2D, are presented for VLSI layout compaction. The algorithms are based on parallel plane sweep shadowing (PPSS). The PPSS-1D algorithm improves the time spent on searching processes from O(N^1.5) to O(G*N) with extra O(G) memory, where G is independent of N. PPSS-1Dk, the successor to PPSS-1D, eliminates the possibility of generating unnecessary constraints using extra O(k*G) memory. PPSS-2D improves the O(NlogN) so...
A linear approximation for training Recurrent Random Neural Networks
Halıcı, Uğur (1998-01-01)
In this paper, a linear approximation for Gelenbe's Learning Algorithm developed for training Recurrent Random Neural Networks (RRNN) is proposed. Gelenbe's learning algorithm uses gradient descent of a quadratic error function in which the main computational effort is for obtaining the inverse of an n-by-n matrix. In this paper, the inverse of this matrix is approximated with a linear term and the efficiency of the approximated algorithm is examined when RRNN is trained as autoassociative memory.
Citation Formats
IEEE
A. Sekmen, M. Parlaktuna, A. Abdul-Malek, E. Erdemir, and A. B. Koku, "Robust feature space separation for deep convolutional neural network training," Discover Artificial Intelligence, vol. 1, no. 12, pp. 1–11, 2021, Accessed: 00, 2022. [Online]. Available: https://link.springer.com/article/10.1007/s44163-021-00013-1.