A linear approximation for training Recurrent Random Neural Networks
Date
1998-01-01
Author
Halıcı, Uğur
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
In this paper, a linear approximation for Gelenbe's learning algorithm, developed for training Recurrent Random Neural Networks (RRNN), is proposed. Gelenbe's learning algorithm uses gradient descent on a quadratic error function, in which the main computational effort lies in obtaining the inverse of an n-by-n matrix. In this paper, the inverse of this matrix is approximated with a linear term, and the efficiency of the approximated algorithm is examined when the RRNN is trained as an autoassociative memory.
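The abstract does not spell out the exact form of the linear approximation, so the following is only an illustrative sketch of the general idea: replacing an expensive n-by-n matrix inverse with a first-order (Neumann-series) term, (I - A)^(-1) ≈ I + A, which is a reasonable approximation when the spectral radius of A is well below 1. The matrices and tolerance here are hypothetical examples, not taken from the paper.

```python
def approx_inverse(A):
    """First-order Neumann approximation: (I - A)^(-1) ≈ I + A.

    Replaces an O(n^3) inversion with an O(n^2) addition; the error is
    governed by the dropped terms A^2 + A^3 + ..., so this is only
    accurate when the entries (spectral radius) of A are small.
    """
    n = len(A)
    return [[(1.0 if i == j else 0.0) + A[i][j] for j in range(n)]
            for i in range(n)]

def exact_inverse_2x2(M):
    """Exact inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det],
            [-c / det, a / det]]

# Hypothetical small-norm matrix A (entries well below 1).
A = [[0.10, 0.05],
     [0.02, 0.10]]
I_minus_A = [[1.0 - 0.10, -0.05],
             [-0.02, 1.0 - 0.10]]

exact = exact_inverse_2x2(I_minus_A)
approx = approx_inverse(A)

# Maximum entrywise error of the linear approximation.
err = max(abs(exact[i][j] - approx[i][j])
          for i in range(2) for j in range(2))
```

For this example the largest dropped term is on the order of A's squared entries (about 0.01), so the approximation tracks the exact inverse closely; as A's norm grows toward 1, the error of the truncated series grows rapidly.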
URI
https://hdl.handle.net/11511/52868
Conference Name
13th International Symposium on Computer and Information Sciences (ISCIS 98)
Collections
Department of Electrical and Electronics Engineering, Conference / Seminar
Suggestions
A Modified Parallel Learning Vector Quantization Algorithm for Real-Time Hardware Applications
Alkim, Erdem; AKLEYLEK, SEDAT; KILIÇ, ERDAL (2017-10-01)
In this study, a modified learning vector quantization (LVQ) algorithm is proposed. For this purpose, the relevance LVQ (RLVQ) algorithm is efficiently combined with a reinforcement mechanism. With this mechanism, it is shown that the proposed algorithm is not constantly affected by relevance-irrelevance input dimensions or by the winning of the same neuron. A hardware design of the proposed scheme is also given to illustrate the performance of the algorithm. The proposed algorithm is compared to the corresponding...
A temporal neural network model for constructing connectionist expert system knowledge bases
Alpaslan, Ferda Nur (Elsevier BV, 1996-04-01)
This paper introduces a temporal feedforward neural network model that can be applied to a number of neural network application areas, including connectionist expert systems. The neural network model has a multi-layer structure, i.e. the number of layers is not limited. Also, the model has the flexibility of defining output nodes in any layer. This is especially important for connectionist expert system applications.
A 2-D unsteady Navier-Stokes solution method with overlapping/overset moving grids
Tuncer, İsmail Hakkı (1996-01-01)
A simple, robust numerical algorithm to localize intergrid boundary points and to interpolate unsteady solution variables across 2-D, overset/overlapping, structured computational grids is presented. Overset/ overlapping grids are allowed to move in time relative to each other. The intergrid boundary points are localized in terms of three grid points on the donor grid by a directional search algorithm. The final parameters of the search algorithm give the interpolation weights at the interpolation point. Th...
A 2-D Navier-Stokes solution method with overset moving grids
Tuncer, İsmail Hakkı (1996-01-01)
A simple, robust numerical algorithm to localize moving boundary points and to interpolate unsteady solution variables across 2-D, arbitrarily overset computational grids is presented. Overset grids are allowed to move in time relative to each other. The intergrid boundary points are localized in terms of three grid points on the donor grid by a directional search algorithm. The parameters of the search algorithm give the interpolation weights at the localized boundary point. The method is independent of nu...
An evolutionary algorithm for multiple criteria problems
Soylu, Banu; Köksalan, Murat; Department of Industrial Engineering (2007)
In this thesis, we develop an evolutionary algorithm for approximating the Pareto frontier of multi-objective continuous and combinatorial optimization problems. The algorithm tries to evolve the population of solutions towards the Pareto frontier and distribute it over the frontier in order to maintain a well-spread representation. The fitness score of each solution is computed with a Tchebycheff distance function and non-dominating sorting approach. Each solution chooses its own favorable weights accordin...
Citation Formats
U. Halıcı, “A linear approximation for training Recurrent Random Neural Networks,” 1998, vol. 53, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/52868.