A linear approximation for training Recurrent Random Neural Networks

1998-01-01
In this paper, a linear approximation for Gelenbe's learning algorithm, developed for training Recurrent Random Neural Networks (RRNN), is proposed. Gelenbe's learning algorithm uses gradient descent of a quadratic error function, in which the main computational effort lies in obtaining the inverse of an n-by-n matrix. In this paper, the inverse of this matrix is approximated with a linear term, and the efficiency of the approximated algorithm is examined when the RRNN is trained as an autoassociative memory.
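The abstract's key idea, replacing a full n-by-n matrix inversion with a linear term, can be illustrated with a first-order Neumann-series truncation. This is a generic sketch, not the paper's exact formulation: the specific matrix in Gelenbe's learning rule is not given here, so a matrix of the assumed form (I - W) with small-norm W stands in for it.

```python
import numpy as np

# Hypothetical illustration: if the matrix to invert at each gradient
# step can be written as (I - W) with a small-norm W, the Neumann series
#     (I - W)^{-1} = I + W + W^2 + ...
# truncated after the linear term gives the approximation I + W,
# avoiding the O(n^3) cost of a full inversion.

rng = np.random.default_rng(0)
n = 5
W = 0.1 * rng.random((n, n))          # small-norm matrix (assumed for the sketch)

exact = np.linalg.inv(np.eye(n) - W)  # full matrix inverse
approx = np.eye(n) + W                # linear (first-order) approximation

# Relative error of the truncation; small when the norm of W is small.
err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"relative error of linear approximation: {err:.4f}")
```

The quality of such an approximation degrades as the norm of W grows, which is why the paper examines the approximated algorithm's efficiency empirically on the autoassociative-memory task.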
13th International Symposium on Computer and Information Sciences (ISCIS 98)

Suggestions

A Modified Parallel Learning Vector Quantization Algorithm for Real-Time Hardware Applications
Alkim, Erdem; AKLEYLEK, SEDAT; KILIÇ, ERDAL (2017-10-01)
In this study, a modified learning vector quantization (LVQ) algorithm is proposed. For this purpose, the relevance LVQ (RLVQ) algorithm is efficiently combined with a reinforcement mechanism. With this mechanism, it is shown that the proposed algorithm is not persistently affected by irrelevant input dimensions or by the repeated winning of the same neuron. A hardware design of the proposed scheme is also given to illustrate the performance of the algorithm. The proposed algorithm is compared to the corresponding...
A temporal neural network model for constructing connectionist expert system knowledge bases
Alpaslan, Ferda Nur (Elsevier BV, 1996-04-01)
This paper introduces a temporal feedforward neural network model that can be applied to a number of neural network application areas, including connectionist expert systems. The neural network model has a multi-layer structure, i.e. the number of layers is not limited. Also, the model has the flexibility of defining output nodes in any layer. This is especially important for connectionist expert system applications.
A 2-D unsteady Navier-Stokes solution method with overlapping/overset moving grids
Tuncer, İsmail Hakkı (1996-01-01)
A simple, robust numerical algorithm to localize intergrid boundary points and to interpolate unsteady solution variables across 2-D, overset/overlapping, structured computational grids is presented. Overset/overlapping grids are allowed to move in time relative to each other. The intergrid boundary points are localized in terms of three grid points on the donor grid by a directional search algorithm. The final parameters of the search algorithm give the interpolation weights at the interpolation point. Th...
A 2-D Navier-Stokes solution method with overset moving grids
Tuncer, İsmail Hakkı (1996-01-01)
A simple, robust numerical algorithm to localize moving boundary points and to interpolate unsteady solution variables across 2-D, arbitrarily overset computational grids is presented. Overset grids are allowed to move in time relative to each other. The intergrid boundary points are localized in terms of three grid points on the donor grid by a directional search algorithm. The parameters of the search algorithm give the interpolation weights at the localized boundary point. The method is independent of nu...
An evolutionary algorithm for multiple criteria problems
Soylu, Banu; Köksalan, Murat; Department of Industrial Engineering (2007)
In this thesis, we develop an evolutionary algorithm for approximating the Pareto frontier of multi-objective continuous and combinatorial optimization problems. The algorithm tries to evolve the population of solutions towards the Pareto frontier and distribute it over the frontier in order to maintain a well-spread representation. The fitness score of each solution is computed with a Tchebycheff distance function and non-dominating sorting approach. Each solution chooses its own favorable weights accordin...
Citation Formats
U. Halıcı, “A linear approximation for training Recurrent Random Neural Networks,” 1998, vol. 53, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/52868.