A linear approximation for training Recurrent Random Neural Networks

1998-01-01
Halıcı, Uğur
Karaoz, E
In this paper, a linear approximation for Gelenbe's Learning Algorithm, developed for training Recurrent Random Neural Networks (RRNN), is proposed. Gelenbe's learning algorithm uses gradient descent on a quadratic error function, in which the main computational effort is obtaining the inverse of an n-by-n matrix. Here, the inverse of this matrix is approximated by a linear term, and the efficiency of the approximated algorithm is examined when the RRNN is trained as an autoassociative memory.
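The core idea of replacing an expensive matrix inverse with a linear term can be sketched with a first-order Neumann-series expansion, (I - W)^{-1} ≈ I + W, which holds when the spectral radius of W is well below 1. This is an illustrative assumption: the abstract does not specify the exact matrix or the exact linear term used in the paper, so the code below only demonstrates the general approximation strategy and its cost advantage (no O(n^3) inversion).

```python
import numpy as np

def approx_inverse(W):
    """Linear (first-order Neumann) approximation of (I - W)^{-1}.

    Assumption: this stands in for the paper's linear term; the exact
    matrix inverted in Gelenbe's algorithm depends on the network weights.
    Cost is O(n^2) (one matrix addition) versus O(n^3) for a full inverse.
    """
    n = W.shape[0]
    return np.eye(n) + W

rng = np.random.default_rng(0)
n = 5
W = 0.05 * rng.random((n, n))  # small entries keep the spectral radius small

exact = np.linalg.inv(np.eye(n) - W)   # exact O(n^3) inversion
approx = approx_inverse(W)             # linear approximation, no inversion

# Relative error shrinks as the spectral radius of W decreases,
# since the neglected terms are W^2 + W^3 + ...
err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"relative error: {err:.2e}")
```

The trade-off illustrated here matches the abstract's motivation: the approximation avoids the dominant inversion cost at each training step, at the price of an error that grows with the magnitude of the matrix entries.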