Reinforcement learning with internal expectation in the random neural networks for cascaded decisions
Date: 2001-10-16
Author: Halıcı, Uğur
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 154 views, 0 downloads
The reinforcement learning scheme proposed in Halici (J. Biosystems 40 (1997) 83) for the random neural network (RNN) (Neural Computation 1 (1989) 502) is based on reward and performs well in stationary environments. However, when the environment is not stationary, it suffers from getting stuck on the previously learned action, and extinction is not possible. To overcome this problem, the reinforcement scheme is extended in Halici (Eur. J. Oper. Res. 126 (2000) 288) by introducing a new weight update rule (E-rule) that takes into consideration the internal expectation of reinforcement. Although the E-rule is proposed for the RNN, it can also be used for training learning automata or other intelligent systems based on reinforcement learning. This paper examines the behavior of the learning scheme with internal expectation in environments where the reinforcement is obtained after a sequence of cascaded decisions. Simulation results show that the RNN learns well and that extinction is possible even for cases with several decision steps and hundreds of possible decision paths.
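A minimal Python sketch of the idea the abstract describes: reward-based reinforcement with an internal expectation term, applied to a sequence of cascaded decisions in a non-stationary environment. The preference tables, the alpha/beta rates, the surprise-driven update, and the toy environment below are all illustrative assumptions; this is not the paper's RNN weight-update rule (E-rule), only a simplified learning-automaton analogue of it.

import random

# Toy learning automaton with an internal reinforcement expectation,
# in the spirit of the E-rule discussed above. All names and update
# forms here are illustrative assumptions, not the paper's equations.

N_STEPS = 3      # number of cascaded decision steps
N_ACTIONS = 4    # choices available at each step

# One preference table per decision step; reinforcement arrives only
# after the whole decision path has been taken.
prefs = [[1.0] * N_ACTIONS for _ in range(N_STEPS)]
expectation = 0.0            # internal expectation of reinforcement
alpha, beta = 0.1, 0.2       # learning rate, expectation-tracking rate

def choose(step):
    """Sample an action in proportion to its preference."""
    total = sum(prefs[step])
    r = random.uniform(0.0, total)
    acc = 0.0
    for action, p in enumerate(prefs[step]):
        acc += p
        if r <= acc:
            return action
    return N_ACTIONS - 1

def episode(env):
    """Run one cascaded-decision episode and update preferences."""
    global expectation
    path = [choose(step) for step in range(N_STEPS)]
    reward = env(path)
    # Reward relative to internal expectation: a negative difference
    # weakens the chosen path, which is what permits extinction when
    # the environment changes.
    surprise = reward - expectation
    for step, action in enumerate(path):
        prefs[step][action] = max(1e-6, prefs[step][action] + alpha * surprise)
    expectation += beta * surprise
    return reward

# Non-stationary toy environment: one decision path is rewarded at
# first, then the rewarded path switches, so the previously learned
# behavior has to extinguish.
target = [0] * N_STEPS
def env(path):
    return 1.0 if path == target else 0.0

for t in range(4000):
    if t == 2000:
        target = [1] * N_STEPS   # the environment changes mid-run
    episode(env)

best = [max(range(N_ACTIONS), key=lambda a: prefs[s][a]) for s in range(N_STEPS)]
print("preferred decision path after training:", best)

The expectation term is what distinguishes this from plain reward-only reinforcement: if the expectation were held at zero, the surprise could never become negative, so the old path would keep its strength after the environment changes and extinction would be impossible.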
Subject Keywords: General Biochemistry, Genetics and Molecular Biology; Modelling and Simulation; Statistics and Probability; Applied Mathematics; General Medicine
URI: https://hdl.handle.net/11511/49314
Journal: BioSystems
DOI: https://doi.org/10.1016/s0303-2647(01)00144-7
Collections: Department of Electrical and Electronics Engineering, Article
Suggestions
Reinforcement learning with internal expectation for the random neural network
Halıcı, Uğur (Elsevier BV, 2000-10-01)
The reinforcement learning scheme proposed in Halici (1997) (Halici, U., 1997. Journal of Biosystems 40 (1/2), 83-91) for the random neural network (Gelenbe, E., 1989b. Neural Computation 1 (4), 502-510) is based on reward and performs well for stationary environments. However, when the environment is not stationary it suffers from getting stuck on the previously learned action and extinction is not possible. In this paper, the reinforcement learning scheme is extended by introducing a weight update rule wh...
Domain-Structured Chaos in a Hopfield Neural Network
Akhmet, Marat (World Scientific Pub Co Pte Lt, 2019-12-30)
In this paper, we provide a new method for constructing chaotic Hopfield neural networks. Our approach is based on structuring the domain to form a special set through the discrete evolution of the network state variables. In the chaotic regime, the formed set is invariant under the system governing the dynamics of the neural network. The approach can be viewed as an extension of the unimodality technique for one-dimensional map, thereby generating chaos from higher-dimensional systems. We show that the dis...
Reinforcement learning control for autorotation of a simple point-mass helicopter model
Kopşa, Kadircan; Kutay, Ali Türker; Department of Aerospace Engineering (2018)
This study presents an application of an actor-critic reinforcement learning method to a simple point-mass model helicopter guidance problem during autorotation. A point-mass model of an OH-58A helicopter in autorotation was built. A reinforcement learning agent was trained by a model-free asynchronous actor-critic algorithm, where training episodes were parallelized on a multi-core CPU. The objective of the training was defined as achieving near-zero horizontal and vertical kinetic energies at the instant of t...
An integrative explanation of action
Beni, Majid Davoody (Elsevier BV, 2020-12-01)
The Predictive Processing Theory (PP) and its foundational Free Energy Principle (FEP) provide a unifying theoretical groundwork that subsumes theories of perception, cognition, and action. Recently, Colin Klein (2018) contends that PP-FEP cannot explain adaptive action with the same force with which they deal with perceptions. In his answer to the objection, Clark (2020) points out that FEP explains action, desire and motivation on the basis of minimisation of energy. I argue that this answer begs the question of...
Reinforcement learning using potential field for role assignment in a multi-robot two-team game
Fidan, Özgül; Erkmen, İsmet; Department of Electrical and Electronics Engineering (2004)
In this work, reinforcement learning algorithms are studied with the help of potential field methods, using robosoccer simulators as test beds. Reinforcement Learning (RL) is a framework for general problem solving where an agent can learn through experience. The soccer game is selected as the problem domain as a way of experimenting with multi-agent team behaviors because of its popularity and complexity.
Citation Formats
IEEE
U. Halıcı, “Reinforcement learning with internal expectation in the random neural networks for cascaded decisions,” BioSystems, pp. 21–34, 2001, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/49314.