Multiagent reinforcement learning using function approximation
Date: 2000-11-01
Author: Abul, O.; Polat, Faruk; Alhajj, R.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Learning in a partially observable and nonstationary environment is still one of the challenging problems in the area of multiagent (MA) learning. Reinforcement learning is a generic method that suits the needs of MA learning in many aspects. This paper presents two new multiagent-based, domain-independent coordination mechanisms for reinforcement learning; multiple agents do not require explicit communication among themselves to learn coordinated behavior. The first is the perceptual coordination mechanism, in which other agents are included in state descriptions and coordination information is learned from state transitions. The second is the observing coordination mechanism, which also includes other agents in state descriptions and, additionally, observes the rewards of nearby agents from the environment. The observed rewards and the agent's own reward are used to construct an optimal policy. This way, the latter mechanism tends to increase region-wide joint rewards. The selected experimental domain is the adversarial food-collecting world (AFCW), which can be configured both as a single-agent and as a multiagent environment. Function approximation and generalization techniques are used because of the huge state space. Experimental results show the effectiveness of these mechanisms.
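The two mechanisms lend themselves to a compact sketch. The Python below is a minimal, illustrative rendering under stated assumptions, not the authors' implementation: the constants (N_FEATURES, N_ACTIONS, ALPHA, GAMMA, EPS, BETA) and the hashed one-hot feature coding are all hypothetical. It shows an independent Q-learner with linear function approximation whose state features include other agents (the perceptual mechanism) and whose update can fold in neighbors' observed rewards (the observing mechanism).

import numpy as np

N_FEATURES = 64        # size of the linear feature encoding (assumed)
N_ACTIONS = 4          # e.g. four movement directions (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
BETA = 0.5             # weight on neighbors' observed rewards (assumed)

def features(own_pos, food_pos, others):
    """Perceptual coordination: other agents appear in the state
    description, so coordination is learned from state transitions.
    The coarse hashed one-hot coding below is purely illustrative."""
    phi = np.zeros(N_FEATURES)
    for i, (x, y) in enumerate([own_pos, food_pos] + list(others)):
        phi[(7 * x + 13 * y + 31 * i) % N_FEATURES] = 1.0
    return phi

class Agent:
    """Independent learner: Q(s, a) is approximated as w[a] . phi(s)."""
    def __init__(self):
        self.w = np.zeros((N_ACTIONS, N_FEATURES))

    def q(self, phi):
        return self.w @ phi                      # vector of Q-values

    def act(self, phi, rng):
        if rng.random() < EPS:                   # epsilon-greedy exploration
            return int(rng.integers(N_ACTIONS))
        return int(np.argmax(self.q(phi)))

    def update(self, phi, a, own_r, neighbor_rs, phi_next):
        # Observing coordination: blend the agent's own reward with
        # rewards observed from nearby agents, nudging the learned
        # policy toward region-wide joint reward.
        r = own_r + BETA * sum(neighbor_rs)
        target = r + GAMMA * np.max(self.q(phi_next))
        td_error = target - self.q(phi)[a]
        self.w[a] += ALPHA * td_error * phi      # semi-gradient Q update

# Minimal usage with dummy data:
rng = np.random.default_rng(0)
agent = Agent()
phi = features((1, 2), (4, 4), [(2, 2)])
a = agent.act(phi, rng)
agent.update(phi, a, own_r=1.0, neighbor_rs=[0.5],
             phi_next=features((1, 3), (4, 4), [(2, 3)]))

With BETA = 0 the update uses only the agent's own reward, which in this sketch reduces the observing mechanism to the purely perceptual one; the two differ only in whether nearby agents' rewards enter the temporal-difference target.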
Subject Keywords: Control and Systems Engineering, Human-Computer Interaction, Electrical and Electronic Engineering, Software, Information Systems, Computer Science Applications
URI: https://hdl.handle.net/11511/36505
Journal: IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)
DOI: https://doi.org/10.1109/5326.897075
Collections: Department of Computer Engineering, Article
Suggestions
Positive impact of state similarity on reinforcement learning performance
Girgin, Sertan; Polat, Faruk; Alhajj, Reda (Institute of Electrical and Electronics Engineers (IEEE), 2007-10-01)
In this paper, we propose a novel approach to identify states with similar subpolicies and show how they can be integrated into the reinforcement learning framework to improve learning performance. The method utilizes a specialized tree structure to identify common action sequences of states, which are derived from possible optimal policies, and defines a similarity function between two states based on the number of such sequences. Using this similarity function, updates on the action-value function of a st...
Free gait generation with reinforcement learning for a six-legged robot
Erden, Mustafa Suphi; Leblebicioğlu, Mehmet Kemal (Elsevier BV, 2008-03-31)
In this paper the problem of free gait generation and adaptability with reinforcement learning is addressed for a six-legged robot. Using the developed free gait generation algorithm the robot manages to generate stable gaits according to the commanded velocity. The reinforcement learning scheme incorporated into the free gait generation makes the robot choose more stable states and develop a continuous walking pattern with a larger average stability margin. While walking in normal conditions with no ext...
A pattern classification approach for boosting with genetic algorithms
Yalabık, Ismet; Yarman Vural, Fatoş Tunay; Üçoluk, Göktürk; Şehitoğlu, Onur Tolga (2007-11-09)
Ensemble learning is a multiple-classifier machine learning approach which produces collections (ensembles) of statistical classifiers to build a more accurate classifier than the individual ones. Bagging, boosting and voting methods are the basic examples of ensemble learning. In this study, a novel boosting technique targeting the partial problems of AdaBoost, a well-known boosting algorithm, is proposed. The proposed system finds an elegant way of boosting a bunch of classifiers successively ...
Social argumentation in online synchronous communication
Alagoz, Esra (Springer Science and Business Media LLC, 2013-12-01)
The ability to argue well is a valuable skill for students in both formal and informal learning environments. While many studies have explored the argumentative practices in formal environments and some researchers have developed tools to enhance the argumentative skills, the social argumentation that is occurring in informal spaces has yet to be broadly investigated. The challenges associated with observing and capturing the interactions in authentic settings can be identified as the main reasons for this ...
FRACTAL SET-THEORETIC ANALYSIS OF PERFORMANCE LOSSES FOR TUNING TRAINING DATA IN LEARNING-SYSTEMS
Erkmen, Aydan Müşerref (1992-08-28)
This paper focuses on the evaluation of learning performance in intelligent dynamic processes with supervised learning. Learning dynamics are characterized by basins of attraction generated by state transitions in control space (state space + parameter space). State uncertainty is modelled as a cellular control space, namely the cell space. Learning performance losses are related to nonseparable basins of attraction with fuzzy boundaries and to their erosion under parameter changes. Basin erosions are ana...
Citation (IEEE)
O. Abul, F. Polat, and R. Alhajj, “Multiagent reinforcement learning using function approximation,” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), pp. 485–497, 2000, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/36505.