Relational-Grid-World: A Novel Relational Reasoning Environment and An Agent Model for Relational Information Extraction
Date
2020-07-01
Author
Kucuksubasi, Faruk
Sürer, Elif
Item Usage Stats
186 views, 0 downloads
Abstract
Reinforcement learning (RL) agents are often designed for one particular problem, and their working processes are generally not interpretable. Agent algorithms based on statistical methods can be improved in terms of generalizability and interpretability using symbolic Artificial Intelligence (AI) tools such as logic programming. In this study, we present a model-free RL architecture that is supported with explicit relational representations of the environmental objects. For the first time, we use the PrediNet network architecture in a dynamic decision-making problem rather than in image-based tasks, with the Multi-Head Dot-Product Attention network (MHDPA) as a baseline for performance comparisons. We tested the two networks in two environments: the baseline Box-World environment and our novel environment, Relational-Grid-World (RGW). The procedurally generated RGW environment, which is complex in terms of visual perception and combinatorial selection, makes it easy to measure the relational representation performance of RL agents. The experiments were carried out with different configurations of the environment so that the presented module and the environment could be compared against the baselines. We reached policy optimization performance with the PrediNet architecture similar to that of MHDPA; additionally, we were able to extract the propositional representation explicitly, which makes the agent's statistical policy logic more interpretable and tractable. This flexibility in the agent's policy makes it convenient to design non-task-specific agent architectures. The main contributions of this study are two-fold: an RL agent that can explicitly perform relational reasoning, and a new environment that measures the relational reasoning capabilities of RL agents.
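As a rough illustration of the kind of relational module the abstract describes, the sketch below implements a single PrediNet-style head in PyTorch. It is a simplified reconstruction, not the authors' implementation: the class name, dimensions, and the use of a mean-pooled summary to form the two queries (the original PrediNet derives them from the flattened object set) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrediNetHead(nn.Module):
    """Single PrediNet-style head (simplified sketch, not the paper's code).

    Takes a set of N object vectors (features with a 2-D position appended)
    and returns an explicit relation vector between two soft-selected objects.
    """
    def __init__(self, obj_dim: int, key_dim: int = 16, n_relations: int = 4):
        super().__init__()
        self.to_keys = nn.Linear(obj_dim, key_dim, bias=False)
        # Two queries, one for each of the two objects this head attends to.
        self.to_query1 = nn.Linear(obj_dim, key_dim, bias=False)
        self.to_query2 = nn.Linear(obj_dim, key_dim, bias=False)
        # Shared embedding; element-wise differences form the relation vector.
        self.embed = nn.Linear(obj_dim, n_relations, bias=False)

    def forward(self, objects: torch.Tensor) -> torch.Tensor:
        # objects: (B, N, obj_dim); the last two features are assumed to be (x, y).
        keys = self.to_keys(objects)                      # (B, N, key_dim)
        pooled = objects.mean(dim=1)                      # (B, obj_dim) summary used for the queries
        q1 = self.to_query1(pooled).unsqueeze(1)          # (B, 1, key_dim)
        q2 = self.to_query2(pooled).unsqueeze(1)
        a1 = F.softmax((q1 * keys).sum(-1), dim=-1)       # (B, N) attention over objects
        a2 = F.softmax((q2 * keys).sum(-1), dim=-1)
        o1 = torch.einsum("bn,bnd->bd", a1, objects)      # soft-selected object 1
        o2 = torch.einsum("bn,bnd->bd", a2, objects)      # soft-selected object 2
        relations = self.embed(o1) - self.embed(o2)       # explicit relation features
        positions = torch.cat([o1[:, -2:], o2[:, -2:]], dim=-1)
        return torch.cat([relations, positions], dim=-1)  # (B, n_relations + 4)

# Example: 16 objects, each with 8 features plus an (x, y) position.
# head = PrediNetHead(obj_dim=10)
# out = head(torch.randn(4, 16, 10))   # shape (4, 8)

Several such heads would typically run in parallel, with their outputs concatenated and fed to a small policy/value network; the explicit relation vectors are what make the resulting policy easier to inspect than a purely statistical one.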
URI
https://arxiv.org/abs/2007.05961
https://hdl.handle.net/11511/70900
Collections
Graduate School of Informatics, Article
Suggestions
EXTRACTING EXPLICIT RELATIONAL INFORMATION FROM A NEW RELATIONAL REASONING TESTBED WITH A LEARNING AGENT
Küçüksubaşı, Faruk; Sürer, Elif; Department of Modeling and Simulation (2021-7-29)
In recent studies, reinforcement learning (RL) agents work in ways that are specialized according to the tasks, and most of the time, their decision-making logic is not interpretable. By using symbolic artificial intelligence techniques like logic programming, statistical methods-based agent algorithms can be enhanced in terms of generalizability and interpretability. In this study, the PrediNet architecture is used for the first time in an RL problem, and in order to perform benchmarking, the multi-head do...
Online collaboration: Collaborative behavior patterns and factors affecting globally distributed team performance
Serce, Fatma Cemile; Swigger, Kathleen; Alpaslan, Ferda Nur; Brazile, Robert; Dafoulas, George; Lopez, Victor (2011-01-01)
Studying the collaborative behavior of online learning teams and how this behavior is related to communication mode and task type is a complex process. Research about small group learning suggests that a higher percentage of social interactions occur in synchronous rather than asynchronous mode, and that students spend more time in task-oriented interaction in asynchronous discussions than in synchronous mode. This study analyzed the collaborative interaction patterns of global software development learning...
Reward Shaping for Efficient Exploration and Acceleration of Learning in Reinforcement Learning
Bal, Melis İlayda; İyigün, Cem; Polat, Faruk; Department of Operational Research (2022-7-21)
In a Reinforcement Learning task, a learning agent needs to extract useful information about its uncertain environment in an efficient way during the interaction process to successfully complete the task. Through strategic exploration, the agent acquires sufficient information to adjust its behavior to act intelligently as it interacts with the environment. Therefore, efficient exploration plays a key role in the learning efficiency of Reinforcement Learning tasks. Due to the delayed-feedback nature of Rein...
EMDD-RL: faster subgoal identification with diverse density in reinforcement learning
Sunel, Saim; Polat, Faruk; Department of Computer Engineering (2021-1-15)
Diverse Density (DD) algorithm is a well-known multiple instance learning method, also known to be effective to automatically identify sub-goals and improve Reinforcement Learning (RL). Expectation-Maximization Diverse Density (EMDD) improves DD in terms of both speed and accuracy. This study adapts EMDD to automatically identify subgoals for RL which is shown to perform significantly faster (3 to 10 times) than its predecessor, without sacrificing solution quality. The performance of the proposed method na...
Reinforcement learning using potential field for role assignment in a multi-robot two-team game
Fidan, Özgül; Erkmen, İsmet; Department of Electrical and Electronics Engineering (2004)
In this work, reinforcement learning algorithms are studied with the help of potential field methods, using robosoccer simulators as test beds. Reinforcement Learning (RL) is a framework for general problem solving where an agent can learn through experience. The soccer game is selected as the problem domain, as a way of experimenting with multi-agent team behaviors, because of its popularity and complexity.
Citation Formats
IEEE
F. Kucuksubasi and E. Sürer, “Relational-Grid-World: A Novel Relational Reasoning Environment and An Agent Model for Relational Information Extraction,” 2020, Accessed: 00, 2021. [Online]. Available: https://arxiv.org/abs/2007.05961.