EXTRACTING EXPLICIT RELATIONAL INFORMATION FROM A NEW RELATIONAL REASONING TESTBED WITH A LEARNING AGENT

2021-07-29
Küçüksubaşı, Faruk
In recent studies, reinforcement learning (RL) agents are specialized for particular tasks, and most of the time their decision-making logic is not interpretable. By using symbolic artificial intelligence techniques such as logic programming, statistical methods-based agent algorithms can be enhanced in terms of generalizability and interpretability. In this study, the PrediNet architecture is used for the first time in an RL problem, and the multi-head dot-product attention network (MHDPA) is used for benchmarking. With the PrediNet module, relational information among the objects in the environment can be extracted explicitly; this information is in a form that can be processed by logic programming tools, and the network becomes more interpretable. To measure the relational information extraction performance of these two methods, a new test environment, relational-grid-world (RGW), is developed. The RGW environment can be generated procedurally from objects with different features, pushing the agent to make complex combinatorial selections. In the tests, a baseline environment called Box-World is used alongside RGW to compare the two environments and the two networks separately. The results show that the MHDPA and PrediNet architectures have similar performances in both environments, and that the RGW environment is able to measure the relational reasoning capacity of the networks.
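The MHDPA baseline mentioned above can be summarized as scaled dot-product attention applied over a set of entity vectors, with several heads attending in parallel. The following is a minimal NumPy sketch of that computation; the function name, shapes, and the use of random (untrained) projection weights are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def mhdpa(x, num_heads, rng):
    """Multi-head dot-product attention over a set of entity vectors.

    x: (n_entities, d_model) array; each row represents one object.
    Projection weights are random here purely for illustration --
    in a real model they are learned parameters.
    """
    n, d_model = x.shape
    d_head = d_model // num_heads
    Wq = rng.standard_normal((num_heads, d_model, d_head))
    Wk = rng.standard_normal((num_heads, d_model, d_head))
    Wv = rng.standard_normal((num_heads, d_model, d_head))

    outputs = []
    for h in range(num_heads):
        q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]      # (n, d_head) each
        scores = q @ k.T / np.sqrt(d_head)             # pairwise compatibilities
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)             # softmax over entities
        outputs.append(w @ v)                          # attend over all entities
    return np.concatenate(outputs, axis=-1)            # (n, d_model)

rng = np.random.default_rng(0)
entities = rng.standard_normal((5, 8))  # 5 objects, 8-dim features each
out = mhdpa(entities, num_heads=2, rng=rng)
print(out.shape)  # (5, 8)
```

Each output row mixes information from every other entity, which is what lets the agent reason about pairwise relations between objects in the grid.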

Suggestions

Relational-Grid-World: A Novel Relational Reasoning Environment and An Agent Model for Relational Information Extraction
Kucuksubasi, Faruk; Sürer, Elif (2020-07-01)
Reinforcement learning (RL) agents are often designed specifically for a particular problem and they generally have uninterpretable working processes. Statistical methods-based agent algorithms can be improved in terms of generalizability and interpretability using symbolic Artificial Intelligence (AI) tools such as logic programming. In this study, we present a model-free RL architecture that is supported with explicit relational representations of the environmental objects. For the first time, we use the ...
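The explicit relational representations described in this abstract come from PrediNet-style heads: each head soft-selects a pair of entities via key-query attention and emits a vector derived from the difference of their projections. The sketch below is a heavily simplified, single-head illustration under that assumption; all weight names and dimensions are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predinet_head(entities, Wk, Wq1, Wq2, Ws):
    """One PrediNet-style head (simplified sketch).

    Soft-selects two entities with key-query attention, then outputs
    a projection of their difference -- an explicit relation vector
    describing how the selected pair differs.
    """
    keys = entities @ Wk                     # (n, d_key)
    e1 = softmax(Wq1 @ keys.T) @ entities    # soft-selected first entity
    e2 = softmax(Wq2 @ keys.T) @ entities    # soft-selected second entity
    return (e1 - e2) @ Ws                    # relation features for the pair

rng = np.random.default_rng(1)
n, d, d_key, d_rel = 5, 8, 4, 3
ents = rng.standard_normal((n, d))           # 5 objects, 8-dim features
rel = predinet_head(
    ents,
    Wk=rng.standard_normal((d, d_key)),
    Wq1=rng.standard_normal((1, d_key)),
    Wq2=rng.standard_normal((1, d_key)),
    Ws=rng.standard_normal((d, d_rel)),
)
print(rel.shape)  # (1, 3)
```

Because each head names a specific pair of entities and a specific difference, its outputs can be read as propositions about object pairs, which is what makes them usable by downstream logic programming tools.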
Using Generative Adversarial Nets on Atari Games for Feature Extraction in Deep Reinforcement Learning
Aydın, Ayberk; Sürer, Elif (2020-04-01)
Deep Reinforcement Learning (DRL) has been successfully applied in several research domains such as robot navigation and automated video game playing. However, these methods require excessive computation and interaction with the environment, so enhancements on sample efficiency are required. The main reason for this requirement is that sparse and delayed rewards do not provide an effective supervision for representation learning of deep neural networks. In this study, Proximal Policy...
Using Multi-Agent Reinforcement Learning in Auction Simulations
Kanmaz, Medet; Sürer, Elif (2020-04-01)
Game theory has been developed by scientists as a theory of strategic interaction among players who are supposed to be perfectly rational. These strategic interactions might appear in an auction, a business negotiation, a chess game, or even in a political conflict between different agents. In this study, the strategic (rational) agents created by reinforcement learning algorithms serve as bidder agents in various types of auction mechanisms such as British Auction, Sealed Bid...
Using chains of bottleneck transitions to decompose and solve reinforcement learning tasks with hidden states
Aydın, Hüseyin; Çilden, Erkin; Polat, Faruk (2022-08-01)
Reinforcement learning is known to underperform in large and ambiguous problem domains under partial observability. In such cases, a proper decomposition of the task can improve and accelerate the learning process. Even ambiguous and complex problems that are not solvable by conventional methods turn out to be easier to handle by using a convenient problem decomposition, followed by the incorporation of machine learning methods for the sub-problems. Like in most real-life problems, the decomposition of a ta...
Improving reinforcement learning using distinctive clues of the environment
Demir, Alper; Polat, Faruk; Department of Computer Engineering (2019)
Effective decomposition and abstraction have been shown to improve the performance of Reinforcement Learning. An agent can use clues from the environment either to partition the problem into sub-problems or to get informed about its progress in a given task. In a fully observable environment such clues may come from subgoals, while in a partially observable environment they may be provided by unique experiences. The contribution of this thesis is twofold: first, improvements over automatic subgoal identifica...
Citation Formats
F. Küçüksubaşı, “EXTRACTING EXPLICIT RELATIONAL INFORMATION FROM A NEW RELATIONAL REASONING TESTBED WITH A LEARNING AGENT,” M.S. - Master of Science, Middle East Technical University, 2021.