Recursive Compositional Reinforcement Learning for Continuous Control (Sürekli Kontrol Uygulamaları için Özyinelemeli Bileşimsel Pekiştirmeli Öğrenme)

Tanik, Guven Orkun
Ertekin Bolelli, Şeyda
Compositional and temporal abstraction is key to improving learning and planning in reinforcement learning. Modern real-world control problems call for continuous control domains and for robust, sample-efficient, and explainable control frameworks. We present a framework for recursively composing control skills to solve compositional and progressively complex tasks. The framework promotes reuse of skills and, as a result, adapts quickly to new tasks. The decision tree can be inspected, providing insight into the agent's behavior. Furthermore, the skills can be transferred, modified, or trained independently, which can simplify reward shaping and increase training speed considerably.
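The abstract describes skills composed recursively into an inspectable decision tree. A minimal sketch of that idea, assuming hypothetical `Skill`/`CompositeSkill` classes and toy policies (the paper's actual interfaces are not given in the abstract):

```python
# Illustrative sketch only: class and selector names are assumptions,
# not the paper's actual framework.

class Skill:
    """A leaf skill maps an observation to a low-level continuous action."""
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy

    def act(self, obs):
        return self.policy(obs)

class CompositeSkill(Skill):
    """A node that delegates to one of its child skills; children may
    themselves be composite, yielding a recursive decision tree."""
    def __init__(self, name, selector, children):
        self.name = name
        self.selector = selector      # picks a child index from the observation
        self.children = children

    def act(self, obs):
        child = self.children[self.selector(obs)]
        return child.act(obs)         # recursion bottoms out at a leaf Skill

    def trace(self, obs):
        """Return the path of skill names taken, for explainability."""
        child = self.children[self.selector(obs)]
        tail = child.trace(obs) if isinstance(child, CompositeSkill) else [child.name]
        return [self.name] + tail

# Toy example: a walking skill that switches between balancing and stepping.
balance = Skill("balance", lambda obs: -0.1 * obs)   # corrective action
step = Skill("step", lambda obs: 0.5)                # fixed forward action
walk = CompositeSkill("walk", lambda obs: 0 if abs(obs) > 1.0 else 1,
                      [balance, step])

print(walk.act(2.0))    # large tilt: delegates to 'balance'
print(walk.trace(2.0))  # ['walk', 'balance']
```

Because each sub-skill is a self-contained object, it can be trained, replaced, or transferred independently, which is the reuse property the abstract highlights.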
30th Signal Processing and Communications Applications Conference, SIU 2022


Bipedal Robot Walking by Reinforcement Learning in Partially Observed Environment
Özalp, Uğurcan; Uğur, Ömür; Department of Scientific Computing (2021-8-27)
Deep Reinforcement Learning methods for mechanical control have been successfully applied in many environments and used in place of traditional optimal and adaptive control methods for some complex problems. However, Deep Reinforcement Learning algorithms still face challenges. One is control in partially observable environments: when an agent is not well informed about the environment, it must recover information from past observations. In this thesis, walking of Bipedal Walker Hardcore (Open...
Improving reinforcement learning using distinctive clues of the environment
Demir, Alper; Polat, Faruk; Department of Computer Engineering (2019)
Effective decomposition and abstraction have been shown to improve the performance of Reinforcement Learning. An agent can use clues from the environment either to partition the problem into sub-problems or to be informed about its progress in a given task. In a fully observable environment such clues may come from subgoals, while in a partially observable environment they may be provided by unique experiences. The contribution of this thesis is twofold; first, improvements over automatic subgoal identifica...
Toward Generalization of Automated Temporal Abstraction to Partially Observable Reinforcement Learning
Cilden, Erkin; Polat, Faruk (2015-08-01)
Temporal abstraction for reinforcement learning (RL) aims to decrease learning time by making use of repeated sub-policy patterns in the learning task. Automatic extraction of abstractions during the RL process is difficult and poses many challenges, such as dealing with the curse of dimensionality. Various studies have explored the subject under the assumption that the problem domain is fully observable by the learning agent. Learning abstractions for partially observable RL is a relatively less explored area. In...
2LRL: a two-level multi-agent reinforcement learning algorithm with communication
Erus, Guray; Polat, Faruk; Say, Bilge; Department of Cognitive Sciences (2002)
Learning is a key element of an "intelligent" computational system. In Multi-Agent Systems (MASs), learning involves the acquisition of cooperative behavior in order to satisfy joint goals. Reinforcement Learning (RL) is a promising unsupervised machine learning technique inspired by earlier studies in animal learning. In this thesis, we propose the Two-Level Reinforcement Learning with Communication (2LRL) method, a new RL technique to provide cooperative action selection in a multi-agent environm...
Attention mechanisms for semantic few-shot learning
Baran, Orhun Buğra; Cinbiş, Ramazan Gökberk; İkizler-Cinbiş, Nazlı; Department of Computer Engineering (2021-9-1)
One of the fundamental difficulties in contemporary supervised learning approaches is the dependency on labelled examples. Most state-of-the-art deep architectures, in particular, tend to perform poorly in the absence of large-scale annotated training sets. In many practical problems, however, it is not feasible to construct sufficiently large training sets, especially in problems involving sensitive information or consisting of a large set of fine-grained classes. One of the main topics in machine learning...
Citation Formats
G. O. Tanik and Ş. Ertekin Bolelli, “Recursive Compositional Reinforcement Learning for Continuous Control (Sürekli Kontrol Uygulamaları için Özyinelemeli Bileşimsel Pekiştirmeli Öğrenme),” presented at the 30th Signal Processing and Communications Applications Conference, SIU 2022, Safranbolu, Türkiye, 2022.