Free gait generation with reinforcement learning for a six-legged robot

2008-03-31
Erden, Mustafa Suphi
Leblebicioğlu, Mehmet Kemal
In this paper, the problems of free gait generation and adaptability with reinforcement learning are addressed for a six-legged robot. Using the developed free gait generation algorithm, the robot continually generates stable gaits according to the commanded velocity. The reinforcement learning scheme incorporated into the free gait generation makes the robot choose more stable states and develop a continuous walking pattern with a larger average stability margin. While walking in normal conditions with no external effects causing instability, the robot is guaranteed a stable walk, and the reinforcement learning only improves the stability. The adaptability of the learning scheme is also tested for the abnormal case of a deficiency in one of the rear legs. The robot gets a negative reinforcement when it falls and a positive reinforcement when a stable transition is achieved. In this way the robot learns to achieve a continuous pattern of stable walking with five legs. The developed free gait generation with reinforcement learning is applied in real time on the actual robot, both for normal walking at different speeds and for learning five-legged walking in the abnormal case.
ROBOTICS AND AUTONOMOUS SYSTEMS
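The paper itself does not publish code; as a rough illustration of the reward scheme the abstract describes (negative reinforcement on a fall, positive reinforcement on a stable transition, stability measured by a static stability margin), the sketch below assumes a tabular value update over discrete support states. The function and parameter names (stability_margin, candidate_states, ALPHA, GAMMA) and the reward values are hypothetical, not taken from the paper.

```python
import numpy as np
from collections import defaultdict

# Assumed illustration only: a "state" is the set of legs currently on the
# ground; values are learned per state transition with a Q-learning-style
# update. Rewards follow the scheme in the abstract: -1 on a fall, +1 on a
# stable transition (positive stability margin).

ALPHA = 0.1   # learning rate (assumed value)
GAMMA = 0.9   # discount factor (assumed value)

def stability_margin(com_xy, support_xy):
    """Static stability margin: distance from the projected centre of mass
    to the nearest edge of the support polygon (vertices assumed convex and
    ordered counter-clockwise). Non-positive means statically unstable."""
    if len(support_xy) < 3:
        return 0.0  # fewer than three supporting legs: no support polygon
    margin = float("inf")
    pts = [np.asarray(p, dtype=float) for p in support_xy]
    com = np.asarray(com_xy, dtype=float)
    for i, p in enumerate(pts):
        q = pts[(i + 1) % len(pts)]
        edge = q - p
        inward = np.array([-edge[1], edge[0]]) / np.linalg.norm(edge)
        margin = min(margin, float(np.dot(com - p, inward)))
    return margin

q_values = defaultdict(float)  # (state, next_state) -> learned value

def candidate_states(state):
    """Placeholder: enumerate support states reachable from `state`."""
    return [state]

def update(state, next_state, fell, margin):
    """One reinforcement step for the transition state -> next_state."""
    reward = -1.0 if fell else (1.0 if margin > 0.0 else 0.0)
    best_next = max(
        (q_values[(next_state, s)] for s in candidate_states(next_state)),
        default=0.0,
    )
    key = (state, next_state)
    q_values[key] += ALPHA * (reward + GAMMA * best_next - q_values[key])
```

The same idea extends to the five-legged case: once a leg becomes unusable, the set of candidate support states shrinks, and the negative reinforcement on falls steers the learned values toward transitions that keep the remaining legs inside a stable pattern.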

Suggestions

Positive impact of state similarity on reinforcement learning performance
Girgin, Sertan; Polat, Faruk; Alhajj, Reda (Institute of Electrical and Electronics Engineers (IEEE), 2007-10-01)
In this paper, we propose a novel approach to identify states with similar subpolicies and show how they can be integrated into the reinforcement learning framework to improve learning performance. The method utilizes a specialized tree structure to identify common action sequences of states, which are derived from possible optimal policies, and defines a similarity function between two states based on the number of such sequences. Using this similarity function, updates on the action-value function of a st...
Multiagent reinforcement learning using function approximation
Abul, O; Polat, Faruk; Alhajj, R (Institute of Electrical and Electronics Engineers (IEEE), 2000-11-01)
Learning in a partially observable and nonstationary environment is still one of the challenging problems in the area of multiagent (MA) learning. Reinforcement learning is a generic method that suits the needs of MA learning in many aspects. This paper presents two new multiagent-based, domain-independent coordination mechanisms for reinforcement learning; multiple agents do not require explicit communication among themselves to learn coordinated behavior. The first coordination mechanism is perceptual coor...
Novel solutions for Global Urban Localization
DOĞRUER, CAN ULAŞ; Koku, Ahmet Buğra; Dölen, Melik (Elsevier BV, 2010-05-31)
In this study, novel solutions to the Global Urban Localization problem are proposed and examined rigorously. Classical approaches, including the Particle Filter and mixture of Gaussians, as well as novel solutions such as the Viterbi Algorithm and differential evolution, are evaluated. The contribution of this paper is twofold: the Viterbi algorithm is extended by exploiting the structure of the problem at hand, namely that the states are partially connected temporally. Differential evolution is modified by taking into account the ...
Dynamic modeling and parameter estimation for traction, rolling, and lateral wheel forces to enhance mobile robot trajectory tracking
BAYAR, Gokhan; Koku, Ahmet Buğra; Konukseven, Erhan İlhan (Cambridge University Press (CUP), 2015-12-01)
Studying wheel and ground interaction during motion has the potential to increase the performance of localization, navigation, and trajectory tracking control of a mobile robot. In this paper, a differential mobile robot is modeled so that wheel forces (traction, rolling, and lateral) are included in the overall system dynamics. Lateral wheel forces are included in the mathematical model together with traction and rolling forces. A least-squares parameter estimation process is proposed to estimate the ...
COSMO: Contextualized scene modeling with Boltzmann Machines
Bozcan, Ilker; Kalkan, Sinan (Elsevier BV, 2019-03-01)
Scene modeling is crucial for robots that need to perceive, reason about, and manipulate the objects in their environments. In this paper, we adapt and extend Boltzmann Machines (BMs) for contextualized scene modeling. Although there are many models on the subject, ours is the first to bring together objects, relations, and affordances in a highly capable generative model. To this end, we introduce a hybrid version of BMs where relations and affordances are incorporated with shared, tri-way connections...
Citation Formats
M. S. Erden and M. K. Leblebicioğlu, “Free gait generation with reinforcement learning for a six-legged robot,” ROBOTICS AND AUTONOMOUS SYSTEMS, pp. 199–212, 2008, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/41763.