A developmental framework for learning affordances

2010
Uğur, Emre
We propose a developmental framework that enables a robot to learn affordances through interaction with the environment in an unsupervised way and to use these affordances at different levels of robot control, ranging from reactive response to planning. Inspired by developmental psychology, the robot’s discovery of action possibilities is realized in two sequential phases. In the first phase, the robot, which initially possesses a limited number of basic actions and reflexes, discovers new behavior primitives by exercising these actions and by monitoring the changes created in its initially crude perception system. In the second phase, the robot explores a more complicated environment by executing the discovered behavior primitives and using more advanced perception to learn further action possibilities. For this purpose, the robot first discovers commonalities in its action-effect experiences by finding effect categories, and then builds predictors for each behavior that map object features and behavior parameters into effect categories. After learning affordances through self-interaction and self-observation, the robot can make plans to achieve desired goals, emulate the end states of demonstrated actions, monitor plan execution, and take corrective actions using the perceptual structures employed or discovered during learning. Mobile and manipulator robots were used to realize the proposed framework. Similar to infants, these robots were able to form behavior repertoires, learn affordances, and gain prediction capabilities. The learned affordances were shown to be relative to the robots, to provide perceptual economy, and to encode general relations. Additionally, the affordance-based planning ability was verified in various tasks such as table cleaning and object transportation.
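
A minimal sketch of the second-phase learning step described above, assuming NumPy and scikit-learn are available; the class name AffordanceModel and its methods are illustrative, not the thesis implementation. Effect categories are discovered by clustering observed effect vectors, and a per-behavior classifier then maps object features and behavior parameters to an effect category:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

class AffordanceModel:
    """Per-behavior affordance model: effect categories plus an effect predictor."""

    def __init__(self, n_effect_categories=4):
        # Unsupervised step: cluster observed effect vectors into effect categories.
        self.clusterer = KMeans(n_clusters=n_effect_categories, n_init=10)
        # Supervised step: predict the effect category from the pre-action percept.
        self.predictor = SVC()

    def learn(self, object_features, behavior_params, effects):
        # effects: (n_samples, d_effect) change vectors observed after each execution.
        categories = self.clusterer.fit_predict(effects)
        inputs = np.hstack([object_features, behavior_params])
        self.predictor.fit(inputs, categories)
        return categories

    def predict_effect(self, object_features, behavior_params):
        # Returns the predicted effect category and its prototype effect,
        # which a planner can apply to the current percept to simulate the action.
        x = np.hstack([object_features, behavior_params]).reshape(1, -1)
        category = int(self.predictor.predict(x)[0])
        return category, self.clusterer.cluster_centers_[category]

A planner built on such per-behavior models could chain predict_effect calls over the behavior repertoire, adding each prototype effect to the current percept until a desired goal state is reached, roughly mirroring the affordance-based planning described in the abstract.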

Suggestions

GESwarm: Grammatical Evolution for the Automatic Synthesis of Collective Behaviors in Swarm Robotics
Ferrante, Eliseo; Turgut, Ali Emre; Duenez-Guzman, Edgar; Wenseleers, Tom (2013-07-10)
In this paper we propose GESwarm, a novel tool that can automatically synthesize collective behaviors for swarms of autonomous robots through evolutionary robotics. Evolutionary robotics typically relies on artificial evolution to tune the weights of an artificial neural network that is then used as the individual behavior representation. The main drawback of neural networks is that they are very difficult to reverse-engineer, meaning that once a suitable solution is found, it is very difficult to analyze, to ...
Using learned affordances for robotic behavior development
Doğar, Mehmet Remzi; Şahin, Erol; Department of Computer Engineering (2007)
“Developmental robotics” proposes that, instead of trying to build a robot that shows intelligence once and for all, one should build robots that can develop. A robot should go through cognitive development just like an animal baby does. These robots should be equipped with behaviors that are simple but sufficient to bootstrap the system. Then, as the robot interacts with its environment, it should display increasingly complex behaviors. Studies in developmental psychology and neurophysiology provid...
Robot planning based on learned affordances
Çakmak, Maya; Şahin, Erol; Department of Computer Engineering (2007)
This thesis studies how an autonomous robot can learn affordances from its interactions with the environment and use these affordances in planning. It is based on a new formalization of the concept which proposes that affordances are relations that pertain to the interactions of an agent with its environment. The robot interacts with environments containing different objects by executing its atomic actions and learns the different effects it can create, as well as the invariants of the environments that aff...
Development of a social reinforcement learning based aggregation method with a mobile robot swarm
Gür, Emre; Turgut, Ali Emre; Şahin, Erol; Department of Mechanical Engineering (2022-09-09)
In this thesis, the development of a social, reinforcement learning-based aggregation method is covered together with the development of a mobile robot swarm of Kobot-Tracked (Kobot-T) robots. The proposed method is developed to improve efficiency in low robot density swarm environments, especially when the aggregation area is difficult to find. The method is called Social Reinforcement Learning and Landmark-Based Aggregation (SRLA), and it is based on Q-learning. In this method, robots share their Q tables ...
Evolving aggregation behaviors for swarm robotics systems: a systematic case study
Bahçeci, Erkin; Şahin, Erol; Department of Computer Engineering (2005)
Evolutionary methods have been shown to be useful in developing behaviors in robotics. Interest in the use of evolution in swarm robotics is also on the rise. However, when one attempts to use artificial evolution to develop behaviors for a swarm robotic system, one is faced with decisions regarding some parameters of the fitness evaluations and of the genetic algorithm. In this thesis, aggregation behavior is chosen as a case, where performance and scalability of aggregation behaviors of perceptron control...
Citation Formats
E. Uğur, “A developmental framework for learning affordances,” Ph.D. dissertation, Middle East Technical University, 2010.