Using learned affordances for robotic behavior development

Doğar, Mehmet Remzi (2007)
“Developmental robotics” proposes that, instead of trying to build a robot that exhibits intelligence once and for all, one should build robots that can develop. A robot should go through cognitive development just as an animal infant does. Such robots should be equipped with behaviors that are simple but sufficient to bootstrap the system. Then, as the robot interacts with its environment, it should display increasingly complex behaviors. Studies in developmental psychology and neurophysiology support the view that animals start with innate, simple behaviors and develop more complex behaviors through the differentiation, sequencing, and combination of these primitive behaviors. In this thesis, we propose such a development scheme for a mobile robot. J. J. Gibson's concept of “affordances” provides the basis of this development scheme, and we use a formalization of affordances to make the robot learn about the dynamics of its interactions with its environment. We show that an autonomous robot can start with pre-coded primitive behaviors and, as it executes these behaviors randomly in an environment, learn the affordance relations between the environment and its behaviors. We then present two ways of using these learned structures to achieve more complex, voluntary behaviors. In the first case, the robot still uses only its pre-coded primitive behaviors, but sequences them so that new, more complex behaviors emerge. In the second case, the robot uses its pre-coded primitive behaviors to create new behaviors.
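The development scheme described above can be illustrated with a minimal Python sketch. It is not the thesis implementation: the class name, the feature and effect representations, and the nearest-neighbour effect predictor are assumptions chosen only to show the general idea of recording (entity, behavior, effect) samples during random exploration and later selecting the primitive behavior whose predicted effect best matches a desired effect.

# Minimal, hypothetical sketch of affordance learning, assuming
# fixed-length numeric entity-feature and effect vectors.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

class AffordanceModel:
    def __init__(self):
        self.samples = {}     # behavior -> list of (entity_features, effect)
        self.predictors = {}  # behavior -> fitted (features -> effect) regressor

    def record(self, behavior, entity_features, effect):
        # Store one interaction sample gathered during random exploration.
        self.samples.setdefault(behavior, []).append(
            (np.asarray(entity_features), np.asarray(effect)))

    def fit(self):
        # Learn the affordance relation (entity, behavior) -> effect,
        # here with one nearest-neighbour regressor per primitive behavior.
        for behavior, data in self.samples.items():
            X = np.stack([features for features, _ in data])
            Y = np.stack([effect for _, effect in data])
            model = KNeighborsRegressor(n_neighbors=min(3, len(data)))
            self.predictors[behavior] = model.fit(X, Y)

    def select_behavior(self, entity_features, goal_effect):
        # Pick the primitive behavior whose predicted effect is closest
        # to the desired effect: a one-step "voluntary" behavior choice.
        x = np.asarray(entity_features).reshape(1, -1)
        g = np.asarray(goal_effect)
        return min(
            self.predictors,
            key=lambda b: np.linalg.norm(self.predictors[b].predict(x)[0] - g))

Under these assumptions, sequencing primitive behaviors toward a goal, as in the first case above, would amount to calling select_behavior repeatedly as the perceived entity features change after each execution.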

Suggestions

Using learned affordances for robotic behavior development
Doğar, Mehmet R.; Uğur, Emre; Şahin, Erol; Çakmak, Maya (2008-09-18)
“Developmental robotics” proposes that, instead of trying to build a robot that shows intelligence once and for all, what one must do is to build robots that can develop. These robots should be equipped with behaviors that are simple but enough to bootstrap the system. Then, as the robot interacts with its environment, it should display increasingly complex behaviors. In this paper, we propose such a development scheme for a mobile robot. J.J. Gibson’s concept of “affordances” provides the basis of this dev...
Vision-based robot localization using artificial and natural landmarks
Arıcan, Zafer; Halıcı, Uğur; Department of Electrical and Electronics Engineering (2004)
In mobile robot applications, it is an important issue for a robot to know where it is. Accurate localization becomes crucial for navigation and map building applications because both route to follow and positions of the objects to be inserted into the map highly depend on the position of the robot in the environment. For localization, the robot uses the measurements that it takes by various devices such as laser rangefinders, sonars, odometry devices and vision. Generally these devices give the distances o...
Evolving aggregation behaviors for swarm robotics systems: a systematic case study
Bahçeci, Erkin; Şahin, Erol; Department of Computer Engineering (2005)
Evolutionary methods have been shown to be useful for developing behaviors in robotics, and interest in using evolution in swarm robotics is also on the rise. However, when one attempts to use artificial evolution to develop behaviors for a swarm robotic system, one faces decisions regarding the parameters of the fitness evaluations and of the genetic algorithm. In this thesis, aggregation behavior is chosen as a case, where performance and scalability of aggregation behaviors of perceptron control...
Robot planning based on learned affordances
Çakmak, Maya; Şahin, Erol; Department of Computer Engineering (2007)
This thesis studies how an autonomous robot can learn affordances from its interactions with the environment and use these affordances in planning. It is based on a new formalization of the concept which proposes that affordances are relations that pertain to the interactions of an agent with its environment. The robot interacts with environments containing different objects by executing its atomic actions and learns the different effects it can create, as well as the invariants of the environments that aff...
Reinforcement learning using potential field for role assignment in a multi-robot two-team game
Fidan, Özgül; Erkmen, İsmet; Department of Electrical and Electronics Engineering (2004)
In this work, reinforcement learning algorithms are studied with the help of potential field methods, using robosoccer simulators as test beds. Reinforcement Learning (RL) is a framework for general problem solving in which an agent can learn through experience. The soccer game is selected as the problem domain, as a way of experimenting with multi-agent team behaviors, because of its popularity and complexity.
Citation Formats
M. R. Doğar, “Using learned affordances for robotic behavior development,” M.S. - Master of Science, Middle East Technical University, 2007.