Unsupervised Learning of Affordance Relations on a Humanoid Robot
Date
2009-09-16
Author
Akgun, Baris
Dag, Nilgün
Bilal, Tahir
Atil, Ilkay
Şahin, Erol
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats
211 views, 0 downloads
In this paper, we study how the concepts learned by a robot can be linked to the verbal concepts that humans use in language. Specifically, we develop a simple tapping behaviour on the iCub humanoid robot simulator and allow the robot to interact with a set of objects of different types and sizes to learn affordance relations in its environment. The robot records its perception, obtained from a range camera, as a feature vector before and after tapping an object. We compute effect features by subtracting the initial features from the final features. We cluster the effect features using Kohonen self-organizing maps to generate a set of effect categories in an unsupervised fashion. We analyze the clusters using the types and sizes of the objects that fall into each effect cluster, as well as the success/fail labels manually attached to the interactions. The hand labellings and the clusters formed by the robot are found to match. We conjecture that the robot and humans therefore share the same "effect concepts", which could be used in human-robot communication, for example as verbs. Furthermore, we use the ReliefF feature extraction method to determine the initial features that are related to the clustered effects and train a multi-class support vector machine (SVM) classifier to learn the mapping between the relevant initial features and the effect categories. The results show that 1) despite the lack of supervision, the effect clusters tend to be homogeneous in terms of success/fail, 2) the relevant features consist mainly of shape, but not size, 3) the number of relevant features remains approximately constant with respect to the number of effect clusters formed, and 4) the SVM classifier can successfully learn the effect categories from the relevant features.
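The abstract describes a three-stage pipeline: effect features are the difference between post- and pre-interaction perception, a Kohonen self-organizing map clusters them into effect categories, and a feature-selection step plus a multi-class SVM maps relevant initial features to those categories. The following is a minimal Python sketch of that pipeline, not the authors' implementation: the feature dimensions and data are illustrative, a small hand-rolled 1-D SOM stands in for whatever SOM implementation the paper used, and scikit-learn's mutual_info_classif is used as a stand-in for the ReliefF feature selection named in the abstract.

```python
# Sketch of the effect-clustering / effect-prediction pipeline described in the abstract.
# Assumptions (not from the paper): NumPy feature arrays, a toy 1-D Kohonen SOM,
# mutual information instead of ReliefF, and scikit-learn's SVC as the multi-class SVM.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC


class KohonenSOM:
    """Minimal 1-D Kohonen self-organizing map used to cluster effect features."""

    def __init__(self, n_units, n_features, lr=0.5, sigma=1.0, epochs=100, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(n_units, n_features))
        self.lr, self.sigma, self.epochs = lr, sigma, epochs

    def _bmu(self, x):
        # Best-matching unit: the weight vector closest to the sample.
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def fit(self, X):
        n_units = self.weights.shape[0]
        for epoch in range(self.epochs):
            decay = 1.0 - epoch / self.epochs
            lr, sigma = self.lr * decay, max(self.sigma * decay, 1e-3)
            for x in X:
                bmu = self._bmu(x)
                # Gaussian neighborhood pulls units near the BMU toward the sample.
                dist = np.abs(np.arange(n_units) - bmu)
                h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
                self.weights += lr * h[:, None] * (x - self.weights)
        return self

    def predict(self, X):
        return np.array([self._bmu(x) for x in X])


def cluster_effects(initial_feats, final_feats, n_clusters=5):
    """Effect features = final - initial perception; cluster them with the SOM."""
    effects = final_feats - initial_feats
    som = KohonenSOM(n_clusters, effects.shape[1]).fit(effects)
    return som.predict(effects)


def train_effect_predictor(initial_feats, effect_labels, n_relevant=10):
    """Pick initial features most relevant to the effect clusters, then fit an SVM."""
    scores = mutual_info_classif(initial_feats, effect_labels, random_state=0)
    relevant = np.argsort(scores)[::-1][:n_relevant]
    svm = SVC(kernel="rbf").fit(initial_feats[:, relevant], effect_labels)
    return svm, relevant


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in data: 200 interactions, 40-dimensional perceptual features.
    before = rng.normal(size=(200, 40))
    after = before + rng.normal(scale=0.1, size=(200, 40))
    rolls = before[:, 0] > 0          # pretend feature 0 encodes a "rollable" shape
    after[rolls, :5] += 2.0           # rollable objects produce a distinct effect
    labels = cluster_effects(before, after, n_clusters=4)
    svm, relevant = train_effect_predictor(before, labels)
    print("effect cluster sizes:", np.bincount(labels))
    print("selected initial features:", relevant)
```

The toy data ties the effect to a single initial feature so that the selection step and the SVM have something to recover; in the paper, the initial features are shape and size descriptors extracted from range-camera images, and the reported finding is that shape, not size, carries the relevant information.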
Subject Keywords
Unsupervised learning, Humanoid robots, Support vector machines, Support vector machine classification, Computational modeling, Human robot interaction, Robot vision systems, Cameras, Self organizing feature maps, Failure analysis
URI
https://hdl.handle.net/11511/56031
Conference Name
24th International Symposium on Computer and Information Sciences
Collections
Department of Computer Engineering, Conference / Seminar
Suggestions
OpenMETU Core
Unsupervised Learning of Object Affordances for Planning in a Mobile Manipulation Platform
Ugur, Emre; Şahin, Erol; Oztop, Erhan (2011-05-13)
In this paper, we use the notion of affordances, proposed in cognitive science, as a framework to propose a developmental method that would enable a robot to ground symbolic planning mechanisms in the continuous sensory-motor experiences of a robot. We propose a method that allows a robot to learn the symbolic relations that pertain to its interactions with the world and show that they can be used in planning. Specifically, the robot interacts with the objects in its environment using a pre-coded repertoire...
The learning and use of traversability affordance using range images on a mobile robot
Ugur, Emre; Dogar, Mehmet R.; Cakmak, Maya; Şahin, Erol (2007-04-14)
We are interested in how the concept of affordances can affect our view to autonomous robot control, and how the results obtained from autonomous robotics can be reflected back upon the discussion and studies on the concept of affordances. In this paper, we studied how a mobile robot, equipped with a 3D laser scanner, can learn to perceive the traversability affordance and use it to wander in a room filled with spheres, cylinders and boxes. The results showed that after learning, the robot can wander around...
The learning of adjectives and nouns from affordance and appearance features
Yürüten, Onur; Şahin, Erol; Kalkan, Sinan (SAGE Publications, 2013-08-22)
We study how a robot can link concepts represented by adjectives and nouns in language with its own sensorimotor interactions. Specifically, an iCub humanoid robot interacts with a group of objects using a repertoire of manipulation behaviors. The objects are labeled using a set of adjectives and nouns. The effects induced on the objects are labeled as affordances, and classifiers are learned to predict the affordances from the appearance of an object. We evaluate three different models for learning adjecti...
Free gait generation with reinforcement learning for a six-legged robot
Erden, Mustafa Suphi; Leblebicioğlu, Mehmet Kemal (Elsevier BV, 2008-03-31)
In this paper the problem of free gait generation and adaptability with reinforcement learning are addressed for a six-legged robot. Using the developed free gait generation algorithm, the robot is able to generate stable gaits according to the commanded velocity. The reinforcement learning scheme incorporated into the free gait generation makes the robot choose more stable states and develop a continuous walking pattern with a larger average stability margin. While walking in normal conditions with no ext...
Learning adjectives and nouns from affordances on the iCub humanoid robot
Yürüten, Onur; Uyanik, Kadir Firat; Çalışkan, Yiğit; Bozcuoğlu, Asil Kaan; Şahin, Erol; Kalkan, Sinan (2012-09-14)
This article studies how a robot can learn nouns and adjectives in language. Towards this end, we extended a framework that enabled robots to learn affordances from its sensorimotor interactions, to learn nouns and adjectives using labeling from humans. Specifically, an iCub humanoid robot interacted with a set of objects (each labeled with a set of adjectives and a noun) and learned to predict the effects (as labeled with a set of verbs) it can generate on them with its behaviors. Different from appearance...
Citation Formats
IEEE
B. Akgun, N. Dag, T. Bilal, I. Atil, and E. Şahin, “Unsupervised Learning of Affordance Relations on a Humanoid Robot,” presented at the 24th International Symposium on Computer and Information Sciences, Guzelyurt, CYPRUS, 2009, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/56031.