Hierarchical incremental context modeling on robots

2017
Doğan, Fethiye Irmak
Context is crucial for robots to adapt to their circumstances and to fulfill their tasks accordingly. There have been many studies on modeling context on robots; however, these studies either do not construct an incremental and hierarchical structure (i.e., they use a fixed number of contexts and context layers) or determine the necessity of adding a new context with rule-based approaches. In this thesis, we propose two different methods to model context. In the first method, we extend Restricted Boltzmann Machines, a generative associative model, by incrementing the number of contexts and context layers when needed. This model constructs hierarchical and incremental contextual representations by considering the confidence of the objects and contexts after each newly encountered scene. Moreover, this deep incremental model obtains better or on-par results compared to the incremental and non-incremental models in the literature on different tasks. In the second method, in contrast to our first method and the methods in the literature, determining the necessity of adding a new context is formulated as a learning problem. To this end, the Latent Dirichlet Allocation (LDA) model is used to generate data with a known number of contexts. The intermediate LDA models, with or without the correct number of contexts, are then fed to a recurrent model that is trained to predict whether to add a new context. Our analyses on artificial and real datasets demonstrate that such a learning-based approach generalizes well and is a promising way to solve such incremental problems.
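The core idea behind the first method, growing the context representation when the model is not confident about a newly encountered scene, can be sketched as follows. This is a toy NumPy illustration, not the thesis implementation: the class name, the confidence measure (maximum hidden activation), and the threshold value are all assumptions made for the example. It treats an RBM's hidden units as contexts and appends a new unit when no existing context responds confidently:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class IncrementalRBM:
    """Toy RBM whose hidden units stand in for contexts; a new context
    unit is appended when confidence on a scene is low. Illustrative
    only -- names and thresholds are not from the thesis."""

    def __init__(self, n_visible, n_hidden, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden (context) bias

    def hidden_probs(self, v):
        # Probability of each context unit being active for scene v.
        return sigmoid(v @ self.W + self.c)

    def confidence(self, v):
        # Confidence of the most active context for scene v (an
        # assumed proxy for the thesis's confidence measure).
        return self.hidden_probs(v).max()

    def maybe_add_context(self, v, threshold=0.6):
        # Append a new context unit when no existing context is confident.
        if self.confidence(v) < threshold:
            new_col = 0.01 * self.rng.standard_normal((self.W.shape[0], 1))
            self.W = np.hstack([self.W, new_col])
            self.c = np.append(self.c, 0.0)
            return True
        return False

rbm = IncrementalRBM(n_visible=5, n_hidden=2)
scene = np.ones(5)
rbm.maybe_add_context(scene)  # low confidence -> a third context is added
```

The same grow-when-uncertain test could, in principle, also trigger adding a whole new context layer; here only the within-layer case is sketched.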
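For the second method, the first step described above is generating data from LDA with a known number of contexts, so that a recurrent model can later be trained to decide whether a context is missing. A minimal sketch of that generative step in NumPy might look like the following; the function name, hyperparameters, and Dirichlet priors are assumptions for illustration, not the thesis's actual setup:

```python
import numpy as np

def sample_lda_corpus(n_docs, doc_len, n_contexts, vocab_size,
                      alpha=0.5, beta=0.1, seed=0):
    """Sample a synthetic corpus from the LDA generative process with a
    known number of contexts (topics). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    # One word distribution per context.
    phi = rng.dirichlet(np.full(vocab_size, beta), size=n_contexts)
    docs = []
    for _ in range(n_docs):
        theta = rng.dirichlet(np.full(n_contexts, alpha))  # context mixture
        z = rng.choice(n_contexts, size=doc_len, p=theta)  # context per word
        words = np.array([rng.choice(vocab_size, p=phi[k]) for k in z])
        docs.append(words)
    return docs

# Corpus whose true number of contexts (3) is known by construction.
corpus = sample_lda_corpus(n_docs=4, doc_len=10, n_contexts=3, vocab_size=20)
```

Because the true number of contexts is known by construction, intermediate LDA models fit with too few contexts can be labeled "add a new context" and the rest "do not", giving supervised training data for the recurrent decision model.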