Human action recognition for mobile robots

Bayram, Ulya
In this thesis, human action recognition is studied with an application to mobile robots. The robot's movements depend on human actions estimated after human detection, tracking, and action recognition; therefore, appropriate algorithms for each of these tasks are proposed. Moving regions are detected using optical flow vectors computed from consecutive frames. People are located by human detection and tracking; each person is assigned a unique ID, and the coordinates of his or her location are recorded at every frame. The extracted action recognition features are then partitioned using these coordinates so that action recognition can be performed for each person separately. Furthermore, to distinguish between action cycles, recognition is performed over short frame sequences. Two feature extraction methods are selected and compared for action recognition: cuboids and tracklets. The results demonstrate the advantages of tracklets over cuboids. A literature survey is then conducted on the classification of tracklet features. The codebook-based method, the most widely used approach in the action recognition literature, is evaluated experimentally, and its weaknesses in classifying features extracted from short frame sequences are shown. Consequently, a novel classification method based on iterative matching is proposed. The superiority of the proposed method over the codebook-based method is demonstrated through various tests in terms of both accuracy and computation time. With the final tracking and action recognition system, the mobile robot, tested on a Pioneer2 platform, is issued instructions: move forward, turn right or left, or raise an alarm to warn the authorities.
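The per-person partitioning step described above can be sketched as follows: features extracted from a frame are assigned to the tracked person whose recorded coordinates are nearest, so that each person's features are classified independently. This is a minimal illustrative sketch; the function name, the distance threshold, and the use of a simple nearest-center rule are assumptions, not the thesis's exact design.

```python
import numpy as np

def split_features_by_person(feature_xy, feature_desc, person_xy, radius=60.0):
    """Assign extracted features to tracked people by spatial proximity.

    feature_xy:   (N, 2) image coordinates of features in one frame.
    feature_desc: (N, D) feature descriptors (e.g., tracklet descriptors).
    person_xy:    dict mapping person ID -> (2,) tracked coordinates.
    radius:       illustrative cutoff; features farther than this from
                  every person are discarded.

    Returns a dict mapping person ID -> (M_i, D) descriptors near that person.
    """
    ids = list(person_xy)
    centers = np.array([person_xy[i] for i in ids])                   # (P, 2)
    # Pairwise distances between every feature and every tracked person.
    d = np.linalg.norm(feature_xy[:, None] - centers[None], axis=2)   # (N, P)
    nearest = d.argmin(axis=1)
    close = d.min(axis=1) <= radius
    return {pid: feature_desc[(nearest == j) & close]
            for j, pid in enumerate(ids)}
```

Running this per frame and accumulating each ID's descriptors over a short frame sequence yields the per-person feature sets that the classifier operates on.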
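The codebook-based baseline criticized above is, in essence, a bag-of-features pipeline: a visual vocabulary is learned by clustering descriptors, and each sequence is represented as a histogram of codeword occurrences. The sketch below shows that idea with a plain k-means step; the vocabulary size, iteration count, and normalization are illustrative assumptions, not the thesis's exact configuration. With few descriptors per short frame sequence, such histograms become sparse and unreliable, which is the weakness the thesis's iterative matching method targets.

```python
import numpy as np

def build_codebook(features, k, iters=10, seed=0):
    """Learn a visual vocabulary with a simple k-means over descriptors."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest codeword.
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each codeword to the mean of its assigned descriptors.
        for j in range(k):
            pts = features[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bow_histogram(features, centers):
    """Quantize descriptors against the codebook and return an L1-normalized
    histogram of codeword occurrences for one (person, sequence) pair."""
    d = np.linalg.norm(features[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / max(hist.sum(), 1.0)
```

A classifier (e.g., nearest neighbor or an SVM) then compares these histograms; when a short sequence yields only a handful of descriptors, the histogram carries little evidence, illustrating why sequence-length matters for this baseline.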