Upper Body Gesture Recognition for Human-Robot Interaction

This paper proposes a vision-based human-robot interaction system for a mobile robot platform. The robot first searches for a person who wants to interact with it; once it finds a subject, it stops in front of the person and interprets his or her upper body gestures. We represent each gesture as a sequence of body poses, and the robot recognizes four upper body gestures: "Idle", "I love you", "Hello left", and "Hello right". A key pose-based particle filter determines the pose sequence, where key poses are sparsely collected from the pose space and represented by a Pictorial Structure-based upper body model. These key poses are used to build an efficient proposal distribution for the particle filter, so particles are drawn from the key pose-based proposal distribution for effective prediction of the upper body pose. The Viterbi algorithm then estimates the gesture probabilities with a hidden Markov model. Experimental results show the robustness of our upper body tracking and gesture recognition system.
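The key pose-based proposal can be pictured as a simple mixture: most particles follow an ordinary random-walk motion model, while some are re-drawn around the nearest key pose so that prediction concentrates on plausible gesture poses. Below is a minimal Python sketch of that idea only; the function and parameter names (`predict_particles`, `alpha`, `key_sigma`) are hypothetical, poses are treated as plain joint-parameter vectors, and the paper's Pictorial Structure likelihood is not reproduced.

```python
import numpy as np

def predict_particles(particles, key_poses, alpha=0.5,
                      motion_sigma=0.05, key_sigma=0.1, rng=None):
    """Draw new particles from a mixture proposal (illustrative sketch):
    with probability alpha, diffuse around the nearest key pose;
    otherwise, apply a standard random-walk motion model."""
    if rng is None:
        rng = np.random.default_rng()
    new_particles = np.empty_like(particles)
    for i, pose in enumerate(particles):
        if rng.random() < alpha:
            # Key pose-based component: jump toward the closest key pose,
            # which concentrates particles on plausible gesture poses.
            dists = np.linalg.norm(key_poses - pose, axis=1)
            anchor = key_poses[np.argmin(dists)]
            new_particles[i] = anchor + rng.normal(0, key_sigma, pose.shape)
        else:
            # Plain diffusion (random-walk) motion model.
            new_particles[i] = pose + rng.normal(0, motion_sigma, pose.shape)
    return new_particles
```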
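The gesture decoding step uses the standard Viterbi algorithm. The sketch below is a generic log-domain Viterbi for a discrete-observation HMM, assuming observations are quantized pose indices; all names are illustrative rather than taken from the paper.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, observations):
    """Return the most likely state path and its log-probability.
    log_pi: (S,) initial log-probs; log_A: (S, S) transition log-probs;
    log_B: (S, O) emission log-probs; observations: sequence of symbol ids."""
    S = log_pi.shape[0]
    T = len(observations)
    delta = np.full((T, S), -np.inf)   # best path log-prob ending in each state
    psi = np.zeros((T, S), dtype=int)  # back-pointers
    delta[0] = log_pi + log_B[:, observations[0]]
    for t in range(1, T):
        # scores[i, j] = log-prob of best path ending in i, then moving to j.
        scores = delta[t - 1][:, None] + log_A
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(S)] + log_B[:, observations[t]]
    # Trace back from the best final state.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1], float(np.max(delta[-1]))
```

In a setup like the paper's, one would run this once per gesture HMM (one model each for "Idle", "I love you", "Hello left", and "Hello right") and report the gesture whose model yields the highest path log-probability.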
| Authors | Chi-Min Oh, Md. Zahidul Islam, Jun-Sung Lee, Chil-Woo Lee, In So Kweon |
|---|---|
| Conference | 14th International Conference on Human-Computer Interaction: Interaction Techniques and Environments |
| Year | 2011 |
| Month | 07 |