Service robotics is currently a pivotal research area in robotics, with enormous social potential. Since service robots interact directly with people, finding natural and intuitive interfaces is of fundamental importance. While past work has predominantly focused on issues such as navigation and manipulation, relatively few robotic systems are equipped with flexible user interfaces that permit controlling the robot by "natural" means.
In this thesis, a visual gesture interface for human-robot interaction was developed and tested on the service robot AMI in a real environment. First, an interaction model combining speech and gesture was designed. Following this model, a fast and robust active face detection algorithm enables the robot to track the operator's face reliably. Next, a hand tracker combining skin color detection and motion segmentation extracts the centroid of the operator's hand from the input image sequence. The resulting 2D hand trajectory is fed to a neural network for gesture classification. Finally, the recognized gesture is interpreted as a control signal that enables the robot to perform the desired action.
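The hand-tracking step above can be illustrated with a minimal sketch. The RGB thresholds below are a generic skin-color rule chosen for illustration, not the calibrated model from the thesis, and the motion-segmentation stage is omitted:

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of skin-colored pixels using a simple RGB rule.

    Thresholds are illustrative, not the thesis's calibrated skin model.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (np.abs(r - g) > 15)

def hand_centroid(rgb):
    """Centroid (row, col) of skin pixels, or None if none are detected."""
    mask = skin_mask(rgb)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 8x8 frame: a skin-colored 2x2 patch on a dark background.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[2:4, 5:7] = (200, 120, 90)   # skin-like pixels
print(hand_centroid(frame))        # → (2.5, 5.5)
```

Running this per frame yields the sequence of centroids that forms the 2D hand trajectory passed to the classifier.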
Through the experiments, we investigate three basic features for gesture recognition: location, angle, and velocity. The results show that the proposed system handles complex backgrounds, variable lighting conditions, and different users. Furthermore, a recognition rate of 96.32% was achieved reliably in real time.
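A sketch of how the three features could be computed from a tracked trajectory is shown below. The feature names follow the abstract; the exact definitions and normalization used in the thesis are assumptions here:

```python
import numpy as np

def trajectory_features(points):
    """Per-segment features from a 2D hand trajectory.

    For each pair of consecutive centroids this returns:
      location - midpoint of the segment,
      angle    - direction of motion in radians,
      velocity - segment length (pixels per frame at a fixed frame rate).
    The precise formulation in the thesis may differ; this is illustrative.
    """
    pts = np.asarray(points, dtype=float)
    d = np.diff(pts, axis=0)                 # frame-to-frame displacement
    location = (pts[:-1] + pts[1:]) / 2.0    # segment midpoints
    angle = np.arctan2(d[:, 1], d[:, 0])     # motion direction
    velocity = np.hypot(d[:, 0], d[:, 1])    # speed in pixels per frame
    return location, angle, velocity

traj = [(0, 0), (3, 4), (6, 4)]              # three tracked centroids
loc, ang, vel = trajectory_features(traj)
print(vel)                                   # → [5. 3.]
```

Concatenated over a fixed-length window, such feature vectors would form the input layer of the gesture-classifying neural network.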