DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Yang, Hyun-Seung | - |
dc.contributor.advisor | 양현승 | - |
dc.contributor.author | Shang, Yu-Liang | - |
dc.contributor.author | 상유량 | - |
dc.date.accessioned | 2011-12-13T06:04:46Z | - |
dc.date.available | 2011-12-13T06:04:46Z | - |
dc.date.issued | 2005 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=243800&flag=dissertation | - |
dc.identifier.uri | http://hdl.handle.net/10203/34646 | - |
dc.description | Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST) : Department of Computer Science, 2005.2, [ v, 48, vi p. ] | - |
dc.description.abstract | Service robotics is currently a pivotal research area in robotics, with enormous social potential. Since service robots interact directly with people, finding natural and intuitive interfaces is of fundamental importance. While past work has predominantly focused on issues such as navigation and manipulation, relatively few robotic systems are equipped with flexible user interfaces that permit controlling the robot by "natural" means. In this thesis, a visual gesture interface was developed for human-robot interaction and tested on the service robot AMI in a real environment. First, an interaction model combining speech and gesture was designed. Following this interaction model, a fast and robust active face detection algorithm enables the robot to track the operator's face reliably. Next, a hand tracker combining skin-color detection and motion segmentation extracts the centroid of the operator's hand from the input image sequence. The 2D hand trajectory is then fed to a neural network for gesture classification. Finally, the recognized gesture is interpreted into a control signal that enables the robot to perform the desired action. Through experiments, we investigate three basic features for gesture recognition: location, angle, and velocity. The results show that the proposed system can handle complex backgrounds, variable lighting conditions, and different people. Furthermore, a 96.32% recognition rate was achieved reliably in real time. | eng |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | 신경망 | - |
dc.subject | 유한 상태 오토마타 | - |
dc.subject | 손 추적 | - |
dc.subject | Neural Network | - |
dc.subject | Finite State Automata | - |
dc.subject | Hand Tracking | - |
dc.subject | Gesture Recognition | - |
dc.subject | 제스쳐 인식 | - |
dc.title | Design and implementation of visual gesture interface for human-robot interaction | - |
dc.title.alternative | 인간 컴퓨터 상호작용을 위한 시각 제스처 인터페이스의 설계 및 구현 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 243800/325007 | - |
dc.description.department | 한국과학기술원 : 전산학전공, | - |
dc.identifier.uid | 020034316 | - |
dc.contributor.localauthor | Yang, Hyun-Seung | - |
dc.contributor.localauthor | 양현승 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
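The abstract mentions three basic trajectory features for gesture recognition: location, angle, and velocity. A minimal sketch of how such features could be derived from a sequence of tracked hand centroids is shown below; the function name and the bounding-box normalization are illustrative assumptions, not details taken from the thesis itself:

```python
import numpy as np

def trajectory_features(points):
    """Compute location, angle, and velocity features from a 2D hand
    trajectory (hypothetical extractor; the thesis's exact feature
    definitions and normalization are not given in the abstract)."""
    pts = np.asarray(points, dtype=float)
    deltas = np.diff(pts, axis=0)                    # frame-to-frame motion
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])  # direction of motion
    velocity = np.linalg.norm(deltas, axis=1)        # speed per frame
    # Normalize locations to [0, 1] within the trajectory's bounding box
    span = pts.max(axis=0) - pts.min(axis=0)
    span[span == 0] = 1.0                            # avoid division by zero
    location = (pts - pts.min(axis=0)) / span
    return location, angles, velocity
```

Per-frame feature vectors of this kind would then be concatenated or resampled to a fixed length before being fed to the neural network classifier described in the abstract.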