Design and implementation of visual gesture interface for human-robot interaction
(Alternative title: 인간 컴퓨터 상호작용을 위한 시각 제스처 인터페이스의 설계 및 구현 / Design and implementation of a visual gesture interface for human-computer interaction)

DC Field : Value
dc.contributor.advisor: Yang, Hyun-Seung
dc.contributor.advisor: 양현승
dc.contributor.author: Shang, Yu-Liang
dc.contributor.author: 상유량
dc.date.accessioned: 2011-12-13T06:04:46Z
dc.date.available: 2011-12-13T06:04:46Z
dc.date.issued: 2005
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=243800&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/34646
dc.description: Master's thesis - Korea Advanced Institute of Science and Technology (KAIST), Computer Science Division, 2005.2, [v, 48, vi p.]
dc.description.abstract: Service robotics is currently a pivotal research area in robotics, with enormous social potential. Since service robots interact directly with people, finding natural and intuitive interfaces is of fundamental importance. While past work has predominantly focused on issues such as navigation and manipulation, relatively few robotic systems are equipped with flexible user interfaces that permit controlling the robot by "natural" means. In this thesis, a visual gesture interface for human-robot interaction was developed and tested on the service robot AMI in a real environment. First, an interaction model combining speech and gesture was designed. Following this model, a fast and robust active face detection algorithm enables the robot to track the operator's face reliably. Next, a hand tracker combining skin color detection and motion segmentation extracts the centroid of the operator's hand from the input image sequence. The resulting 2D hand trajectory is fed to a neural network for gesture classification. Finally, each recognized gesture is interpreted as a control signal that makes the robot perform the desired action. Through experiments, we investigate three basic features for gesture recognition: location, angle, and velocity. The results show that the proposed system can handle complex backgrounds, variable lighting conditions, and different people. Furthermore, a recognition rate of 96.32% was achieved reliably in real time.
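The abstract outlines a concrete pipeline: skin color plus motion segmentation to locate the hand centroid, a 2D trajectory built from those centroids, and per-point location, angle, and velocity features fed to a neural network. The sketch below is a minimal illustrative reconstruction of that pipeline, not the thesis's actual code: the use of OpenCV/NumPy, the HSV skin thresholds, the motion-difference threshold, and the feature layout are all assumptions.

    # Illustrative sketch only; parameters and library choices are assumptions,
    # not taken from the thesis.
    import cv2
    import numpy as np

    def hand_centroid(frame_bgr, prev_gray=None):
        """Combine skin-color detection with frame-difference motion
        segmentation; return ((x, y) centroid or None, current gray frame)."""
        # Skin-color mask in HSV space (bounds are illustrative guesses).
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Motion mask: pixels that changed since the previous frame.
            motion = cv2.threshold(cv2.absdiff(gray, prev_gray), 20, 255,
                                   cv2.THRESH_BINARY)[1]
            skin = cv2.bitwise_and(skin, motion)
        m = cv2.moments(skin, binaryImage=True)
        if m["m00"] == 0:
            return None, gray
        return (m["m10"] / m["m00"], m["m01"] / m["m00"]), gray

    def trajectory_features(traj):
        """Per-point features from a 2D centroid trajectory:
        location (x, y), angle of motion, and velocity magnitude."""
        pts = np.asarray(traj, dtype=np.float64)
        deltas = np.diff(pts, axis=0)
        angle = np.arctan2(deltas[:, 1], deltas[:, 0])      # motion direction
        velocity = np.linalg.norm(deltas, axis=1)           # speed per frame
        return np.column_stack([pts[1:], angle, velocity])  # (N-1, 4) rows

In use, each frame's centroid would be appended to a trajectory buffer, and a completed gesture's feature rows would form the classifier's input; going by the subject keywords, a finite state automaton presumably segments where each gesture starts and ends.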
dc.language: eng
dc.publisher: 한국과학기술원 (Korea Advanced Institute of Science and Technology)
dc.subject: 신경망 (Neural Network)
dc.subject: 유한 상태 오토마타 (Finite State Automata)
dc.subject: 손 추적 (Hand Tracking)
dc.subject: Neural Network
dc.subject: Finite State Automata
dc.subject: Hand Tracking
dc.subject: Gesture Recognition
dc.subject: 제스쳐 인식 (Gesture Recognition)
dc.title: Design and implementation of visual gesture interface for human-robot interaction
dc.title.alternative: 인간 컴퓨터 상호작용을 위한 시각 제스처 인터페이스의 설계 및 구현 (Design and implementation of a visual gesture interface for human-computer interaction)
dc.type: Thesis (Master)
dc.identifier.CNRN: 243800/325007
dc.description.department: 한국과학기술원 (KAIST), Computer Science Division
dc.identifier.uid: 020034316
dc.contributor.localauthor: Yang, Hyun-Seung
dc.contributor.localauthor: 양현승
Appears in Collection
CS-Theses_Master (석사논문 / Master's theses)
Files in This Item
There are no files associated with this item.
