Enhanced sign language transcription system via hand tracking and pose estimation

Abstract
In this study, we propose a new system for constructing parallel corpora for sign languages, which are generally under-resourced compared to spoken languages. To achieve scalability and accessibility in data collection and corpus construction, our system uses deep-learning-based techniques and predicts depth information to perform pose estimation on hand information obtained from video recordings made with a single RGB camera. The estimated poses are then transcribed into SignWriting expressions. We quantitatively evaluate the accuracy of the hand tracking and hand pose estimation modules of our system using the American Sign Language Image Dataset and the American Sign Language Lexicon Video Dataset. The evaluation results show that our transcription system has high potential to be successfully employed in constructing a sizable sign language corpus from various types of video resources.
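The abstract describes a three-stage pipeline: hand tracking on RGB frames, depth-aware hand pose estimation, and transcription of the estimated poses into SignWriting. The sketch below illustrates that flow only at the interface level; all class and function names (HandPose, track_hands, estimate_pose, transcribe_to_signwriting) are hypothetical placeholders and do not reflect the authors' actual implementation or models.

```python
# Illustrative sketch of the pipeline described in the abstract, assuming a
# generic 21-keypoint hand model. Real detectors and depth-predicting pose
# estimators would replace the placeholder bodies below.

from dataclasses import dataclass
from typing import List, Sequence
import numpy as np


@dataclass
class HandPose:
    """3D keypoints (x, y, estimated depth) for one tracked hand."""
    keypoints: np.ndarray  # shape (21, 3)


def track_hands(frame: np.ndarray) -> List[np.ndarray]:
    """Placeholder hand tracker: returns cropped hand regions from an RGB frame."""
    h, w, _ = frame.shape
    return [frame[: h // 2, : w // 2]]  # dummy crop standing in for a detector


def estimate_pose(hand_crop: np.ndarray) -> HandPose:
    """Placeholder pose estimator; a real system would predict depth with a CNN."""
    return HandPose(keypoints=np.random.rand(21, 3))


def transcribe_to_signwriting(poses: Sequence[HandPose]) -> List[str]:
    """Map estimated poses to coarse SignWriting handshape labels (toy rule)."""
    symbols = []
    for pose in poses:
        # Spread of the 2D keypoints decides "open" vs. "closed" handshape.
        spread = float(np.std(pose.keypoints[:, :2]))
        symbols.append("open-hand" if spread > 0.25 else "closed-fist")
    return symbols


if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a video frame
    poses = [estimate_pose(crop) for crop in track_hands(frame)]
    print(transcribe_to_signwriting(poses))
```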
Publisher
Korean Institute of Information Scientists and Engineers
Issue Date
2016-09
Language
English
Citation
Journal of Computing Science and Engineering, v.10, no.3, pp.95 - 101
ISSN
1976-4677
DOI
10.5626/JCSE.2016.10.3.95
URI
http://hdl.handle.net/10203/222807
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
