KSL-Guide: A Large-scale Korean Sign Language Dataset Including Interrogative Sentences for Guiding the Deaf and Hard-of-Hearing

Cited 1 time in Web of Science; cited 0 times in Scopus
DC Field: Value (Language)
dc.contributor.author: Ham, Soomin (ko)
dc.contributor.author: Park, Kibaek (ko)
dc.contributor.author: Jang, Yeongjun (ko)
dc.contributor.author: Oh, Youngtaek (ko)
dc.contributor.author: Yun, Seokmin (ko)
dc.contributor.author: Yoon, Sukwon (ko)
dc.contributor.author: Kim, Chang Jo (ko)
dc.contributor.author: Park, Han-Mu (ko)
dc.contributor.author: Kweon, In-So (ko)
dc.date.accessioned: 2023-09-12T01:03:47Z
dc.date.available: 2023-09-12T01:03:47Z
dc.date.created: 2023-09-12
dc.date.issued: 2021-12
dc.identifier.citation: 16th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021
dc.identifier.issn: 2326-5396
dc.identifier.uri: http://hdl.handle.net/10203/312466
dc.description.abstract: Many advancements in computer vision and machine learning have shown potential for significantly improving the lives of people with disabilities. In particular, recent research has demonstrated that deep neural network models can be used to bridge the gap between deaf people who use sign language and hearing people. The major impediment to advancing such models is the lack of high-quality, large-scale training data. Moreover, previously released sign language datasets include few or no interrogative sentences compared to declarative sentences. In this paper, we introduce a new publicly available large-scale Korean Sign Language (KSL) dataset, KSL-Guide, that includes both declarative sentences and a comparable number of interrogative sentences, which are required for a model to achieve high performance in real-world interactive tasks deployed in service applications. Our dataset contains a total of 121K sign language video samples featuring sentences and words performed by native KSL speakers, with extensive annotations (e.g., gloss, translation, keypoints, and timestamps). We exploit a multi-camera system to produce 3D human pose keypoints as well as 2D keypoints from multi-view RGB images. Our experiments quantitatively demonstrate that including interrogative sentences in training for sign language recognition and translation tasks greatly improves performance. Furthermore, we show qualitative results from a prototype application built on our dataset, providing an interactive guide service that helps lower the communication barrier between sign language speakers and hearing people.
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.title: KSL-Guide: A Large-scale Korean Sign Language Dataset Including Interrogative Sentences for Guiding the Deaf and Hard-of-Hearing
dc.type: Conference
dc.identifier.wosid: 000784811600077
dc.identifier.scopusid: 2-s2.0-85125100869
dc.type.rims: CONF
dc.citation.publicationname: 16th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021
dc.identifier.conferencecountry: II
dc.identifier.conferencelocation: Virtual
dc.identifier.doi: 10.1109/FG52635.2021.9667011
dc.contributor.localauthor: Kweon, In-So
dc.contributor.nonIdAuthor: Yun, Seokmin
dc.contributor.nonIdAuthor: Yoon, Sukwon
dc.contributor.nonIdAuthor: Kim, Chang Jo
dc.contributor.nonIdAuthor: Park, Han-Mu
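
The abstract above notes that the dataset's 3D human pose keypoints are produced from multi-view 2D keypoints via a multi-camera system. This record does not spell out the reconstruction method, but the standard technique for this setup is Direct Linear Transform (DLT) triangulation; the following is a minimal sketch of that general technique under assumed calibrated cameras, not the authors' actual pipeline. The camera matrices and pixel observations below are synthetic.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Triangulate one 3D point from its 2D observations in several
    calibrated views using the Direct Linear Transform (DLT).

    proj_mats : list of 3x4 camera projection matrices P_i = K_i [R_i | t_i]
    points_2d : list of (x, y) pixel observations, one per view
    """
    A = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous
        # 3D point X: x * (P[2] @ X) - P[0] @ X = 0, and likewise for y.
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    A = np.stack(A)
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two hypothetical calibrated cameras observing the point (0.1, 0.2, 2.0).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])   # shifted baseline
X_true = np.array([0.1, 0.2, 2.0, 1.0])
obs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate_point([P1, P2], obs))  # approximately [0.1, 0.2, 2.0]
```

In practice this per-joint triangulation would be run over every frame and every detected 2D keypoint, with outlier views filtered by detection confidence.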
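The abstract also lists the annotation types shipped with each sample (gloss, translation, keypoints, timestamps). As a rough illustration of how such a record might be organized, here is a hedged sketch: every field name, the flat JSON layout, and the load_sample helper are assumptions, since this page exposes no actual files or schema.

```python
import json
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class KSLGuideSample:
    """One video sample carrying the annotation types named in the abstract.

    Field names are hypothetical; the real on-disk schema of KSL-Guide
    is not visible from this record.
    """
    video_id: str
    gloss: List[str]                        # sign-gloss sequence
    translation: str                        # sentence-level translation
    timestamps: List[Tuple[float, float]]   # (start, end) seconds per gloss
    keypoints_2d: list                      # per view, per frame: (J, 2) joint arrays
    keypoints_3d: list                      # per frame: (J, 3) triangulated joints

def load_sample(path: str) -> KSLGuideSample:
    """Parse one annotation file, assuming a flat JSON layout."""
    with open(path, encoding="utf-8") as f:
        raw = json.load(f)
    raw["timestamps"] = [tuple(t) for t in raw["timestamps"]]
    return KSLGuideSample(**raw)
```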
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.

