Attention-Augmented Electromagnetic Representation of Sign Language for Human-Computer Interaction in Deaf-and-Mute Community

Cited 2 times in Web of Science; cited 0 times in Scopus
To provide a new interface between computers and deaf-and-mute users, this paper proposes a method of translating sign language into a sequence of time-frequency spectrograms based on a 24 GHz 1T-2R Doppler radar sensor. By applying time-frequency analysis to the two pairs of intermediate-frequency I/Q signals, a complete signed sentence can be captured and segmented according to its electromagnetic patterns. Rather than a traditional classifier, a convolutional neural network is used to classify the basic signs and render the complete sentence intelligible to the computer; for greater accuracy, an attention module is added to the network. The proposed method reaches 96% accuracy in translating short sentences such as 'Yes', 'No', 'Thanks', and 'Hello', which are among the most frequently used in sign language. This work can be regarded as a supplement to current human-computer interaction methods, especially for the deaf-and-mute community.
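The processing chain summarized above (I/Q capture, time-frequency analysis, and segmentation by electromagnetic patterns) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the window length, hop size, and energy threshold (`win_len`, `hop`, `rel_threshold`) are assumed values, the input is a synthetic I/Q signal rather than real radar data, and the CNN/attention classifier stage is omitted.

```python
import numpy as np

np.random.seed(0)

def doppler_spectrogram(iq, win_len=128, hop=32):
    """Short-time FFT of a complex I/Q signal. Rows are Doppler bins
    (fftshifted so zero Doppler sits in the middle), columns are time frames."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(iq) - win_len) // hop
    frames = np.stack([iq[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    spec = np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)
    return np.abs(spec).T  # shape: (win_len, n_frames)

def segment_by_energy(spec, rel_threshold=0.2):
    """Mark time frames whose total energy exceeds a fraction of the peak
    frame energy -- a crude stand-in for the paper's sign segmentation."""
    energy = spec.sum(axis=0)
    return energy > rel_threshold * energy.max()

# Synthetic example: 1 s of weak noise with a 200 Hz Doppler "gesture"
# occupying the middle 0.2 s, sampled at 2 kHz.
fs = 2000
t = np.arange(fs) / fs
iq = 0.001 * (np.random.randn(fs) + 1j * np.random.randn(fs))
burst = (t > 0.4) & (t < 0.6)
iq[burst] += np.exp(2j * np.pi * 200 * t[burst])

spec = doppler_spectrogram(iq)       # time-frequency spectrogram
active = segment_by_energy(spec)     # frames belonging to the gesture
```

In a full pipeline, the `active` mask would delimit individual signs, and each segmented spectrogram patch would then be fed to the attention-augmented CNN for classification.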
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2021-12
Language
English
Citation
2021 IEEE USNC-URSI Radio Science Meeting (Joint with AP-S Symposium), USNC-URSI 2021, pp. 47-48
ISSN
2572-3804
DOI
10.23919/USNC-URSI51813.2021.9703456
URI
http://hdl.handle.net/10203/312477
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.