On-the-Fly Facial Expression Prediction using LSTM Encoded Appearance-Suppressed Dynamics

Cited 7 times in Web of Science; cited 0 times in Scopus
  • Hits: 295
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Alhaj Baddar, Wissam | ko
dc.contributor.author | Lee, Sangmin | ko
dc.contributor.author | Ro, Yong Man | ko
dc.date.accessioned | 2022-04-15T06:50:52Z | -
dc.date.available | 2022-04-15T06:50:52Z | -
dc.date.created | 2019-11-20 | -
dc.date.issued | 2022-01 | -
dc.identifier.citation | IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, v.13, no.1, pp.159 - 174 | -
dc.identifier.issn | 1949-3045 | -
dc.identifier.uri | http://hdl.handle.net/10203/294824 | -
dc.description.abstract | Encoding facial expression dynamics is effective for classifying and recognizing facial expressions. Most facial dynamics-based methods assume that a sequence is temporally segmented before prediction. This requires the prediction to wait until a full sequence is available, resulting in prediction delay. To reduce this delay and enable "on-the-fly" prediction (as frames are fed to the system), we propose a new dynamics feature learning method that allows prediction from partial (incomplete) sequences. The proposed method utilizes the readiness of recurrent neural networks (RNNs) for on-the-fly prediction and introduces novel learning constraints that induce early prediction from partial sequences. We further show that a delay in accurate prediction using RNNs can originate from the effect that the subject's appearance has on the spatio-temporal features encoded by the RNN; we refer to this effect as "appearance bias". We propose the appearance-suppressed dynamics feature, which utilizes a static sequence to suppress the appearance bias. Experimental results show that the proposed method achieves higher recognition rates than state-of-the-art methods on publicly available datasets. The results also verify that the proposed method improves on-the-fly prediction at subtle expression frames early in the sequence, using partial sequence inputs. | -
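The two ideas in the abstract — emitting a prediction at every incoming frame rather than waiting for a segmented sequence, and subtracting the encoding of a static sequence to suppress appearance bias — can be sketched as follows. This is a minimal illustrative sketch with random weights and assumed feature sizes, not the authors' trained model or exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, hidden_dim, n_classes = 8, 16, 6  # assumed sizes, not from the paper

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stacked LSTM gate weights (input, forget, cell, output) and a per-frame
# classifier head; randomly initialized for illustration only.
W = rng.normal(0, 0.1, (4 * hidden_dim, feat_dim + hidden_dim))
b = np.zeros(4 * hidden_dim)
W_out = rng.normal(0, 0.1, (n_classes, hidden_dim))

def lstm_step(x, h, c):
    """One LSTM cell update for a single frame's feature vector."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def encode(frames):
    """Return the hidden state after EACH frame, so a prediction is
    available on-the-fly from every partial (incomplete) sequence."""
    h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
    hs = []
    for x in frames:
        h, c = lstm_step(x, h, c)
        hs.append(h)
    return np.stack(hs)

frames = rng.normal(size=(10, feat_dim))  # a 10-frame sequence of features

# Appearance suppression (sketch of the idea): encode a "static" sequence
# (the first frame repeated) and subtract it, so features common to both
# encodings -- the subject's appearance -- are suppressed.
h_dyn = encode(frames)
h_static = encode(np.repeat(frames[:1], len(frames), axis=0))
suppressed = h_dyn - h_static

# One class prediction per frame, i.e. without waiting for the full sequence.
preds = [int(np.argmax(W_out @ h)) for h in suppressed]
print(len(preds))
```

With random weights the predicted classes are meaningless; the point is the shape of the computation: one prediction per incoming frame, computed from appearance-suppressed features.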
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | On-the-Fly Facial Expression Prediction using LSTM Encoded Appearance-Suppressed Dynamics | -
dc.type | Article | -
dc.identifier.wosid | 000766268600013 | -
dc.identifier.scopusid | 2-s2.0-85126082625 | -
dc.type.rims | ART | -
dc.citation.volume | 13 | -
dc.citation.issue | 1 | -
dc.citation.beginningpage | 159 | -
dc.citation.endingpage | 174 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON AFFECTIVE COMPUTING | -
dc.identifier.doi | 10.1109/TAFFC.2019.2957465 | -
dc.contributor.localauthor | Ro, Yong Man | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Facial expression recognition (FER); dynamic features; on-the-fly prediction; recurrent neural networks (RNN); long short-term memory (LSTM) | -
dc.subject.keywordPlus | LOCAL BINARY PATTERNS; RECOGNITION; ROBUST; FACE; MODEL | -
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS