Multi-Objective based Spatio-Temporal Feature Representation Learning Robust to Expression Intensity Variations for Facial Expression Recognition

Cited 119 times in Web of Science; cited 95 times in Scopus
DC Field | Value | Language
dc.contributor.author | Kim, Dae Hoe | ko
dc.contributor.author | Baddar, Wissam J. | ko
dc.contributor.author | Jang, Jinhyeok | ko
dc.contributor.author | Ro, Yong Man | ko
dc.date.accessioned | 2019-06-24T01:30:22Z | -
dc.date.available | 2019-06-24T01:30:22Z | -
dc.date.created | 2017-05-15 | -
dc.date.issued | 2019-04 | -
dc.identifier.citation | IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, v.10, no.2, pp.223 - 236 | -
dc.identifier.issn | 1949-3045 | -
dc.identifier.uri | http://hdl.handle.net/10203/262792 | -
dc.description.abstract | Facial expression recognition (FER) is increasingly gaining importance in various emerging affective computing applications. In practice, achieving accurate FER is challenging due to large inter-personal variations such as expression intensity variations. In this paper, we propose a new spatio-temporal feature representation learning method for FER that is robust to expression intensity variations. The proposed method utilizes representative expression-states (e.g., onset, apex, and offset of expressions), which can be specified in facial sequences regardless of the expression intensity. The characteristics of facial expressions are encoded in two parts. In the first part, spatial image characteristics of the representative expression-state frames are learned via a convolutional neural network. Five objective terms are proposed to improve the expression class separability of the spatial feature representation. In the second part, temporal characteristics of the spatial feature representation from the first part are learned with a long short-term memory (LSTM) of the facial expression. Comprehensive experiments have been conducted on a deliberate expression dataset (MMI) and a spontaneous micro-expression dataset (CASME II). Experimental results showed that the proposed method achieved higher recognition rates on both datasets than state-of-the-art methods. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Multi-Objective based Spatio-Temporal Feature Representation Learning Robust to Expression Intensity Variations for Facial Expression Recognition | -
dc.type | Article | -
dc.identifier.wosid | 000470020700007 | -
dc.identifier.scopusid | 2-s2.0-85066636077 | -
dc.type.rims | ART | -
dc.citation.volume | 10 | -
dc.citation.issue | 2 | -
dc.citation.beginningpage | 223 | -
dc.citation.endingpage | 236 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON AFFECTIVE COMPUTING | -
dc.identifier.doi | 10.1109/TAFFC.2017.2695999 | -
dc.contributor.localauthor | Ro, Yong Man | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Facial expression recognition (FER) | -
dc.subject.keywordAuthor | expression intensity variation | -
dc.subject.keywordAuthor | spatio-temporal feature representation | -
dc.subject.keywordAuthor | deep learning | -
dc.subject.keywordAuthor | long short-term memory (LSTM) | -
dc.subject.keywordPlus | LOCAL BINARY PATTERNS | -
dc.subject.keywordPlus | FACE | -
dc.subject.keywordPlus | MODEL | -
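The abstract describes a two-part pipeline: a CNN encodes spatial features of representative expression-state frames (onset, apex, offset), and an LSTM then models their temporal dynamics. A minimal NumPy sketch of that data flow, with a linear + ReLU stand-in for the CNN and toy dimensions (all names, sizes, and the single-layer encoder are illustrative assumptions, not the authors' architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_features(frame, w):
    """Stand-in spatial encoder (the paper uses a CNN): linear map + ReLU."""
    return np.maximum(frame @ w, 0.0)

def lstm_step(x, h, c, params):
    """One step of a standard LSTM cell over a spatial feature vector."""
    Wx, Wh, b = params
    z = x @ Wx + h @ Wh + b
    i, f, g, o = np.split(z, 4)
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)   # update cell state
    h = o * np.tanh(c)           # emit hidden state
    return h, c

D_IN, D_FEAT, D_HID = 16, 8, 4  # toy dimensions, not the paper's
w = rng.standard_normal((D_IN, D_FEAT))
params = (rng.standard_normal((D_FEAT, 4 * D_HID)),
          rng.standard_normal((D_HID, 4 * D_HID)),
          np.zeros(4 * D_HID))

# Representative expression-state frames (onset, apex, offset), here random.
frames = [rng.standard_normal(D_IN) for _ in range(3)]
h, c = np.zeros(D_HID), np.zeros(D_HID)
for frame in frames:
    h, c = lstm_step(spatial_features(frame, w), h, c, params)
# h is the final spatio-temporal representation a classifier would consume.
```

Because the LSTM consumes only the three expression-state features rather than every frame, the resulting representation depends on which states occur, not on how intensely they are expressed, which is the robustness property the abstract claims.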
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
