DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Dae Hoe | ko |
dc.contributor.author | Baddar, Wissam J. | ko |
dc.contributor.author | Jang, Jinhyeok | ko |
dc.contributor.author | Ro, Yong Man | ko |
dc.date.accessioned | 2019-06-24T01:30:22Z | - |
dc.date.available | 2019-06-24T01:30:22Z | - |
dc.date.created | 2017-05-15 | - |
dc.date.issued | 2019-04 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, v.10, no.2, pp.223 - 236 | - |
dc.identifier.issn | 1949-3045 | - |
dc.identifier.uri | http://hdl.handle.net/10203/262792 | - |
dc.description.abstract | Facial expression recognition (FER) is increasingly gaining importance in various emerging affective computing applications. In practice, achieving accurate FER is challenging due to large inter-personal variations, such as variations in expression intensity. In this paper, we propose a new spatio-temporal feature representation learning method for FER that is robust to expression intensity variations. The proposed method utilizes representative expression-states (e.g., onset, apex, and offset of expressions), which can be identified in facial sequences regardless of expression intensity. The characteristics of facial expressions are encoded in two parts. In the first part, spatial image characteristics of the representative expression-state frames are learned via a convolutional neural network. Five objective terms are proposed to improve the expression-class separability of the spatial feature representation. In the second part, temporal characteristics of the spatial feature representation from the first part are learned with a long short-term memory (LSTM) network over the facial expression sequence. Comprehensive experiments have been conducted on a deliberate expression dataset (MMI) and a spontaneous micro-expression dataset (CASME II). Experimental results showed that the proposed method achieved higher recognition rates on both datasets than state-of-the-art methods. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Multi-Objective based Spatio-Temporal Feature Representation Learning Robust to Expression Intensity Variations for Facial Expression Recognition | - |
dc.type | Article | - |
dc.identifier.wosid | 000470020700007 | - |
dc.identifier.scopusid | 2-s2.0-85066636077 | - |
dc.type.rims | ART | - |
dc.citation.volume | 10 | - |
dc.citation.issue | 2 | - |
dc.citation.beginningpage | 223 | - |
dc.citation.endingpage | 236 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON AFFECTIVE COMPUTING | - |
dc.identifier.doi | 10.1109/TAFFC.2017.2695999 | - |
dc.contributor.localauthor | Ro, Yong Man | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Facial expression recognition (FER) | - |
dc.subject.keywordAuthor | expression intensity variation | - |
dc.subject.keywordAuthor | spatio-temporal feature representation | - |
dc.subject.keywordAuthor | deep learning | - |
dc.subject.keywordAuthor | long short-term memory (LSTM) | - |
dc.subject.keywordPlus | LOCAL BINARY PATTERNS | - |
dc.subject.keywordPlus | FACE | - |
dc.subject.keywordPlus | MODEL | - |
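The abstract describes a two-part pipeline: a CNN encodes spatial features of representative expression-state frames (onset, apex, offset), and an LSTM then models their temporal dynamics. A minimal numpy sketch of that data flow is shown below; the linear-ReLU "CNN", the dimensions, and the single-layer LSTM cell are all illustrative stand-ins, not the paper's actual architecture or its five objective terms.

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def extract_spatial_features(frame, W):
    # Stand-in for the paper's CNN: a single linear projection + ReLU.
    return np.maximum(W @ frame, 0.0)


def lstm_step(x, h, c, params):
    # One step of a standard LSTM cell over the spatial feature x.
    Wx, Wh, b = params
    z = Wx @ x + Wh @ h + b
    H = h.size
    i = sigmoid(z[:H])          # input gate
    f = sigmoid(z[H:2 * H])     # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:])      # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c


# Toy dimensions (hypothetical, for illustration only).
D_img, D_feat, H = 16, 8, 4
W_cnn = rng.standard_normal((D_feat, D_img))
params = (
    rng.standard_normal((4 * H, D_feat)),
    rng.standard_normal((4 * H, H)),
    np.zeros(4 * H),
)

# Three representative expression-state frames: onset, apex, offset.
frames = [rng.standard_normal(D_img) for _ in range(3)]

h = np.zeros(H)
c = np.zeros(H)
for frame in frames:
    x = extract_spatial_features(frame, W_cnn)  # part 1: spatial encoding
    h, c = lstm_step(x, h, c, params)           # part 2: temporal encoding

print(h.shape)  # final hidden state summarizing the sequence
```

Because the state-frames are chosen by expression state rather than by fixed timestamps, the same three-step sequence is produced whether the expression is subtle or exaggerated, which is the intuition behind the method's robustness to intensity variation.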