Partial Matching of Facial Expression Sequence Using Over-Complete Transition Dictionary for Emotion Recognition

Cited 11 times in Web of Science; cited 0 times in Scopus
  • Hits: 485
  • Downloads: 0
DC Field: Value [Language]
dc.contributor.author: Lee, Seung Ho [ko]
dc.contributor.author: Ro, Yong-Man [ko]
dc.date.accessioned: 2017-01-18T02:51:05Z
dc.date.available: 2017-01-18T02:51:05Z
dc.date.created: 2015-10-08
dc.date.issued: 2016-10
dc.identifier.citation: IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, v.7, no.4, pp.389 - 408
dc.identifier.issn: 1949-3045
dc.identifier.uri: http://hdl.handle.net/10203/219660
dc.description.abstract: Facial dynamics contain useful information for facial expression recognition (FER). However, exploiting dynamics in FER is challenging, mainly because of the variety of expression transitions. For example, video sequences belonging to the same emotion class may differ in transition duration and/or transition type (e.g., onset versus offset). Such temporal mismatches between query and training video sequences can degrade FER performance. This paper proposes a new partial matching framework that aims to overcome the temporal mismatch of expression transitions. During the training stage, we construct an over-complete transition dictionary that contains many possible partial expression transitions. During the test stage, we extract a number of partial expression transitions from a query video sequence and analyze each one individually. This increases the possibility of matching a partial expression transition in the query video sequence against the partial expression transitions in the over-complete transition dictionary. To make partial matching subject-independent and robust to temporal mismatch, each partial expression transition is defined as the facial shape displacement between a pair of face clusters. Experimental results show that the proposed method is robust to variations in transition duration and transition type in subject-independent recognition.
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers
dc.subject: LOCAL BINARY PATTERNS
dc.subject: SPARSE REPRESENTATION
dc.subject: FACE RECOGNITION
dc.subject: IMAGE SEQUENCES
dc.subject: MINIMIZATION
dc.subject: REDUCTION
dc.subject: ALGORITHM
dc.subject: FEATURES
dc.subject: MANIFOLD
dc.subject: PCA
dc.title: Partial Matching of Facial Expression Sequence Using Over-Complete Transition Dictionary for Emotion Recognition
dc.type: Article
dc.identifier.wosid: 000389328800008
dc.identifier.scopusid: 2-s2.0-85027465989
dc.type.rims: ART
dc.citation.volume: 7
dc.citation.issue: 4
dc.citation.beginningpage: 389
dc.citation.endingpage: 408
dc.citation.publicationname: IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
dc.identifier.doi: 10.1109/TAFFC.2015.2496320
dc.contributor.localauthor: Ro, Yong-Man
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Facial expression recognition (FER)
dc.subject.keywordAuthor: sparse representation based classifier (SRC)
dc.subject.keywordAuthor: over-complete transition dictionary
dc.subject.keywordAuthor: partial expression transition features
dc.subject.keywordPlus: LOCAL BINARY PATTERNS
dc.subject.keywordPlus: SPARSE REPRESENTATION
dc.subject.keywordPlus: FACE RECOGNITION
dc.subject.keywordPlus: IMAGE SEQUENCES
dc.subject.keywordPlus: MINIMIZATION
dc.subject.keywordPlus: REDUCTION
dc.subject.keywordPlus: ALGORITHM
dc.subject.keywordPlus: FEATURES
dc.subject.keywordPlus: MANIFOLD
dc.subject.keywordPlus: PCA
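The author keywords above name a sparse representation based classifier (SRC) operating on an over-complete transition dictionary. The following is a minimal, generic sketch of SRC over such a dictionary, not the paper's exact formulation: it substitutes greedy orthogonal matching pursuit for l1 minimization, and the function names, dictionary layout, and parameters are all hypothetical.

```python
import numpy as np

def omp(D, y, n_nonzero=5):
    """Greedy orthogonal matching pursuit: sparse-code y over dictionary D.

    D: (d, n) matrix whose columns are (hypothetical) partial-transition features.
    A stand-in for the l1 solver an SRC would typically use.
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected atoms jointly (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y, n_nonzero=5):
    """Assign y to the class whose atoms best reconstruct it (minimum residual)."""
    x = omp(D, y, n_nonzero)
    best_cls, best_err = None, np.inf
    for cls in np.unique(labels):
        # Zero out coefficients belonging to other classes.
        x_cls = np.where(labels == cls, x, 0.0)
        err = np.linalg.norm(y - D @ x_cls)
        if err < best_err:
            best_cls, best_err = cls, err
    return best_cls
```

In an SRC, the over-completeness of the dictionary (many more atoms than feature dimensions) is what lets a query transition find a close match even when its duration or type differs from any single training sequence.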
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS