Hierarchical committee of deep convolutional neural networks for robust facial expression recognition

Cited 144 times in Web of Science; cited 0 times in Scopus
  • Hit: 1003
  • Download: 0
DC Field: Value (Language)
dc.contributor.author: Kim, Bo-Kyeong (ko)
dc.contributor.author: Roh, Jihyeon (ko)
dc.contributor.author: Dong, Suh Yeon (ko)
dc.contributor.author: Lee, Soo-Young (ko)
dc.date.accessioned: 2016-09-06T07:22:28Z
dc.date.available: 2016-09-06T07:22:28Z
dc.date.created: 2016-05-31
dc.date.issued: 2016-06
dc.identifier.citation: JOURNAL ON MULTIMODAL USER INTERFACES, v.10, no.2, pp.173-189
dc.identifier.issn: 1783-7677
dc.identifier.uri: http://hdl.handle.net/10203/212286
dc.description.abstract: This paper describes our approach towards robust facial expression recognition (FER) for the third Emotion Recognition in the Wild (EmotiW2015) challenge. We train multiple deep convolutional neural networks (deep CNNs) as committee members and combine their decisions. To improve this committee of deep CNNs, we present two strategies: (1) in order to obtain diverse decisions from deep CNNs, we vary network architecture, input normalization, and random weight initialization in training these deep models, and (2) in order to form a better committee in structural and decisional aspects, we construct a hierarchical architecture of the committee with exponentially-weighted decision fusion. In solving a seven-class problem of static FER in the wild for the EmotiW2015, we achieve a test accuracy of 61.6%. Moreover, on other public FER databases, our hierarchical committee of deep CNNs yields superior performance, outperforming or competing with state-of-the-art results for these databases. (An illustrative sketch of the fusion step follows the keyword list below.)
dc.language: English
dc.publisher: Springer
dc.title: Hierarchical committee of deep convolutional neural networks for robust facial expression recognition
dc.type: Article
dc.identifier.wosid: 000378580400008
dc.identifier.scopusid: 2-s2.0-84954425754
dc.type.rims: ART
dc.citation.volume: 10
dc.citation.issue: 2
dc.citation.beginningpage: 173
dc.citation.endingpage: 189
dc.citation.publicationname: JOURNAL ON MULTIMODAL USER INTERFACES
dc.identifier.doi: 10.1007/s12193-015-0209-0
dc.contributor.localauthor: Lee, Soo-Young
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Hierarchical committee
dc.subject.keywordAuthor: Exponentially-weighted decision fusion
dc.subject.keywordAuthor: Deep convolutional neural network
dc.subject.keywordAuthor: Facial expression recognition
dc.subject.keywordPlus: FACE DETECTION
dc.subject.keywordPlus: CLASSIFICATION
dc.subject.keywordPlus: CLASSIFIERS
dc.subject.keywordPlus: REPRESENTATION
dc.subject.keywordPlus: ENSEMBLES
dc.subject.keywordPlus: EXPERTS
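
The central combining step described in the abstract is exponentially-weighted decision fusion of the committee members' class posteriors. The sketch below is a minimal illustration, assuming one plausible weighting rule in which a member's weight grows exponentially with its validation accuracy; the function exp_weighted_fusion, the temperature parameter, and all toy numbers are hypothetical and are not taken from the paper.

```python
import numpy as np

def exp_weighted_fusion(member_probs, val_accuracies, temperature=0.25):
    """Fuse committee members' class posteriors with weights that grow
    exponentially with each member's validation accuracy.

    member_probs: (n_members, n_classes) array; each row is one member's
        softmax output for a single test image.
    val_accuracies: (n_members,) array of validation accuracies used to
        derive fusion weights (hypothetical weighting rule, not the paper's).
    temperature: smaller values amplify accuracy differences more strongly.
    """
    probs = np.asarray(member_probs, dtype=float)
    accs = np.asarray(val_accuracies, dtype=float)
    weights = np.exp(accs / temperature)   # exponential weighting
    weights /= weights.sum()               # normalize weights to sum to 1
    fused = weights @ probs                # weighted average, shape (n_classes,)
    return fused

# Toy example: 3 hypothetical committee members, 7 expression classes
# (angry, disgust, fear, happy, neutral, sad, surprise).
probs = np.array([
    [0.10, 0.05, 0.05, 0.50, 0.10, 0.10, 0.10],
    [0.05, 0.05, 0.10, 0.40, 0.20, 0.10, 0.10],
    [0.20, 0.10, 0.10, 0.20, 0.20, 0.10, 0.10],
])
accs = np.array([0.58, 0.61, 0.52])  # hypothetical validation accuracies
fused = exp_weighted_fusion(probs, accs)
print(fused, fused.argmax())  # index 3 ("happy") wins the fused vote
```

In the paper's hierarchical committee, fusion of this kind is applied at more than one level (within sub-committees and then across them); the sketch shows only a single flat level.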
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.