Facial emotion recognition in presence of speech using a default ARTMAP classifier

DC Field | Value | Language
dc.contributor.author | Zaheer, Sheir Afgen | ko
dc.contributor.author | Kim, Jong-Hwan | ko
dc.date.accessioned | 2023-08-24T11:00:50Z | -
dc.date.available | 2023-08-24T11:00:50Z | -
dc.date.created | 2023-07-06 | -
dc.date.issued | 2017-11 | -
dc.identifier.citation | 9th International Joint Conference on Computational Intelligence, IJCCI 2017, pp.436 - 442 | -
dc.identifier.uri | http://hdl.handle.net/10203/311805 | -
dc.description.abstract | This paper proposes a scheme for facial emotion recognition in the presence of speech, i.e., when the interacting subjects are also speaking. We propose the use of default ARTMAP, a variant of fuzzy ARTMAP, as a classifier for facial emotions using feature vectors derived from facial animation parameters (FAPs). The proposed scheme is tested on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database. The results show the effectiveness of the approach as a standalone facial emotion classifier, as well as its relatively superior performance on IEMOCAP in comparison to existing similar approaches. | -
dc.language | English | -
dc.publisher | SciTePress | -
dc.title | Facial emotion recognition in presence of speech using a default ARTMAP classifier | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85055252757 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 436 | -
dc.citation.endingpage | 442 | -
dc.citation.publicationname | 9th International Joint Conference on Computational Intelligence, IJCCI 2017 | -
dc.identifier.conferencecountry | PO | -
dc.identifier.conferencelocation | Funchal | -
dc.identifier.doi | 10.5220/0006572204360442 | -
dc.contributor.localauthor | Kim, Jong-Hwan | -
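
The abstract above describes the classifier only at a high level. The following is a minimal, illustrative sketch of a simplified fuzzy-ARTMAP-style classifier (complement coding, winner-take-all choice, match tracking), intended only to make the general ARTMAP mechanism concrete; it is not the paper's default ARTMAP implementation. The class name SimplifiedFuzzyARTMAP, all parameter values, and the random vectors standing in for FAP-derived features are assumptions made for illustration.

```python
# A minimal sketch of a simplified fuzzy-ARTMAP-style classifier.
# NOT the authors' default ARTMAP; FAP feature extraction from IEMOCAP is
# assumed to happen elsewhere, so random vectors stand in for FAP features.
import numpy as np


class SimplifiedFuzzyARTMAP:
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0, eps=0.001):
        self.rho = rho          # baseline vigilance
        self.alpha = alpha      # choice parameter
        self.beta = beta        # learning rate (1.0 = fast learning)
        self.eps = eps          # match-tracking increment
        self.w = []             # category weight vectors (complement-coded)
        self.labels = []        # class label attached to each category

    @staticmethod
    def _complement_code(x):
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.concatenate([x, 1.0 - x])

    def _activations(self, i_vec):
        # Fuzzy choice value T_j and match value M_j for every category.
        T, M = [], []
        for w_j in self.w:
            overlap = np.minimum(i_vec, w_j).sum()
            T.append(overlap / (self.alpha + w_j.sum()))
            M.append(overlap / i_vec.sum())
        return np.array(T), np.array(M)

    def train_one(self, x, label):
        i_vec = self._complement_code(x)
        if not self.w:                       # first sample: commit a category
            self.w.append(i_vec.copy())
            self.labels.append(label)
            return
        rho = self.rho                       # match tracking starts at baseline
        T, M = self._activations(i_vec)
        for j in np.argsort(-T):             # search categories by choice value
            if M[j] < rho:
                continue
            if self.labels[j] == label:      # resonance with correct label: learn
                self.w[j] = (self.beta * np.minimum(i_vec, self.w[j])
                             + (1.0 - self.beta) * self.w[j])
                return
            rho = M[j] + self.eps            # wrong label: raise vigilance
        self.w.append(i_vec.copy())          # no category fits: commit a new one
        self.labels.append(label)

    def predict(self, x):
        i_vec = self._complement_code(x)
        T, _ = self._activations(i_vec)
        return self.labels[int(np.argmax(T))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical stand-in for FAP-derived feature vectors scaled to [0, 1]:
    # two noisy clusters labelled "happy" and "angry".
    X = np.vstack([rng.normal(0.2, 0.05, (50, 10)),
                   rng.normal(0.8, 0.05, (50, 10))])
    y = ["happy"] * 50 + ["angry"] * 50
    clf = SimplifiedFuzzyARTMAP(rho=0.7)
    for xi, yi in zip(X, y):
        clf.train_one(xi, yi)
    acc = np.mean([clf.predict(xi) == yi for xi, yi in zip(X, y)])
    print(f"training-set accuracy: {acc:.2f}")
```

Winner-take-all prediction and fast learning (beta = 1.0) are used here purely to keep the sketch short; the default ARTMAP variant named in the abstract differs, among other things, in how activation is handled across categories at prediction time.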
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
