Multimodal emotion recognition using multi-attribute aggregation for attributes with uncertainties
불확실성을 고려한 다중 속성 통합 프레임워크 기반 멀티모달 감성인식

DC Field: Value
dc.contributor.advisor: Kim, Jong-Hwan
dc.contributor.advisor: 김종환
dc.contributor.author: Zaheer, Sheir Afgen
dc.contributor.author: AFGEN
dc.date.accessioned: 2018-05-23T19:37:35Z
dc.date.available: 2018-05-23T19:37:35Z
dc.date.issued: 2017
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=675826&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/242028
dc.description: Thesis (Ph.D.) - 한국과학기술원 (KAIST): School of Electrical Engineering, 2017.2, [iv, 61 p.]
dc.description.abstract: Affective interaction between humans and robots/machines is a cherished goal for socially intelligent robots/machines. The ability to recognize human emotional states is an essential prerequisite to such affective interaction. This dissertation therefore addresses human emotion recognition by first processing each mode of human communication individually and then aggregating the results through a classification and aggregation framework. Specifically, the proposed framework analyzes speech acoustics, facial expressions, and body language using unimodal emotion classifiers. Speech emotion is classified with a deep neural network (DNN), while the facial and body language emotion classifiers are built on supervised fuzzy adaptive resonance theory (fuzzy ARTMAP). The speech emotion classifier uses acoustic features, the facial emotion classifier uses features based on facial animation parameters (FAP), and the body language emotion classifier formulates its features from head and hand motion-capture data. These unimodal evaluations are then aggregated using a fuzzy integral for interval type-2 fuzzy-valued attributes (FIIFA), proposed in this dissertation as a novel aggregation framework for attribute evaluations with linguistic and numeric uncertainties. Moreover, FIIFA utilizes reliability-based preferences for the unimodal evaluations, which the dissertation proposes to generate from the per-emotion accuracies of the unimodal classifiers. The framework was tested and compared against the existing state of the art, and the results show that it significantly outperforms existing techniques. Furthermore, because of late fusion, the proposed approach remains functional when all but one of the modes of communication are unavailable.
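The reliability-weighted late fusion described in the abstract can be sketched in a few lines. This is a simplified, hypothetical stand-in (plain accuracy-weighted averaging rather than the dissertation's interval type-2 fuzzy integral, FIIFA), and all modality names, scores, and accuracy values below are invented for illustration.

```python
# Illustrative late-fusion sketch: combine unimodal emotion scores using
# reliability weights derived from each classifier's per-emotion accuracy.
# NOT the dissertation's FIIFA; a minimal weighted-average stand-in.

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]

def fuse(scores, reliabilities):
    """Reliability-weighted late fusion of per-modality emotion scores.

    scores:        {modality: {emotion: confidence in [0, 1]}}
    reliabilities: {modality: {emotion: classifier accuracy in [0, 1]}}
    Modalities absent from `scores` (unavailable modes) are simply skipped,
    mirroring the robustness-to-missing-modes property of late fusion.
    """
    fused = {}
    for e in EMOTIONS:
        num = sum(scores[m][e] * reliabilities[m][e] for m in scores)
        den = sum(reliabilities[m][e] for m in scores)
        fused[e] = num / den if den else 0.0
    return fused

# Hypothetical unimodal outputs; the body-language mode is unavailable here.
scores = {
    "speech": {"anger": 0.7, "happiness": 0.1, "sadness": 0.1, "neutral": 0.1},
    "face":   {"anger": 0.5, "happiness": 0.3, "sadness": 0.1, "neutral": 0.1},
}
reliabilities = {
    "speech": {e: 0.8 for e in EMOTIONS},  # assumed per-emotion accuracies
    "face":   {e: 0.6 for e in EMOTIONS},
}

fused = fuse(scores, reliabilities)
print(max(fused, key=fused.get))  # prints "anger" for these inputs
```

Because fusion happens after each unimodal classifier has produced its own evaluation, dropping a modality only removes terms from the weighted sum; recognition degrades gracefully instead of failing outright.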
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: Emotion recognition
dc.subject: Multi-attribute aggregation
dc.subject: Affective human-robot interaction (HRI)
dc.subject: Fuzzy integral
dc.subject: Default ARTMAP
dc.subject: 감정인식 (emotion recognition)
dc.subject: 다중 속성 통합 (multi-attribute aggregation)
dc.subject: 감정적 상호 작용 (affective interaction)
dc.subject: 퍼지 적분 (fuzzy integral)
dc.title: Multimodal emotion recognition using multi-attribute aggregation for attributes with uncertainties
dc.title.alternative: 불확실성을 고려한 다중 속성 통합 프레임워크 기반 멀티모달 감성인식
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원: 전기및전자공학부 (KAIST, School of Electrical Engineering)
Appears in Collection: EE-Theses_Ph.D. (doctoral theses)
Files in This Item
There are no files associated with this item.
