Decision-Level Fusion Method for Emotion Recognition using Multimodal Emotion Recognition Information

Human emotion recognition is an important capability for social robots. Previous research has studied emotion recognizers with many modalities, but several problems lower recognition rates when a recognizer is applied to a robot. This paper proposes a decision-level fusion method that takes the outputs of each recognizer as input and confirms which combination of features achieves the highest accuracy. We used EdNet, a facial expression recognizer developed at KAIST based on Convolutional Neural Networks (CNNs), and a speech analytics engine developed for speech emotion recognition. Finally, we confirmed a highest accuracy of 43.40% using an artificial neural network (ANN) or the k-Nearest Neighbor (k-NN) algorithm to classify combinations of features from EdNet and the speech analytics engine.
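
As an illustration, the sketch below shows one way such decision-level fusion could be wired up: each modality's recognizer emits a per-class probability vector, the vectors are concatenated, and a second-stage k-NN or ANN (MLP) classifier is trained on the combined features. This is a minimal, assumption-laden sketch using scikit-learn, not the paper's implementation; the seven-class emotion set and the random stand-in outputs for EdNet and the speech analytics engine are hypothetical placeholders.

```python
# Minimal sketch of decision-level fusion (not the paper's code).
# Assumes each recognizer outputs a 7-class emotion probability vector;
# face_probs / speech_probs are hypothetical stand-ins for the outputs
# of EdNet and the speech analytics engine.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_classes = 500, 7

# Random stand-ins for the per-modality recognizer outputs.
face_probs = rng.dirichlet(np.ones(n_classes), size=n_samples)
speech_probs = rng.dirichlet(np.ones(n_classes), size=n_samples)
labels = rng.integers(0, n_classes, size=n_samples)

# Decision-level fusion: concatenate the two recognizers' output
# vectors and train a second-stage classifier on the combination.
fused = np.hstack([face_probs, speech_probs])
X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, random_state=0)

for clf in (KNeighborsClassifier(n_neighbors=5),
            MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__,
          accuracy_score(y_test, clf.predict(X_test)))
```

One appeal of fusing at the decision level is that the two recognizers stay independent: either modality's model can be retrained or replaced without touching the other, since the fusion stage only sees their output vectors.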
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2018-06-27
Language
English
Citation
15th International Conference on Ubiquitous Robots, UR 2018, pp. 472-476
DOI
10.1109/URAI.2018.8441795
URI
http://hdl.handle.net/10203/246121
Appears in Collection
ME-Conference Papers (Conference Papers)