Human emotion recognition is an important capability for social robots. Previous research has studied emotion recognizers across many modalities, but several problems lower recognition rates when a recognizer is applied to a robot. This paper proposes a decision-level fusion method that takes the outputs of each recognizer as input and determines which combination of features achieves the highest accuracy. We used EdNet, a facial expression recognizer developed at KAIST based on Convolutional Neural Networks (CNNs), together with a speech analytics engine developed for speech emotion recognition. Finally, we confirmed an improved accuracy of 43.40% using an artificial neural network (ANN) or the k-Nearest Neighbors (k-NN) algorithm to classify combinations of features from EdNet and the speech analytics engine.
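The decision-level fusion described above can be sketched as follows: the per-class score vectors produced by the two recognizers are concatenated into one feature vector, which is then classified. This is a minimal illustration with a hand-rolled k-NN; all score vectors are synthetic placeholders, not actual outputs of EdNet or the speech analytics engine, and the three-class label set is an assumption for the example.

```python
# Minimal sketch of decision-level fusion: concatenate the per-class score
# vectors from a facial recognizer and a speech recognizer, then classify
# the fused vector with k-Nearest Neighbors. All data here is synthetic.
import math
from collections import Counter

def fuse(face_scores, speech_scores):
    """Concatenate the two recognizers' outputs (decision-level fusion)."""
    return list(face_scores) + list(speech_scores)

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training vectors."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Synthetic fused training samples: (fused score vector, emotion label).
train = [
    (fuse([0.8, 0.1, 0.1], [0.7, 0.2, 0.1]), "happy"),
    (fuse([0.1, 0.8, 0.1], [0.2, 0.7, 0.1]), "sad"),
    (fuse([0.1, 0.1, 0.8], [0.1, 0.2, 0.7]), "angry"),
    (fuse([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]), "happy"),
]

query = fuse([0.75, 0.15, 0.1], [0.65, 0.25, 0.1])
print(knn_predict(train, query, k=3))  # → happy
```

An ANN classifier could be substituted for `knn_predict` over the same fused vectors; the fusion step itself is unchanged either way.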