Multimodal Analysis of Physiological Signals for Wearable-Based Emotion Recognition Using Machine Learning

Recent advancements in wearable technology and machine learning have led to increased research interest in using peripheral physiological signals to recognize emotion granularity. In healthcare, an algorithm that classifies emotional content can aid in developing treatment protocols for psychopathology and chronic disease. However, non-invasively acquired peripheral physiological signals are usually of low quality due to low sampling rates, so emotion recognition based on a single physiological signal shows low performance. In this research, we explore multimodal wearable-based emotion recognition using the K-EmoCon dataset. Physiological signals, together with self-reported arousal and valence records, were analyzed with a battery of data-mining algorithms including decision trees, support vector machines, k-nearest neighbors, and ensembles. Performance was evaluated using accuracy, true positive rate, and area under the receiver operating characteristic curve. Results support the multimodal approach: an ensemble bagged-tree algorithm reached 83% average accuracy, compared with 56.1% for emotion recognition based on heart rate alone. Wearables with multimodal signal-recording capabilities can thus identify emotion granularity, improving diagnostics and possibly treatment efficacy.
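The comparison described in the abstract — a single-signal baseline versus a multimodal bagged-tree ensemble — can be sketched with scikit-learn. This is an illustrative sketch only: the synthetic features (heart rate, electrodermal activity, skin temperature) and labels below are placeholders, not the actual K-EmoCon features or the paper's evaluation protocol.

```python
# Hypothetical sketch: single-signal vs. multimodal emotion classification
# with a bagged-tree ensemble (BaggingClassifier defaults to decision trees).
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
# Synthetic stand-ins for per-window physiological features (placeholders).
hr = rng.normal(70, 10, n)    # heart rate (bpm)
eda = rng.normal(5, 2, n)     # electrodermal activity (uS)
temp = rng.normal(33, 1, n)   # skin temperature (C)
# Synthetic binary arousal label depending on all three modalities.
score = 0.03 * (hr - 70) + 0.3 * (eda - 5) - 0.2 * (temp - 33)
y = (score + rng.normal(0, 0.5, n) > 0).astype(int)

X_single = hr.reshape(-1, 1)                 # heart rate only
X_multi = np.column_stack([hr, eda, temp])   # fused multimodal features

model = BaggingClassifier(n_estimators=50, random_state=0)
acc_single = cross_val_score(model, X_single, y, cv=5).mean()
acc_multi = cross_val_score(model, X_multi, y, cv=5).mean()
print(f"single-signal accuracy: {acc_single:.2f}")
print(f"multimodal accuracy:   {acc_multi:.2f}")
```

On data whose labels genuinely depend on several modalities, the fused feature set typically outperforms any single signal, mirroring the 83% vs. 56.1% gap reported above.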
Publisher
IEEE Computer Society
Issue Date
2022-09
Language
English
Citation
2022 Computing in Cardiology, CinC 2022

DOI
10.22489/CinC.2022.328
URI
http://hdl.handle.net/10203/312627
Appears in Collection
CS-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.