K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations

Cited 48 times in Web of Science · Cited 33 times in Scopus
  • Hits: 903
  • Downloads: 243
DC Field | Value | Language
dc.contributor.author | Park, Cheul Young | ko
dc.contributor.author | Cha, Narae | ko
dc.contributor.author | Kang, Soowon | ko
dc.contributor.author | Kim, Auk | ko
dc.contributor.author | Khandoker, Ahsan Habib | ko
dc.contributor.author | Hadjileontiadis, Leontios | ko
dc.contributor.author | Oh, Alice | ko
dc.contributor.author | Jeong, Yong | ko
dc.contributor.author | Lee, Uichin | ko
dc.date.accessioned | 2020-10-21T07:55:32Z | -
dc.date.available | 2020-10-21T07:55:32Z | -
dc.date.created | 2020-10-13 | -
dc.date.issued | 2020-09 | -
dc.identifier.citation | SCIENTIFIC DATA, v.7, no.1, pp.293 | -
dc.identifier.issn | 2052-4463 | -
dc.identifier.uri | http://hdl.handle.net/10203/276823 | -
dc.description.abstract | Recognizing emotions during social interactions has many potential applications with the popularization of low-cost mobile sensors, but a challenge remains with the lack of naturalistic affective interaction data. Most existing emotion datasets do not support studying idiosyncratic emotions arising in the wild as they were collected in constrained environments. Therefore, studying emotions in the context of social interactions requires a novel dataset, and K-EmoCon is such a multimodal dataset with comprehensive annotations of continuous emotions during naturalistic conversations. The dataset contains multimodal measurements, including audiovisual recordings, EEG, and peripheral physiological signals, acquired with off-the-shelf devices from 16 sessions of approximately 10-minute long paired debates on a social issue. Distinct from previous datasets, it includes emotion annotations from all three available perspectives: self, debate partner, and external observers. Raters annotated emotional displays at intervals of every 5 seconds while viewing the debate footage, in terms of arousal-valence and 18 additional categorical emotions. The resulting K-EmoCon is the first publicly available emotion dataset accommodating the multiperspective assessment of emotions during social interactions. | -
dc.language | English | -
dc.publisher | NATURE RESEARCH | -
dc.title | K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations | -
dc.type | Article | -
dc.identifier.wosid | 000571812600006 | -
dc.identifier.scopusid | 2-s2.0-85091266376 | -
dc.type.rims | ART | -
dc.citation.volume | 7 | -
dc.citation.issue | 1 | -
dc.citation.beginningpage | 293 | -
dc.citation.publicationname | SCIENTIFIC DATA | -
dc.identifier.doi | 10.1038/s41597-020-00630-y | -
dc.contributor.localauthor | Oh, Alice | -
dc.contributor.localauthor | Jeong, Yong | -
dc.contributor.localauthor | Lee, Uichin | -
dc.contributor.nonIdAuthor | Park, Cheul Young | -
dc.contributor.nonIdAuthor | Cha, Narae | -
dc.contributor.nonIdAuthor | Khandoker, Ahsan Habib | -
dc.contributor.nonIdAuthor | Hadjileontiadis, Leontios | -
dc.description.isOpenAccess | Y | -
dc.type.journalArticle | Article; Data Paper | -
dc.subject.keywordPlus | FACIAL EXPRESSIONS | -
dc.subject.keywordPlus | INTELLIGENCE | -
dc.subject.keywordPlus | DATABASE | -
dc.subject.keywordPlus | MEMORY | -
dc.subject.keywordPlus | INTENSITY | -
dc.subject.keywordPlus | SPEECH | -
dc.subject.keywordPlus | GAME | -
dc.subject.keywordPlus | BIAS | -
dc.subject.keywordPlus | GO | -
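The abstract describes the annotation scheme: each roughly 10-minute debate is rated in 5-second windows, on arousal-valence, from three perspectives (self, debate partner, external observers). The sketch below illustrates how such multi-perspective, windowed annotations might be represented and fused; the record layout, field names, and example rating values are hypothetical, not the actual K-EmoCon file schema.

```python
from statistics import mean

# A ~10-minute session annotated at 5-second intervals yields ~120 windows.
SESSION_SECONDS = 10 * 60
INTERVAL_SECONDS = 5
n_windows = SESSION_SECONDS // INTERVAL_SECONDS  # 120

# Hypothetical per-window arousal/valence ratings from the three
# perspectives the dataset provides (values here are made up).
annotations = {
    "self":     [{"arousal": 3, "valence": 4}],
    "partner":  [{"arousal": 2, "valence": 4}],
    "external": [{"arousal": 3, "valence": 3}],
}

def fuse(window_idx: int) -> dict:
    """Average the three perspectives for one 5-second window."""
    return {
        dim: mean(annotations[p][window_idx][dim] for p in annotations)
        for dim in ("arousal", "valence")
    }

print(n_windows)
print(fuse(0))
```

Averaging is only one possible fusion strategy; keeping the three perspectives separate (e.g., to study self vs. observer disagreement) is equally valid and is one of the stated motivations for the multi-perspective design.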