DC Field | Value | Language |
---|---|---|
dc.contributor.author | Koo, Inyong | ko |
dc.contributor.author | Park, Yeonju | ko |
dc.contributor.author | Jeong, Minki | ko |
dc.contributor.author | Kim, Changick | ko |
dc.date.accessioned | 2023-05-13T05:02:37Z | - |
dc.date.available | 2023-05-13T05:02:37Z | - |
dc.date.created | 2023-05-12 | - |
dc.date.issued | 2023-01 | - |
dc.identifier.citation | IEEE SENSORS JOURNAL, v.23, no.1, pp.506 - 513 | - |
dc.identifier.issn | 1530-437X | - |
dc.identifier.uri | http://hdl.handle.net/10203/306815 | - |
dc.description.abstract | Recently, the widespread use of smart devices has sparked greater interest in sensor-based applications. Human activity recognition (HAR) with body-worn sensors is a fundamental task that aims to recognize a person's physical activity from on-body sensor readings. In this article, we address the HAR problem using accelerometers and gyroscopes. While deep-learning-based feature extraction methods are advancing, recent HAR systems with multimodal sensors mostly use data-level fusion, aggregating different sensor signals into one multichannel input. However, they neglect the fact that different sensors capture different physical properties and produce distinct patterns. In this article, we propose a two-stream convolutional neural network (CNN) model that processes the accelerometer and gyroscope signals separately. The modality-specific features are fused at the feature level and jointly used for the recognition task. Furthermore, we introduce a self-supervised learning (SSL) task that pairs the accelerometer and gyroscope embeddings acquired from the same activity instance. This auxiliary objective allows the feature extractors of our model to communicate during training and exploit complementary information, achieving better representations for HAR. We name our end-to-end multimodal HAR system the Contrastive Accelerometer-Gyroscope Embedding (CAGE) model. CAGE outperforms preceding HAR models on four publicly available benchmarks. The code is available at github.com/quotation2520/CAGE4HAR. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Contrastive Accelerometer-Gyroscope Embedding Model for Human Activity Recognition | - |
dc.type | Article | - |
dc.identifier.wosid | 000971198700045 | - |
dc.identifier.scopusid | 2-s2.0-85144069999 | - |
dc.type.rims | ART | - |
dc.citation.volume | 23 | - |
dc.citation.issue | 1 | - |
dc.citation.beginningpage | 506 | - |
dc.citation.endingpage | 513 | - |
dc.citation.publicationname | IEEE SENSORS JOURNAL | - |
dc.identifier.doi | 10.1109/JSEN.2022.3222825 | - |
dc.contributor.localauthor | Kim, Changick | - |
dc.contributor.nonIdAuthor | Park, Yeonju | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Sensors | - |
dc.subject.keywordAuthor | Gyroscopes | - |
dc.subject.keywordAuthor | Feature extraction | - |
dc.subject.keywordAuthor | Accelerometers | - |
dc.subject.keywordAuthor | Human activity recognition | - |
dc.subject.keywordAuthor | Task analysis | - |
dc.subject.keywordAuthor | Intelligent sensors | - |
dc.subject.keywordAuthor | Deep learning | - |
dc.subject.keywordAuthor | human activity recognition (HAR) | - |
dc.subject.keywordAuthor | multimodal fusion | - |
dc.subject.keywordAuthor | self-supervised learning (SSL) | - |
dc.subject.keywordPlus | SMARTPHONE | - |
dc.subject.keywordPlus | FUSION | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
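The abstract describes an auxiliary SSL objective that pairs accelerometer and gyroscope embeddings from the same activity instance. As an illustration only, the sketch below shows an InfoNCE-style contrastive loss of the kind the abstract suggests, where matched cross-modal embedding pairs (same instance) are pulled together and mismatched pairs pushed apart. The function name, temperature value, and use of plain numpy are assumptions for this sketch; the authors' actual implementation is in the linked CAGE4HAR repository.

```python
import numpy as np

def contrastive_pairing_loss(acc_emb, gyr_emb, temperature=0.1):
    """InfoNCE-style loss pairing accelerometer and gyroscope embeddings.

    acc_emb, gyr_emb: (batch, dim) arrays where row i of each array comes
    from the same activity instance (a positive pair); all other rows in
    the batch serve as negatives. Hypothetical sketch, not the CAGE code.
    """
    # L2-normalize so the dot product becomes cosine similarity.
    acc = acc_emb / np.linalg.norm(acc_emb, axis=1, keepdims=True)
    gyr = gyr_emb / np.linalg.norm(gyr_emb, axis=1, keepdims=True)
    # Similarity matrix: entry (i, j) compares acc_i with gyr_j.
    logits = acc @ gyr.T / temperature
    # Matching pairs sit on the diagonal; score them against the rest
    # of the batch with a softmax cross-entropy.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss encourages the two modality-specific feature extractors to produce agreeing representations for the same activity instance, which is the "communication" between streams the abstract refers to.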