DC Field | Value | Language |
---|---|---|
dc.contributor.author | Bae, Sangmin | ko |
dc.contributor.author | Kim, Sungnyun | ko |
dc.contributor.author | Ko, Jongwoo | ko |
dc.contributor.author | Lee, Gihun | ko |
dc.contributor.author | Noh, Seungjong | ko |
dc.contributor.author | Yun, Seyoung | ko |
dc.date.accessioned | 2023-12-08T01:05:01Z | - |
dc.date.available | 2023-12-08T01:05:01Z | - |
dc.date.created | 2023-12-07 | - |
dc.date.issued | 2023-02-10 | - |
dc.identifier.citation | 37th AAAI Conference on Artificial Intelligence, AAAI 2023, pp.197 - 205 | - |
dc.identifier.uri | http://hdl.handle.net/10203/316039 | - |
dc.description.abstract | Contrastive loss has significantly improved performance in supervised classification tasks by using a multi-viewed framework that leverages augmentation and label information. The augmentation enables contrast with another view of a single image but increases training time and memory usage. To exploit the strengths of multi-view learning while avoiding the high computation cost, we introduce a multi-exit architecture that outputs multiple features of a single image in a single-viewed framework. To this end, we propose Self-Contrastive (SelfCon) learning, which self-contrasts within multiple outputs from the different levels of a single network. The multi-exit architecture efficiently replaces multi-augmented images and leverages various information from different layers of a network. We demonstrate that SelfCon learning improves the classification performance of the encoder network, and empirically analyze its advantages in terms of the single-view framework and the sub-network. Furthermore, we provide theoretical evidence of the performance increase based on the mutual information bound. For ImageNet classification on ResNet-50, SelfCon improves accuracy by +0.6% with 59% of the memory and 48% of the training time of Supervised Contrastive learning, and a simple ensemble of multi-exit outputs boosts performance up to +1.5%. Our code is available at https://github.com/raymin0223/self-contrastive-learning. | - |
dc.language | English | - |
dc.publisher | AAAI Press | - |
dc.title | Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-network | - |
dc.type | Conference | - |
dc.identifier.scopusid | 2-s2.0-85165295648 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 197 | - |
dc.citation.endingpage | 205 | - |
dc.citation.publicationname | 37th AAAI Conference on Artificial Intelligence, AAAI 2023 | - |
dc.identifier.conferencecountry | US | - |
dc.identifier.conferencelocation | Washington | - |
dc.contributor.localauthor | Yun, Seyoung | - |
dc.contributor.nonIdAuthor | Noh, Seungjong | - |
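The abstract describes contrasting features produced by different exits of a single network for one image, in place of the augmented views used by Supervised Contrastive learning. Below is a minimal NumPy sketch of that idea: a SupCon-style loss where each image's "views" are the L2-normalized features from the network's exits. The function name, temperature default, and the treatment of exits as views are assumptions for illustration, not the authors' reference implementation (which is at the linked GitHub repository).

```python
import numpy as np

def selfcon_style_loss(feats_per_exit, labels, temperature=0.07):
    """SupCon-style loss over multi-exit features of single-view images (sketch).

    feats_per_exit: list of [N, D] L2-normalized feature arrays, one per exit,
                    all computed from the SAME (un-augmented) batch of images.
    labels:         [N] integer class labels.
    """
    # Stack the exits so each image contributes one anchor per exit.
    z = np.concatenate(feats_per_exit, axis=0)            # [E*N, D]
    y = np.concatenate([labels] * len(feats_per_exit))    # [E*N]
    n = z.shape[0]

    sim = z @ z.T / temperature                           # cosine similarities
    logits = sim - sim.max(axis=1, keepdims=True)         # numerical stability

    not_self = ~np.eye(n, dtype=bool)                     # exclude self-pairs
    denom = (np.exp(logits) * not_self).sum(axis=1)       # contrastive denominator
    log_prob = logits - np.log(denom)[:, None]

    # Positives: same label (incl. the same image seen from another exit).
    pos = (y[:, None] == y[None, :]) & not_self
    per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

Because the positives for an anchor include the same image's features from the other exits, the loss pulls sub-network and backbone representations together without ever computing a second augmented view of the batch, which is the source of the memory and time savings the abstract reports.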