Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-network

DC Field | Value | Language
dc.contributor.author | Bae, Sangmin | ko
dc.contributor.author | Kim, Sungnyun | ko
dc.contributor.author | Ko, Jongwoo | ko
dc.contributor.author | Lee, Gihun | ko
dc.contributor.author | Noh, Seungjong | ko
dc.contributor.author | Yun, Seyoung | ko
dc.date.accessioned | 2023-12-08T01:05:01Z | -
dc.date.available | 2023-12-08T01:05:01Z | -
dc.date.created | 2023-12-07 | -
dc.date.issued | 2023-02-10 | -
dc.identifier.citation | 37th AAAI Conference on Artificial Intelligence, AAAI 2023, pp. 197-205 | -
dc.identifier.uri | http://hdl.handle.net/10203/316039 | -
dc.description.abstract | Contrastive loss has significantly improved performance in supervised classification tasks by using a multi-viewed framework that leverages augmentation and label information. The augmentation enables contrast with another view of a single image but increases training time and memory usage. To exploit the strength of multi-views while avoiding the high computation cost, we introduce a multi-exit architecture that outputs multiple features of a single image in a single-viewed framework. To this end, we propose Self-Contrastive (SelfCon) learning, which self-contrasts within multiple outputs from the different levels of a single network. The multi-exit architecture efficiently replaces multi-augmented images and leverages various information from different layers of a network. We demonstrate that SelfCon learning improves the classification performance of the encoder network, and empirically analyze its advantages in terms of the single view and the sub-network. Furthermore, we provide theoretical evidence of the performance increase based on the mutual information bound. For ImageNet classification on ResNet-50, SelfCon improves accuracy by +0.6% with 59% of the memory and 48% of the time of Supervised Contrastive learning, and a simple ensemble of multi-exit outputs boosts performance by up to +1.5%. Our code is available at https://github.com/raymin0223/self-contrastive-learning. | -
dc.language | English | -
dc.publisher | AAAI Press | -
dc.title | Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-network | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85165295648 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 197 | -
dc.citation.endingpage | 205 | -
dc.citation.publicationname | 37th AAAI Conference on Artificial Intelligence, AAAI 2023 | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | Washington | -
dc.contributor.localauthor | Yun, Seyoung | -
dc.contributor.nonIdAuthor | Noh, Seungjong | -
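The abstract describes a supervised contrastive objective in which the multiple "views" of an image come from different exit levels of a single network rather than from extra augmentations. The following is a minimal NumPy sketch of such a SupCon-style loss over multi-exit features; the function name `selfcon_loss`, the tensor shapes, and the temperature default are illustrative assumptions, not the authors' implementation (their actual code is at the GitHub URL above).

```python
import numpy as np

def selfcon_loss(features, labels, temperature=0.07):
    """SupCon-style loss where the F 'views' per image are exit features
    from different depths of one network (hypothetical shapes).

    features: (N, F, D) array — F exit features per image.
    labels:   (N,) integer class labels.
    """
    N, F, D = features.shape
    feats = features.reshape(N * F, D)
    # L2-normalize so similarities are cosine similarities
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    labs = np.repeat(labels, F)

    sim = feats @ feats.T / temperature
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability

    # exclude each feature's similarity with itself
    logits_mask = 1.0 - np.eye(N * F)
    # positives: same class (other exits of the same image included), not self
    pos_mask = (labs[:, None] == labs[None, :]).astype(float) * logits_mask

    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1) / np.maximum(
        pos_mask.sum(axis=1), 1.0
    )
    return -mean_log_prob_pos.mean()
```

With F >= 2 exits, every feature has at least F-1 positives (the other exits of the same image), so the objective is defined from a single augmented view per image — the efficiency point the abstract makes.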
Appears in Collection
AI-Conference Papers (학술대회논문)
Files in This Item
There are no files associated with this item.
