P-PseudoLabel: Enhanced Pseudo-Labeling Framework With Network Pruning in Semi-Supervised Learning

Cited 2 times in Web of Science · Cited 0 times in Scopus
  • Hit: 237
  • Download: 0
DC Field: Value (Language)
dc.contributor.author: Ham, Gyeongdo (ko)
dc.contributor.author: Cho, Yucheol (ko)
dc.contributor.author: Lee, Jae-Hyeok (ko)
dc.contributor.author: Kim, Dae-Shik (ko)
dc.date.accessioned: 2022-11-23T08:01:31Z
dc.date.available: 2022-11-23T08:01:31Z
dc.date.created: 2022-11-22
dc.date.issued: 2022-10
dc.identifier.citation: IEEE ACCESS, v.10, pp.115652 - 115662
dc.identifier.issn: 2169-3536
dc.identifier.uri: http://hdl.handle.net/10203/300591
dc.description.abstract: Semi-supervised learning (SSL) methods for classification tasks exhibit significant performance gains because they combine regularization and pseudo-labeling methods. General pseudo-labeling methods depend only on the model's prediction when assigning pseudo-labels, but this approach often generates incorrect pseudo-labels, either because the network is biased toward easy classes or because the training set contains confusing samples, which further degrades model performance. To address this issue, we propose a novel pseudo-labeling framework that dramatically reduces the ambiguity of pseudo-labels for confusing samples in SSL. Our method, Pruning for Pseudo-Label (P-PseudoLabel), uses the Easy-to-Forget (ETF) Sample Finder, which compares the outputs of the model and its pruned counterpart to identify confusing samples. Next, we perform negative learning on the confusing samples to decrease the risk of providing incorrect information and to improve performance. Our method achieves better performance than recent state-of-the-art SSL methods on the CIFAR-10, CIFAR-100, and Mini-ImageNet datasets, and is on par with the state of the art on SVHN and STL-10. (A rough code sketch of this pipeline appears after the record below.)
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: P-PseudoLabel: Enhanced Pseudo-Labeling Framework With Network Pruning in Semi-Supervised Learning
dc.type: Article
dc.identifier.wosid: 000880595800001
dc.identifier.scopusid: 2-s2.0-85141484565
dc.type.rims: ART
dc.citation.volume: 10
dc.citation.beginningpage: 115652
dc.citation.endingpage: 115662
dc.citation.publicationname: IEEE ACCESS
dc.identifier.doi: 10.1109/ACCESS.2022.3218161
dc.contributor.localauthor: Kim, Dae-Shik
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Predictive models
dc.subject.keywordAuthor: Computational modeling
dc.subject.keywordAuthor: Data models
dc.subject.keywordAuthor: Semi-supervised learning
dc.subject.keywordAuthor: Labeling
dc.subject.keywordAuthor: Task analysis
dc.subject.keywordAuthor: Supervised learning
dc.subject.keywordAuthor: Consistency regularization
dc.subject.keywordAuthor: network pruning
dc.subject.keywordAuthor: negative learning
dc.subject.keywordAuthor: pseudo labeling
dc.subject.keywordAuthor: semi-supervised learning
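
The abstract describes two mechanisms: an Easy-to-Forget Sample Finder that flags a sample as confusing when the model and a pruned copy of the model disagree on its predicted class, and negative learning applied to those confusing samples. Below is a minimal PyTorch sketch of both ideas, based only on the abstract rather than the authors' code; the function names (make_pruned_copy, find_confusing_samples, negative_learning_loss), the L1 pruning criterion, and the 30% pruning ratio are all illustrative assumptions.

    # Hypothetical sketch of the P-PseudoLabel pipeline as described in the
    # abstract; not the paper's actual implementation.
    import copy

    import torch
    import torch.nn.functional as F
    import torch.nn.utils.prune as prune

    def make_pruned_copy(model: torch.nn.Module, amount: float = 0.3) -> torch.nn.Module:
        """Deep-copy `model` and apply L1-unstructured weight pruning to its
        Linear/Conv2d layers (criterion and ratio are assumptions)."""
        pruned = copy.deepcopy(model)
        for module in pruned.modules():
            if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
                prune.l1_unstructured(module, name="weight", amount=amount)
        return pruned

    @torch.no_grad()
    def find_confusing_samples(model, pruned_model, x_unlabeled):
        """Flag samples whose predicted class flips under pruning; such
        'easy-to-forget' samples are treated as confusing."""
        preds = model(x_unlabeled).argmax(dim=1)
        preds_pruned = pruned_model(x_unlabeled).argmax(dim=1)
        return preds != preds_pruned, preds  # (confusing mask, predictions)

    def negative_learning_loss(logits, complementary_labels):
        """Negative learning: penalize confidence in a class the sample is
        believed NOT to belong to, via -log(1 - p_y)."""
        probs = F.softmax(logits, dim=1)
        p_y = probs.gather(1, complementary_labels.unsqueeze(1)).squeeze(1)
        return -torch.log(1.0 - p_y + 1e-8).mean()

In a training step, the non-confusing samples would receive ordinary pseudo-labels, while each confusing sample's (unreliable) predicted class would serve as a complementary label in negative_learning_loss, pushing probability mass away from it rather than toward a possibly wrong pseudo-label.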
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.