P-PseudoLabel: Enhanced Pseudo-Labeling Framework With Network Pruning in Semi-Supervised Learning

Cited 2 times in Web of Science; cited 0 times in Scopus
Semi-supervised learning (SSL) methods for classification tasks achieve significant performance gains by combining regularization and pseudo-labeling. General pseudo-labeling methods rely only on the model's own predictions when assigning pseudo-labels, but this often produces incorrect pseudo-labels, because the network is biased toward easy classes or because the training set contains confusing samples, which further degrades model performance. To address this issue, we propose a novel pseudo-labeling framework that dramatically reduces the ambiguity of pseudo-labels for confusing samples in SSL. Our method, Pruning for Pseudo-Label (P-PseudoLabel), uses the Easy-to-Forget (ETF) Sample Finder, which compares the outputs of the model and its pruned counterpart to identify confusing samples. We then perform negative learning on the confusing samples to reduce the risk of propagating incorrect information and to improve performance. Our method outperforms recent state-of-the-art SSL methods on the CIFAR-10, CIFAR-100, and Mini-ImageNet datasets, and is on par with the state of the art on SVHN and STL-10.
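The abstract's two core steps can be illustrated with a minimal sketch: flag "confusing" samples by comparing the predictions of the full model and a pruned copy, then apply a negative-learning loss on a complementary label. This is an illustrative reading of the abstract, not the paper's implementation; the disagreement criterion and the choice of complementary label (here, the full model's least-likely class) are hypothetical simplifications.

```python
import math

def find_confusing_samples(probs_full, probs_pruned):
    """Sketch of the ETF Sample Finder idea: flag samples whose predicted
    class differs between the full model and its pruned copy. The paper's
    exact disagreement criterion may be more involved."""
    return [pf.index(max(pf)) != pp.index(max(pp))
            for pf, pp in zip(probs_full, probs_pruned)]

def negative_learning_loss(probs, complementary_labels):
    """Negative learning on a complementary label k ("not class k"):
    average of -log(1 - p_k), which pushes probability mass away from k."""
    eps = 1e-12
    return sum(-math.log(1.0 - p[k] + eps)
               for p, k in zip(probs, complementary_labels)) / len(probs)

# Toy softmax outputs for 3 samples over 3 classes.
probs_full = [[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.4, 0.3, 0.3]]
probs_pruned = [[0.6, 0.3, 0.1],   # same predicted class -> not confusing
                [0.2, 0.3, 0.5],   # prediction flips     -> confusing
                [0.3, 0.4, 0.3]]   # prediction flips     -> confusing

confusing = find_confusing_samples(probs_full, probs_pruned)
print(confusing)  # [False, True, True]

# For confusing samples, pick a hypothetical complementary label (the full
# model's least-likely class) and apply the negative-learning loss.
conf_probs = [p for p, c in zip(probs_full, confusing) if c]
comp_labels = [p.index(min(p)) for p in conf_probs]
print(round(negative_learning_loss(conf_probs, comp_labels), 3))  # 0.231
```

The negative-learning term only says which class a sample is *not*, which is weaker but safer supervision than a possibly wrong positive pseudo-label.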
Publisher
IEEE (Institute of Electrical and Electronics Engineers Inc.)
Issue Date
2022-10
Language
English
Article Type
Article
Citation

IEEE Access, v.10, pp. 115652-115662

ISSN
2169-3536
DOI
10.1109/ACCESS.2022.3218161
URI
http://hdl.handle.net/10203/300591
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.