Naive semi-supervised deep learning using pseudo-label

Cited 19 times in Web of Science; cited 21 times in Scopus
To facilitate the use of large-scale unlabeled data, we propose a simple and effective method for semi-supervised deep learning that improves the performance of a deep learning model. First, we train a classifier and use its outputs on unlabeled data as pseudo-labels. Then, we pre-train the deep learning model with the pseudo-labeled data and fine-tune it with the labeled data. We call this repetition of pseudo-labeling, pre-training, and fine-tuning naive semi-supervised deep learning. We apply the method to the MNIST, CIFAR-10, and IMDB data sets, each of which we divide into a small labeled set and a large unlabeled set. Our method achieves significant performance improvements over the same deep learning model trained without pre-training. We further analyze the factors that affect the method to provide a better understanding of how to use naive semi-supervised deep learning in practical applications.
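The loop described in the abstract (pseudo-label, pre-train, fine-tune, repeat) can be sketched with any base classifier. The following is a minimal illustration, not the paper's implementation: it stands in for the deep learning model with a nearest-centroid classifier, and approximates "pre-train then fine-tune" by first refitting on the pseudo-labeled data and then averaging the result with centroids fit on the labeled data. All function names and the averaging weight are hypothetical choices for this sketch.

```python
import numpy as np

def train_centroids(X, y, n_classes):
    # Fit a nearest-centroid classifier: one mean vector per class.
    # Assumes every class appears at least once in y.
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(centroids, X):
    # Assign each point to its nearest class centroid.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def naive_ssl(X_lab, y_lab, X_unlab, n_classes, rounds=3):
    # Repeat: (1) pseudo-label the unlabeled data with the current model,
    # (2) "pre-train" by refitting on the pseudo-labeled data,
    # (3) "fine-tune" by pulling the model back toward the labeled data.
    centroids = train_centroids(X_lab, y_lab, n_classes)
    lab_centroids = train_centroids(X_lab, y_lab, n_classes)
    for _ in range(rounds):
        pseudo = predict(centroids, X_unlab)                      # step 1
        centroids = train_centroids(X_unlab, pseudo, n_classes)   # step 2
        centroids = 0.5 * centroids + 0.5 * lab_centroids         # step 3
    return centroids

# Toy data: two well-separated Gaussian blobs, only 5 labels per class.
rng = np.random.default_rng(0)
X0 = rng.normal(-2.0, 1.0, (100, 2))
X1 = rng.normal(+2.0, 1.0, (100, 2))
X_lab = np.vstack([X0[:5], X1[:5]])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([X0[5:], X1[5:]])

model = naive_ssl(X_lab, y_lab, X_unlab, n_classes=2)
y_all = np.array([0] * 100 + [1] * 100)
accuracy = (predict(model, np.vstack([X0, X1])) == y_all).mean()
```

In the paper's setting, steps 2 and 3 would be full gradient-based pre-training and fine-tuning of a deep network rather than centroid refits; the sketch only shows the control flow of the repetition.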
Publisher
SPRINGER
Issue Date
2019-09
Language
English
Article Type
Article
Citation

PEER-TO-PEER NETWORKING AND APPLICATIONS, v.12, no.5, pp.1358 - 1368

ISSN
1936-6442
DOI
10.1007/s12083-018-0702-9
URI
http://hdl.handle.net/10203/267705
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.