Confident Multiple Choice Learning

Ensemble methods are arguably the most trustworthy techniques for boosting the performance of machine learning models. Popular independent ensembles (IE) relying on a naïve averaging/voting scheme have been the typical choice for most applications involving deep neural networks, but they do not consider advanced collaboration among ensemble models. In this paper, we propose a new ensemble method specialized for deep neural networks, called confident multiple choice learning (CMCL): it is a variant of multiple choice learning (MCL) that addresses its overconfidence issue. In particular, the major components of CMCL beyond the original MCL scheme are (i) a new loss, i.e., the confident oracle loss, (ii) a new architecture, i.e., feature sharing, and (iii) a new training method, i.e., stochastic labeling. We demonstrate the effect of CMCL via experiments on image classification on CIFAR and SVHN, and on foreground-background segmentation on iCoseg. In particular, CMCL using 5 residual networks provides 14.05% and 6.60% relative reductions in the top-1 error rate over the corresponding IE scheme for the classification task on CIFAR and SVHN, respectively.
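For readers skimming the abstract, the sketch below illustrates what the "confident oracle loss" could look like in code: on each example, the best-performing (oracle) ensemble member pays the ordinary task loss, while the other members are pushed toward a uniform predictive distribution so they stay unconfident outside their specialty. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation; the function name confident_oracle_loss, the default beta, and the KL-to-uniform form of the confidence penalty are illustrative assumptions inferred from the abstract.

import torch
import torch.nn.functional as F

def confident_oracle_loss(logits_list, targets, beta=0.75):
    # logits_list: one (batch, num_classes) tensor per ensemble member.
    # Per-member cross-entropy, stacked to shape (num_members, batch).
    ce = torch.stack([F.cross_entropy(l, targets, reduction="none")
                      for l in logits_list])
    # Oracle assignment: the member with the lowest loss on each example.
    oracle = ce.argmin(dim=0)                                  # (batch,)
    log_probs = torch.stack([F.log_softmax(l, dim=1)
                             for l in logits_list])            # (M, batch, K)
    # KL(uniform || p_m) up to an additive constant: -mean_k log p_m(k | x).
    kl_to_uniform = -log_probs.mean(dim=2)                     # (M, batch)
    is_oracle = F.one_hot(oracle, num_classes=len(logits_list)).t().bool()
    # Oracle member pays the task loss; the rest pay the confidence penalty.
    loss = torch.where(is_oracle, ce, beta * kl_to_uniform)
    return loss.sum(dim=0).mean()

# Usage (shapes only): 5 members, batch of 8, 10 classes.
logits = [torch.randn(8, 10) for _ in range(5)]
targets = torch.randint(0, 10, (8,))
print(confident_oracle_loss(logits, targets))

Here beta trades off task accuracy against the confidence penalty on non-oracle members; feature sharing and stochastic labeling, the abstract's other two components, would sit in the model architecture and the training loop rather than in this loss.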
Publisher
International Machine Learning Society (IMLS)
Issue Date
2017-08-08
Language
English
Citation
34th International Conference on Machine Learning
ISSN
2640-3498
DOI
10.48550/arXiv.1706.03475
URI
http://hdl.handle.net/10203/227632
Appears in Collection
AI-Conference Papers (Conference Papers); EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
