Adjusting Decision Boundary for Class Imbalanced Learning

Cited 26 times in Web of Science · Cited 14 times in Scopus
The training of deep neural networks heavily depends on the data distribution. In particular, these networks easily suffer from class imbalance: a trained network recognizes frequent classes better than infrequent ones. To resolve this problem, existing approaches typically propose novel loss functions to obtain better feature embeddings. In this paper, we argue that drawing a better decision boundary is as important as learning better features. Based on our observations, we investigate how class imbalance affects the decision boundary and deteriorates performance, and we also examine the feature-distributional discrepancy between training and test time. Accordingly, we propose a novel yet simple method for class-imbalanced learning. Despite its simplicity, our method exhibits outstanding performance: the experimental results show that a network can be significantly improved by scaling its classifier weight vectors, without any additional training.
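To make the core idea concrete, the sketch below shows one way to rescale the weight vectors of a trained classifier head after imbalanced training. This is a minimal illustration of post-hoc weight scaling as described in the abstract, assuming a PyTorch model whose final layer is an `nn.Linear`; the specific rule used here (normalizing each class's weight vector to unit L2 norm) is an illustrative assumption, not necessarily the exact scaling scheme proposed in the paper.

```python
import torch
import torch.nn as nn

def rescale_classifier_weights(classifier: nn.Linear) -> None:
    """Post-hoc decision-boundary adjustment by weight-vector scaling.

    Normalizes each class's weight vector in the final linear layer to
    unit L2 norm, so no class is favored at test time merely because its
    weight vector grew larger during training on imbalanced data.
    Illustrative sketch only; the paper's precise scaling rule may differ.
    """
    with torch.no_grad():
        w = classifier.weight                     # shape: (num_classes, feat_dim)
        norms = w.norm(p=2, dim=1, keepdim=True)  # per-class L2 norms
        classifier.weight.copy_(w / norms.clamp_min(1e-12))

# Usage (hypothetical model): after training on an imbalanced dataset,
# rescale the head before evaluation; no further training is required.
#   model = ...                           # trained network ending in `model.fc`
#   rescale_classifier_weights(model.fc)
```

Because only the final layer's weights are modified, this adjustment is cheap and leaves the learned feature extractor untouched, which matches the abstract's claim that the improvement comes without additional training.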
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2020-04
Language
English
Article Type
Article
Citation

IEEE ACCESS, v.8, pp. 81674-81685

ISSN
2169-3536
DOI
10.1109/ACCESS.2020.2991231
URI
http://hdl.handle.net/10203/275814
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
115588.pdf (5.93 MB)