Learning Not to Learn: Training Deep Neural Networks with Biased Data

Cited 155 times in Web of Science · Cited 117 times in Scopus
We propose a novel regularization algorithm for training deep neural networks when the training data are severely biased. Because a neural network efficiently learns the data distribution, it is likely to pick up the bias information and use it to categorize the input data, which leads to poor performance at test time if the bias is in fact irrelevant to the categorization. In this paper, we formulate a regularization loss based on the mutual information between the feature embedding and the bias. Building on the idea of minimizing this mutual information, we propose an iterative algorithm to unlearn the bias information. We employ an additional network to predict the bias distribution and train it adversarially against the feature embedding network. At the end of training, the bias prediction network can no longer predict the bias, not because it is poorly trained, but because the feature embedding network has successfully unlearned the bias information. Quantitative and qualitative experimental results demonstrate that our algorithm effectively removes the bias information from the feature embedding.
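
The adversarial part of this scheme can be illustrated with a minimal PyTorch-style sketch: a bias prediction head tries to recover the bias from the embedding, while a gradient-reversal layer pushes the feature network to discard exactly that information. All module names, layer sizes, and hyperparameters below are hypothetical, and the paper's full objective additionally includes the mutual-information-based regularization term; this is a sketch of the general technique under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# Hypothetical networks: f (feature embedding), a label classifier, and a bias predictor.
feature_net = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
class_head = nn.Linear(128, 10)   # predicts the target label
bias_head = nn.Linear(128, 2)     # predicts the bias variable

ce = nn.CrossEntropyLoss()
opt = torch.optim.Adam(
    list(feature_net.parameters()) + list(class_head.parameters())
    + list(bias_head.parameters()), lr=1e-3)

def train_step(x, y, bias, lam=1.0):
    feat = feature_net(x)
    # Main task: predict the label from the embedding.
    cls_loss = ce(class_head(feat), y)
    # Adversary: predict the bias from the embedding. The reversed gradient
    # trains bias_head to succeed while pushing feature_net to unlearn the
    # bias information, so at convergence the bias is no longer predictable.
    bias_loss = ce(bias_head(GradReverse.apply(feat, lam)), bias)
    loss = cls_loss + bias_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In a colored-digit benchmark of the kind the paper evaluates on, `y` would be the digit label and `bias` the color label, so that a successfully debiased embedding classifies digits without relying on color.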
Publisher
IEEE Computer Society, the Computer Vision Foundation (CVF)
Issue Date
2019-06-20
Language
English
Citation
CVPR 2019, IEEE Conference on Computer Vision and Pattern Recognition, pp. 9004-9012
DOI
10.1109/CVPR.2019.00922
URI
http://hdl.handle.net/10203/268689
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.