Revisiting Batch Normalization for Improving Corruption Robustness

Cited 24 times in Web of Science; cited 0 times in Scopus
The performance of DNNs trained on clean images has been shown to decrease when the test images contain common corruptions. In this work, we interpret corruption robustness as a domain shift and propose to rectify batch normalization (BN) statistics to improve model robustness. This is motivated by perceiving the shift from the clean domain to the corruption domain as a style shift that is represented by the BN statistics. We find that simply estimating and adapting the BN statistics on a few (32, for instance) representative samples, without retraining the model, improves corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures. For example, on ImageNet-C, statistics adaptation improves the top-1 accuracy of ResNet-50 from 39.2% to 48.7%. Moreover, we find that this technique can further improve state-of-the-art robust models from 58.1% to 63.3%.
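
The adaptation described in the abstract amounts to re-estimating each BN layer's running mean and variance from a small batch of corrupted samples while leaving all learned weights untouched. Below is a minimal sketch of that idea in PyTorch; it is not the authors' released code, and the function name adapt_bn_statistics, the pretrained torchvision ResNet-50, and the random placeholder inputs are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

def adapt_bn_statistics(model: nn.Module, corrupted_batch: torch.Tensor) -> nn.Module:
    """Re-estimate BN running mean/var from a small corrupted batch (e.g. 32 samples)."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()   # discard the clean-domain statistics
            m.momentum = None         # use a cumulative average instead of an EMA
    model.train()                     # BN layers update running stats only in train mode
    with torch.no_grad():             # no gradients: the weights are never retrained
        model(corrupted_batch)
    model.eval()                      # freeze the adapted statistics for inference
    return model

# Usage sketch: adapt a pretrained ResNet-50 with 32 corrupted images, then evaluate as usual.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
corrupted_batch = torch.randn(32, 3, 224, 224)   # placeholder for real corrupted images
model = adapt_bn_statistics(model, corrupted_batch)

Setting momentum to None makes a single forward pass replace the clean-domain statistics entirely with those of the adaptation batch, rather than blending them through an exponential moving average.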
Publisher
IEEE Computer Society
Issue Date
2021-01
Language
English
Citation
IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 494-503
ISSN
2472-6737
DOI
10.1109/WACV48630.2021.00054
URI
http://hdl.handle.net/10203/288441
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.