Consistency Regularization for Certified Robustness of Smoothed Classifiers

A recent technique of randomized smoothing has shown that the worst-case (adversarial) ℓ2-robustness can be transformed into the average-case Gaussian-robustness by "smoothing" a classifier, i.e., by considering the averaged prediction over Gaussian noise. In this paradigm, one should rethink the notion of adversarial robustness in terms of the generalization ability of a classifier under noisy observations. We find that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise. This relationship allows us to design a robust training objective without approximating a non-existent smoothed classifier, e.g., via soft smoothing. Our experiments on various deep neural network architectures and datasets show that the "certified" ℓ2-robustness can be dramatically improved with the proposed regularization, even achieving better or comparable results to state-of-the-art approaches at a significantly lower training cost and with fewer hyperparameters.
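As a rough illustration of the idea described in the abstract (not the authors' exact objective), the sketch below shows what "regularizing the prediction consistency over noise" can look like in PyTorch: a classifier is trained on several Gaussian-perturbed copies of each input, with an extra penalty that pulls the per-copy predictions toward their mean. The names `model`, `sigma`, `num_noise`, and `lbd` are hypothetical knobs introduced here for the example.

```python
# Minimal sketch (an assumption, not the paper's exact loss): cross-entropy on
# Gaussian-perturbed inputs plus a consistency term that penalizes disagreement
# between the predictions on different noisy copies of the same input.
import torch
import torch.nn.functional as F

def consistency_regularized_loss(model, x, y, sigma=0.25, num_noise=2, lbd=10.0):
    """x: (B, ...) inputs, y: (B,) labels; sigma/num_noise/lbd are hypothetical knobs."""
    # Draw several noisy copies of the batch: x + N(0, sigma^2 I).
    noisy = [x + sigma * torch.randn_like(x) for _ in range(num_noise)]
    logits = [model(xn) for xn in noisy]                   # per-copy logits
    log_probs = [F.log_softmax(z, dim=1) for z in logits]  # per-copy log-probabilities

    # Standard classification term: cross-entropy averaged over the noisy copies.
    ce = sum(F.cross_entropy(z, y) for z in logits) / num_noise

    # Consistency term: KL divergence of each copy's prediction from the mean prediction.
    mean_prob = torch.stack([lp.exp() for lp in log_probs]).mean(dim=0)
    consistency = sum(
        F.kl_div(lp, mean_prob, reduction="batchmean") for lp in log_probs
    ) / num_noise

    return ce + lbd * consistency
```

In training, a loss of this form would replace the plain cross-entropy; the certification step itself would still use the usual randomized-smoothing procedure (Monte-Carlo estimation of the smoothed classifier's prediction), which the sketch does not cover.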
Publisher
NeurIPS committee
Issue Date
2020-12-07
Language
English
Citation

34th Conference on Neural Information Processing Systems (NeurIPS) 2020

URI
http://hdl.handle.net/10203/278234
Appears in Collection
AI-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
