Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck

Abstract
Adversarial examples, generated by carefully crafted perturbations, have attracted considerable attention in the research community. Recent works have argued that the existence of robust and non-robust features is a primary cause of adversarial examples, and have investigated their internal interactions in the feature space. In this paper, we propose a way of explicitly distilling a feature representation into robust and non-robust features using the Information Bottleneck. Specifically, we inject noise variation into each feature unit and evaluate the information flow through the feature representation, dichotomizing feature units as either robust or non-robust based on the noise variation magnitude. Through comprehensive experiments, we demonstrate that the distilled features are highly correlated with adversarial predictions and that they carry human-perceptible semantic information by themselves. Furthermore, we present an attack mechanism that intensifies the gradient of the non-robust features directly related to the model prediction, and we validate its effectiveness at breaking model robustness.
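The noise-injection idea the abstract outlines can be sketched roughly as follows. This is a minimal illustration, not the authors' released code: it assumes pre-extracted activations at one layer and a frozen classifier head, and every name and hyperparameter here (distill_units, beta, the 0.5 threshold) is hypothetical. A per-unit gate lambda is trained so that prediction accuracy is kept while a KL penalty suppresses information flow; units whose gates stay open are treated as robust, the rest as non-robust.

```python
import torch
import torch.nn.functional as F

def distill_units(features, labels, classifier_head, steps=200, beta=10.0):
    """Split feature units into robust / non-robust via a noise-injection gate.

    features: (N, D) activations at a chosen layer (detached from the model).
    classifier_head: frozen module mapping D-dim features to class logits.
    """
    mu = features.mean(dim=0)
    std = features.std(dim=0) + 1e-6
    lam_logit = torch.zeros(features.size(1), requires_grad=True)  # per-unit gate
    opt = torch.optim.Adam([lam_logit], lr=0.1)
    for _ in range(steps):
        lam = torch.sigmoid(lam_logit)                 # gate in (0, 1), one per unit
        eps = torch.randn_like(features) * std + mu    # unit-wise Gaussian noise
        z = lam * features + (1 - lam) * eps           # noise-injected representation
        # Information-flow penalty: per-unit KL between the noised distribution
        # N(lam*f + (1-lam)*mu, (1-lam)^2 * std^2) and the prior N(mu, std^2).
        var_ratio = (1 - lam) ** 2
        kl = 0.5 * (var_ratio + (lam * (features - mu) / std) ** 2
                    - 1 - torch.log(var_ratio + 1e-9)).mean()
        loss = F.cross_entropy(classifier_head(z), labels) + beta * kl
        opt.zero_grad()
        loss.backward()
        opt.step()
    lam = torch.sigmoid(lam_logit).detach()
    return lam > 0.5, lam   # open gates = units that must pass information
```

Thresholding the learned gates at 0.5 is just one simple way to dichotomize units; the paper's actual criterion and objective may differ, so this should be read as a sketch of the general noise-variation mechanism, not the method itself.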
Publisher
Neural Information Processing Systems
Issue Date
2021-12-06
Language
English
Citation
35th Conference on Neural Information Processing Systems (NeurIPS)
ISSN
1049-5258
DOI
10.48550/arXiv.2204.02735
URI
http://hdl.handle.net/10203/289062
Appears in Collection
EE-Conference Papers (Conference Papers)