Backpropagating Smoothly Improves Transferability of Adversarial Examples

Probably the most popular yet controversial explanation for adversarial examples is the hypothesis about the linear nature of modern DNNs. Initially supported by the FGSM attack, this hypothesis has been challenged by prior works from various perspectives. Further aligning with the linearity hypothesis, a recent work shows that backpropagating linearly (LinBP) improves the transferability of adversarial examples. One widely recognized issue of the commonly used ReLU activation function is that its derivative is discontinuous. We conjecture that LinBP improves transferability mainly because it provides a continuous approximation of the ReLU derivative in the backward pass; in other words, backpropagating continuously might be sufficient for improving transferability. To test this, we propose ConBP, which adopts a smooth yet non-linear gradient approximation. ConBP consistently achieves performance equivalent or superior to the recently proposed LinBP, suggesting that the core source of improved transferability is the smoothness of the approximate derivative, regardless of whether it is linear. Our work highlights that any new evidence supporting or refuting the linearity hypothesis deserves a closer look. As a byproduct, our investigation also yields a new variant of backpropagation for improving the transferability of adversarial examples.
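
The mechanism described here can be illustrated with a custom backward pass: the forward computation keeps the exact ReLU, while the backward pass swaps ReLU's 0/1 step derivative for a surrogate. Below is a minimal PyTorch sketch; the surrogate shown, a sigmoid of the scaled pre-activation (the derivative of a scaled softplus), is an illustrative smooth non-linear choice and not necessarily the paper's exact ConBP function, while LinBP would correspond to backpropagating the constant 1 instead.

import torch

class SmoothBackwardReLU(torch.autograd.Function):
    # Forward is the exact ReLU, so model predictions are unchanged.
    # Backward replaces ReLU's discontinuous 0/1 step derivative with
    # a smooth surrogate, sigmoid(beta * x); LinBP would return the
    # incoming gradient unchanged (derivative = 1).

    @staticmethod
    def forward(ctx, x, beta=10.0):
        ctx.save_for_backward(x)
        ctx.beta = beta
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        surrogate = torch.sigmoid(ctx.beta * x)  # smooth, non-linear
        return grad_output * surrogate, None     # no gradient for beta

# Example: forward values match ReLU, but the gradient flows through
# the smooth surrogate rather than the hard step.
x = torch.randn(4, requires_grad=True)
SmoothBackwardReLU.apply(x).sum().backward()
print(x.grad)  # equals sigmoid(10 * x)

Crafting transferable adversarial examples would then amount to computing input gradients on a surrogate model whose ReLUs use such a backward pass during the attack, leaving the forward predictions untouched.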
Publisher
Computer Vision Foundation (CVF), IEEE Computer Society
Issue Date
2021-06-19
Language
English
Citation
Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (AML-CV)
URI
http://hdl.handle.net/10203/289128
Appears in Collection
EE-Conference Papers (Conference Papers)