Backpropagating Smoothly Improves Transferability of Adversarial Examples

DC Field | Value | Language
dc.contributor.author | Zhang, Chaoning | ko
dc.contributor.author | Benz, Philipp | ko
dc.contributor.author | Cho, Gyusang | ko
dc.contributor.author | Karjauv, Adil | ko
dc.contributor.author | Ham, Soomin | ko
dc.contributor.author | Youn, Chan-Hyun | ko
dc.contributor.author | Kweon, In-So | ko
dc.date.accessioned | 2021-11-10T06:50:45Z | -
dc.date.available | 2021-11-10T06:50:45Z | -
dc.date.created | 2021-11-09 | -
dc.date.issued | 2021-06-19 | -
dc.identifier.citation | Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (AML-CV) | -
dc.identifier.uri | http://hdl.handle.net/10203/289128 | -
dc.description.abstract | Probably the most popular yet controversial explanation for adversarial examples is the hypothesis on the linear nature of modern DNNs. Initially supported by the FGSM attack, this hypothesis has since been challenged by prior works from various perspectives. Further aligning with the linearity hypothesis, a recent work shows that backpropagating linearly (LinBP) improves the transferability of adversarial examples. One widely recognized issue of the commonly used ReLU activation function is that its derivative is non-continuous. We conjecture that LinBP improves transferability mainly because it provides a continuous approximation of the ReLU in the backward pass; in other words, backpropagating continuously might be sufficient for improving transferability. To this end, we propose ConBP, which adopts a smooth yet non-linear gradient approximation. ConBP consistently achieves performance equivalent or superior to the recently proposed LinBP, suggesting that the core source of improved transferability is the approximated derivative being smooth, regardless of whether it is linear. Our work highlights that any new evidence for either supporting or refuting the linearity hypothesis deserves a closer look. As a byproduct, our investigation also yields a new backpropagation variant for improving the transferability of adversarial examples. (A minimal sketch of the smooth backward pass is given after this metadata listing.) | -
dc.language | English | -
dc.publisher | Computer Vision Foundation (CVF), IEEE Computer Society | -
dc.title | Backpropagating Smoothly Improves Transferability of Adversarial Examples | -
dc.type | Conference | -
dc.type.rims | CONF | -
dc.citation.publicationname | Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (AML-CV) | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | Online | -
dc.contributor.localauthor | Youn, Chan-Hyun | -
dc.contributor.localauthor | Kweon, In-So | -
dc.contributor.nonIdAuthor | Karjauv, Adil | -
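
The abstract above argues that what drives the improved transferability is replacing ReLU's non-continuous derivative with a smooth approximation in the backward pass only. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' released ConBP implementation: the names SmoothBackwardReLU, SmoothReLU, and replace_relu, and the choice of sigmoid(beta * x) (the softplus derivative) as the smooth surrogate derivative, are assumptions for illustration.

```python
import torch
import torch.nn as nn


class SmoothBackwardReLU(torch.autograd.Function):
    """Standard ReLU forward; smooth, non-linear surrogate derivative backward.

    The surrogate sigmoid(beta * x) is an assumed choice for illustration; the
    abstract does not specify ConBP's exact smooth approximation.
    """

    beta = 10.0  # assumed smoothing strength; larger -> closer to the true 0/1 ReLU derivative

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.relu(x)  # forward pass unchanged, so model predictions stay the same

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Replace ReLU's discontinuous derivative (x > 0) with a smooth surrogate.
        return grad_output * torch.sigmoid(SmoothBackwardReLU.beta * x)


class SmoothReLU(nn.Module):
    """Drop-in replacement for nn.ReLU that only changes the backward pass."""

    def forward(self, x):
        return SmoothBackwardReLU.apply(x)


def replace_relu(model: nn.Module) -> nn.Module:
    """Recursively swap every nn.ReLU module in a surrogate model for SmoothReLU."""
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, SmoothReLU())
        else:
            replace_relu(child)
    return model
```

In this sketch, a transfer attack such as I-FGSM would be run on the modified surrogate model as usual; only the gradients used to craft the perturbation change, which is the ingredient the abstract credits for improved transferability.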
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
